title | abstract | cs | phy | math | stat | quantitative biology | quantitative finance |
---|---|---|---|---|---|---|---|
Mean field limits for nonlinear spatially extended Hawkes processes with exponential memory kernels | We consider spatially extended systems of interacting nonlinear Hawkes
processes modeling large systems of neurons placed in $\mathbb{R}^d$ and study the
associated mean field limits. As the total number of neurons tends to infinity,
we prove that the evolution of a typical neuron, attached to a given spatial
position, can be described by a nonlinear limit differential equation driven by
a Poisson random measure. The limit process is described by a neural field
equation. As a consequence, we provide a rigorous derivation of the neural
field equation based on a thorough mean field analysis.
| 0 | 0 | 1 | 0 | 0 | 0 |
Bias correction in daily maximum and minimum temperature measurements through Gaussian process modeling | The Global Historical Climatology Network-Daily database contains, among
other variables, daily maximum and minimum temperatures from weather stations
around the globe. It has long been known that climatological summary statistics based
on daily temperature minima and maxima will not be accurate if the bias due to
the time at which the observations were collected is not accounted for. Despite
some previous work, to our knowledge, there does not exist a satisfactory
solution to this important problem. In this paper, we carefully detail the
problem and develop a novel approach to address it. Our idea is to impute the
hourly temperatures at the location of the measurements by borrowing
information from the nearby stations that record hourly temperatures, which
then can be used to create accurate summaries of temperature extremes. The key
difficulty is that these imputations of the temperature curves must satisfy the
constraint of falling between the observed daily minima and maxima, and
attaining those values at least once in a twenty-four hour period. We develop a
spatiotemporal Gaussian process model for imputing the hourly measurements from
the nearby stations, and then develop a novel and easy-to-implement Markov
chain Monte Carlo technique to sample from the posterior distribution
satisfying the above constraints. We validate our imputation model using hourly
temperature data from four meteorological stations in Iowa, of which one is
hidden and the data replaced with daily minima and maxima, and show that the
imputed temperatures recover the hidden temperatures well. We also demonstrate
that our model can exploit information contained in the data to infer the time
of daily measurements.
| 0 | 0 | 0 | 1 | 0 | 0 |
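The hard part of the imputation described above is honoring the min/max constraints. As a concrete and deliberately naive illustration, the sketch below draws a 24-hour curve from a Gaussian prior and simply rejects draws that violate the constraints; the paper's dedicated MCMC scheme is far more efficient, and every name and number here is a hypothetical stand-in.

```python
# Naive rejection-sampling stand-in for the constrained imputation idea: a
# 24-hour curve drawn from a Gaussian must stay within [t_min, t_max] and
# attain both (up to a tolerance) at least once. Illustrative only; the paper
# uses a purpose-built MCMC sampler instead.
import numpy as np

def sample_constrained_day(mean, cov, t_min, t_max, tol=1.0, max_tries=100000):
    rng = np.random.default_rng(0)
    for _ in range(max_tries):
        draw = rng.multivariate_normal(mean, cov)      # 24 hourly values
        within = (draw >= t_min - tol).all() and (draw <= t_max + tol).all()
        attains = draw.min() <= t_min + tol and draw.max() >= t_max - tol
        if within and attains:
            return draw
    raise RuntimeError("constraints too tight for naive rejection sampling")

hours = np.arange(24)
t_min, t_max = 10.0, 22.0
# Smooth daily-cycle prior: coldest near 3 am, warmest near 3 pm (hypothetical).
mean = (t_min + t_max) / 2 - (t_max - t_min) / 2 * np.cos(2 * np.pi * (hours - 3) / 24)
cov = np.exp(-0.5 * ((hours[:, None] - hours[None, :]) / 3.0) ** 2) + 1e-8 * np.eye(24)
print(np.round(sample_constrained_day(mean, cov, t_min, t_max), 1))
```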
Mechanism of the double heterostructure TiO2/ZnO/TiO2 for photocatalytic and photovoltaic applications: A theoretical study | Understanding the mechanism of the heterojunction is an important step
towards controllable and tunable interfaces for photocatalytic and photovoltaic
based devices. To this aim, we propose a thorough study of a double
heterostructure system consisting of two semiconductors with large band gap,
namely, wurtzite ZnO and anatase TiO2. We demonstrate via first-principles
calculations two stable configurations of ZnO/TiO2 interfaces. Our structural
analysis provides key information on the nature of the complex interface and
the lattice distortions occurring when combining these materials. The study of the
electronic properties of the sandwich nanostructure TiO2/ZnO/TiO2 reveals that
the conduction band arises mainly from Ti 3d orbitals, while the valence band
derives mainly from the O 2p states of ZnO, and that the trapped states within the gap region,
frequent in single heterostructures, are substantially reduced in the double
interface system. Moreover, our work explains the origin of certain optical
transitions observed in the experimental studies. Unexpectedly, as a
consequence of different bond distortions, the results on the band alignments
show electron accumulation in the left shell of TiO2 rather than the right one.
Such behavior provides more choice for the sensitization and functionalization
of TiO2 surfaces.
| 0 | 1 | 0 | 0 | 0 | 0 |
Maximum Number of Common Zeros of Homogeneous Polynomials over Finite Fields | About two decades ago, Tsfasman and Boguslavsky conjectured a formula for the
maximum number of common zeros that $r$ linearly independent homogeneous
polynomials of degree $d$ in $m+1$ variables with coefficients in a finite
field with $q$ elements can have in the corresponding $m$-dimensional
projective space. Recently, it has been shown by Datta and Ghorpade that this
conjecture is valid if $r$ is at most $m+1$ and can be invalid otherwise.
Moreover, a new conjecture was proposed for many values of $r$ beyond $m+1$. In
this paper, we prove that this new conjecture holds true for several values of
$r$. In particular, this settles the new conjecture completely when $d=3$. Our
result also includes the positive result of Datta and Ghorpade as a special
case. Further, we determine the maximum number of zeros in certain cases not
covered by the earlier conjectures and results, namely, the case of $d=q-1$ and
of $d=q$. All these results are directly applicable to the determination of the
maximum number of points on sections of Veronese varieties by linear
subvarieties of a fixed dimension, and also the determination of generalized
Hamming weights of projective Reed-Muller codes.
| 0 | 0 | 1 | 0 | 0 | 0 |
Wave-induced vortex recoil and nonlinear refraction | When a vortex refracts surface waves, the momentum flux carried by the waves
changes direction and the waves induce a reaction force on the vortex. We study
experimentally the resulting vortex distortion. Incoming surface gravity waves
impinge on a steady vortex of velocity $U_0$ driven magneto-hydrodynamically at
the bottom of a fluid layer. The waves induce a shift of the vortex center in
the direction transverse to wave propagation, together with a decrease in
surface vorticity. We interpret these two phenomena in the framework introduced
by Craik and Leibovich (1976): we identify the dimensionless Stokes drift
$S=U_s/U_0$ as the relevant control parameter, $U_s$ being the Stokes drift
velocity of the waves. We propose a simple vortex line model which indicates
that the shift of the vortex center originates from a balance between vorticity
advection by the Stokes drift and self-advection of the vortex. The decrease in
surface vorticity is interpreted as a consequence of vorticity expulsion by the
fast Stokes drift, which confines it at depth. This purely hydrodynamic process
is analogous to the magnetohydrodynamic expulsion of magnetic field by a
rapidly moving conductor through the electromagnetic skin effect. We study
vorticity expulsion in the limit of fast Stokes drift and deduce that the
surface vorticity decreases as $1/S$, a prediction which is compatible with the
experimental data. Such wave-induced vortex distortions have important
consequences for the nonlinear regime of wave refraction: the refraction angle
rapidly decreases with wave intensity.
| 0 | 1 | 0 | 0 | 0 | 0 |
Transport of Intensity Equation Microscopy for Dynamic Microtubules | Microtubules (MTs) are filamentous protein polymers roughly 25 nm in
diameter. Ubiquitous in eukaryotes, MTs are well known for their structural
role but also act as actuators, sensors, and, in association with other
proteins, checkpoint regulators. The thin diameter and transparency of
microtubules classify them as sub-resolution phase objects, with concomitant
imaging challenges. Label-free methods for imaging microtubules are preferred
when long exposure times would lead to phototoxicity in fluorescence, or for
retaining more native structure and activity.
This method approaches quantitative phase imaging of MTs as an inverse
problem based on the Transport of Intensity Equation. In a co-registered
comparison of MT signal-to-background-noise ratio, TIE microscopy of MTs shows
an improvement of more than three times over video-enhanced bright-field
imaging.
This method avoids the anisotropy caused by prisms used in differential
interference contrast and takes only two defocused images as input. Unlike
other label-free techniques for imaging microtubules, in TIE microscopy
background removal is a natural consequence of taking the difference of two
defocused images, so the need to frequently update a background image is
eliminated.
| 0 | 1 | 0 | 0 | 0 | 0 |
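For reference, the equation this method inverts is the standard Transport of Intensity Equation from the general optics literature (the abstract itself does not restate it): with wavenumber $k = 2\pi/\lambda$, intensity $I$, and phase $\phi$,

```latex
% Standard TIE (quoted from the general optics literature for context): the
% axial intensity derivative, estimated by a finite difference of two
% defocused images, is linked to the transverse phase gradient; solving this
% elliptic equation recovers the phase.
\[
  -k \, \frac{\partial I(x, y; z)}{\partial z}
  = \nabla_{\!\perp} \cdot \bigl( I(x, y; z) \, \nabla_{\!\perp} \phi(x, y; z) \bigr)
\]
```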
First-principles prediction of the stacking fault energy of gold at finite temperature | The intrinsic stacking fault energy (ISFE) $\gamma$ is a material parameter
fundamental to the discussion of plastic deformation mechanisms in metals.
Here, we scrutinize the temperature dependence of the ISFE of Au through
accurate first-principles derived Helmholtz free energies employing both the
supercell approach and the axial Ising model (AIM). A significant decrease of
the ISFE with temperature, $-(36$-$39)$\,\% from 0 to 890\,K depending on the
treatment of thermal expansion, is revealed, which matches the estimate based
on the experimental temperature coefficient $d \gamma / d T$ closely. We show
that this decrease predominantly originates from the excess vibrational
entropy at the stacking fault layer, although the contribution arising from the
static lattice expansion compensates it by approximately 60\,\%. Electronic
excitations are found to be of minor importance for the ISFE change with
temperature. We show that the Debye model in combination with the AIM captures
the correct sign but significantly underestimates the magnitude of the
vibrational contribution to $\gamma(T)$. The hexagonal close-packed (hcp) and
double hcp structures are established as metastable phases of Au. Our results
demonstrate that quantitative agreement with experiments can be obtained if all
relevant temperature-induced excitations are considered in first-principles
modeling and that the temperature dependence of the ISFE is substantial enough
to be taken into account in crystal plasticity modeling.
| 0 | 1 | 0 | 0 | 0 | 0 |
Mathematical Programming formulations for the efficient solution of the $k$-sum approval voting problem | In this paper we address the problem of electing a committee among a set of
$m$ candidates and on the basis of the preferences of a set of $n$ voters. We
consider the approval voting method in which each voter can approve as many
candidates as she/he likes by expressing a preference profile (boolean
$m$-vector). In order to elect a committee, a voting rule must be established
to `transform' the $n$ voters' profiles into a winning committee. The problem
is widely studied in voting theory; for a variety of voting rules the problem
was shown to be computationally difficult and approximation algorithms and
heuristic techniques were proposed in the literature. In this paper we follow
an Ordered Weighted Averaging approach and study the $k$-sum approval voting
(optimization) problem in the general case $1 \leq k <n$. For this problem we
provide different mathematical programming formulations that allow us to solve
it in an exact solution framework. We provide computational results showing
that our approach is efficient for medium-size test problems ($n$ up to 200,
$m$ up to 60) since in all tested cases it was able to find the exact optimal
solution in very short computational times.
| 1 | 0 | 1 | 0 | 0 | 0 |
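To make the objective concrete: in the $k$-sum (OWA) reading used above, a committee is scored by the sum of the $k$ largest per-voter values. The brute-force sketch below evaluates such an objective, taking as an assumed stand-in cost the number of a voter's approved candidates left out of the committee; the paper instead solves the problem with mathematical programming formulations, and the ballot data here are hypothetical.

```python
# Brute-force illustration of a k-sum approval objective: score a committee
# by the sum of the k largest per-voter dissatisfaction values, then pick the
# committee minimizing that score. The dissatisfaction measure (approved
# candidates missing from the committee) is an assumption for illustration;
# the paper uses MILP formulations rather than enumeration.
from itertools import combinations

ballots = [{0, 1}, {1, 2}, {2, 3}, {0, 3}, {1, 3}]   # n = 5 voters, m = 4

def k_sum_score(committee, k):
    losses = sorted((len(b - set(committee)) for b in ballots), reverse=True)
    return sum(losses[:k])   # sum of the k largest dissatisfactions

best = min(combinations(range(4), 2), key=lambda c: k_sum_score(c, k=3))
print(best, k_sum_score(best, k=3))   # (1, 3) 3 for this toy instance
```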
DAGs with NO TEARS: Continuous Optimization for Structure Learning | Estimating the structure of directed acyclic graphs (DAGs, also known as
Bayesian networks) is a challenging problem since the search space of DAGs is
combinatorial and scales superexponentially with the number of nodes. Existing
approaches rely on various local heuristics for enforcing the acyclicity
constraint. In this paper, we introduce a fundamentally different strategy: We
formulate the structure learning problem as a purely \emph{continuous}
optimization problem over real matrices that avoids this combinatorial
constraint entirely. This is achieved by a novel characterization of acyclicity
that is not only smooth but also exact. The resulting problem can be
efficiently solved by standard numerical algorithms, which also makes
implementation effortless. The proposed method outperforms existing ones,
without imposing any structural assumptions on the graph such as bounded
treewidth or in-degree. Code implementing the proposed algorithm is open-source
and publicly available at this https URL.
| 0 | 0 | 0 | 1 | 0 | 0 |
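The characterization referred to above is, in the published NOTEARS paper, $h(W) = \operatorname{tr}(e^{W \circ W}) - d$, which vanishes exactly when the weighted adjacency matrix $W$ encodes a DAG. A minimal sketch:

```python
# Minimal sketch of the smooth-and-exact acyclicity function from the NOTEARS
# paper: h(W) = tr(exp(W ∘ W)) - d, with ∘ the Hadamard product. h(W) = 0 iff
# W encodes a DAG, and h is differentiable, so the acyclicity constraint can
# be handled by standard continuous optimizers.
import numpy as np
from scipy.linalg import expm

def acyclicity(W: np.ndarray) -> float:
    return np.trace(expm(W * W)) - W.shape[0]   # elementwise square = W ∘ W

cyclic = np.array([[0.0, 1.0], [1.0, 0.0]])    # 1 -> 2 -> 1: a cycle
acyclic = np.array([[0.0, 1.0], [0.0, 0.0]])   # 1 -> 2: a DAG
print(acyclicity(cyclic))    # > 0: constraint violated
print(acyclicity(acyclic))   # 0.0: h vanishes on DAGs
```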
Throughput-Optimal Broadcast in Wireless Networks with Point-to-Multipoint Transmissions | We consider the problem of efficient packet dissemination in wireless
networks with point-to-multipoint wireless broadcast channels. We propose a
dynamic policy, which achieves the broadcast capacity of the network. This
policy is obtained by first transforming the original multi-hop network into a
precedence-relaxed virtual single-hop network and then finding an optimal
broadcast policy for the relaxed network. The resulting policy is shown to be
throughput-optimal for the original wireless network using a sample-path
argument. We also prove the NP-completeness of the finite-horizon broadcast
problem, which is in contrast to the polynomial-time solvability of the
problem with point-to-point channels. Illustrative simulation results
demonstrate the efficacy of the proposed broadcast policy in achieving the full
broadcast capacity with low delay.
| 1 | 0 | 1 | 0 | 0 | 0 |
The Resilience of Life to Astrophysical Events | Much attention has been given in the literature to the effects of
astrophysical events on human and land-based life. However, little has been
said about the resilience of life itself. Here we instead explore the
statistics of events that completely sterilise Earth-like planets with
radii in the range $0.5-1.5 R_{Earth}$ and temperatures of $\sim 300 \;
\text{K}$, eradicating all forms of life. We consider the relative likelihood
of complete global sterilisation events from four astrophysical sources --
supernovae, gamma-ray bursts, large asteroid impacts, and passing-by stars. To
assess such probabilities we consider what cataclysmic event could lead to the
annihilation of not just human life, but also extremophiles, through the
boiling of all water in Earth's oceans. Surprisingly we find that although
human life is somewhat fragile to nearby events, the resilience of Ecdysozoa
such as \emph{Milnesium tardigradum} renders global sterilisation an unlikely
event.
| 0 | 1 | 0 | 0 | 0 | 0 |
Kinky DNA in solution: Small angle scattering study of a nucleosome positioning sequence | DNA is a flexible molecule, but the degree of its flexibility is subject to
debate. The commonly accepted persistence length of $l_p \approx 500\,$\AA\ is
inconsistent with recent studies on short-chain DNA that show much greater
flexibility but do not probe its origin. We have performed X-ray and neutron
small-angle scattering on a short DNA sequence containing a strong nucleosome
positioning element, and analyzed the results using a modified Kratky-Porod
model to determine possible conformations. Our results support a hypothesis
from Crick and Klug in 1975 that some DNA sequences in solution can have sharp
kinks, potentially resolving the discrepancy. Our conclusions are supported by
measurements on a radiation-damaged sample, where single-strand breaks lead to
increased flexibility, and by an analysis of data from another sequence, which
does not have kinks but where our method can detect a locally enhanced
flexibility due to an $AT$-domain.
| 0 | 0 | 0 | 0 | 1 | 0 |
Explaining Aviation Safety Incidents Using Deep Temporal Multiple Instance Learning | Although aviation accidents are rare, safety incidents occur more frequently
and require a careful analysis to detect and mitigate risks in a timely manner.
Analyzing safety incidents using operational data and producing event-based
explanations is invaluable to airline companies as well as to governing
organizations such as the Federal Aviation Administration (FAA) in the United
States. However, this task is challenging because of the complexity involved in
mining multi-dimensional heterogeneous time series data, the lack of
time-step-wise annotation of events in a flight, and the lack of scalable tools
to perform analysis over a large number of events. In this work, we propose a
precursor mining algorithm that identifies events in the multidimensional time
series that are correlated with the safety incident. Precursors are valuable for
system health and safety monitoring and for explaining and forecasting safety
incidents. Current methods suffer from poor scalability to high-dimensional
time series data and are inefficient in capturing temporal behavior. We propose
an approach combining multiple-instance learning (MIL) and deep recurrent
neural networks (DRNN) to take advantage of MIL's ability to learn using weakly
supervised data and DRNN's ability to model temporal behavior. We describe the
algorithm, the data, the intuition behind taking a MIL approach, and a
comparative analysis of the proposed algorithm with baseline models. We also
discuss the application to a real-world aviation safety problem using data from
a commercial airline company and discuss the model's abilities and
shortcomings, with some final remarks about possible deployment directions.
| 1 | 0 | 0 | 1 | 0 | 0 |
Normalized Direction-preserving Adam | Adaptive optimization algorithms, such as Adam and RMSprop, have shown better
optimization performance than stochastic gradient descent (SGD) in some
scenarios. However, recent studies show that they often lead to worse
generalization performance than SGD, especially for training deep neural
networks (DNNs). In this work, we identify the reasons that Adam generalizes
worse than SGD, and develop a variant of Adam to eliminate the generalization
gap. The proposed method, normalized direction-preserving Adam (ND-Adam),
enables more precise control of the direction and step size for updating weight
vectors, leading to significantly improved generalization performance.
Following a similar rationale, we further improve the generalization
performance in classification tasks by regularizing the softmax logits. By
bridging the gap between SGD and Adam, we also hope to shed light on why
certain optimization algorithms generalize better than others.
| 1 | 0 | 0 | 1 | 0 | 0 |
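As a rough, hedged illustration of what "direction-preserving" control of a weight vector can look like (one plausible reading, not the authors' exact update rule): remove the radial component of the gradient, adapt the step with per-vector moments, and re-normalize so only the direction of the weight vector changes.

```python
# Hedged sketch, NOT the exact ND-Adam algorithm: an Adam-like step for one
# weight vector w that (i) projects out the component of the gradient along w,
# (ii) keeps a scalar second moment per vector, and (iii) re-normalizes w so
# its norm stays fixed and only its direction is updated. Hyperparameters and
# design details here are illustrative assumptions.
import numpy as np

def direction_preserving_step(w, grad, m, v, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    g = grad - (grad @ w) / (w @ w) * w          # tangential gradient only
    m = b1 * m + (1 - b1) * g                    # first moment (vector)
    v = b2 * v + (1 - b2) * float(g @ g)         # second moment (scalar)
    w = w - lr * m / (np.sqrt(v) + eps)
    return w / np.linalg.norm(w), m, v           # restore unit norm

w, m, v = np.array([1.0, 0.0]), np.zeros(2), 0.0
w, m, v = direction_preserving_step(w, np.array([0.3, -0.4]), m, v)
print(w, np.linalg.norm(w))   # direction nudged, norm exactly 1
```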
Learning Deep CNN Denoiser Prior for Image Restoration | Model-based optimization methods and discriminative learning methods have
been the two dominant strategies for solving various inverse problems in
low-level vision. Typically, those two kinds of methods have their respective
merits and drawbacks, e.g., model-based optimization methods are flexible for
handling different inverse problems but are usually time-consuming with
sophisticated priors to achieve good performance; meanwhile,
discriminative learning methods have fast testing speed but their application
range is greatly restricted by the specialized task. Recent works have revealed
that, with the aid of variable splitting techniques, a denoiser prior can be
plugged in as a modular part of model-based optimization methods to solve other
inverse problems (e.g., deblurring). Such an integration offers a considerable
advantage when the denoiser is obtained via discriminative learning. However,
the study of integrating fast discriminative denoiser priors is still
lacking. To this end, this paper aims to train a set of fast and effective CNN
(convolutional neural network) denoisers and integrate them into model-based
optimization methods to solve other inverse problems. Experimental results
demonstrate that the learned set of denoisers not only achieves promising
Gaussian denoising results but also can be used as a prior to deliver good
performance for various low-level vision applications.
| 1 | 0 | 0 | 0 | 0 | 0 |
The Challenge of Spin-Orbit-Tuned Ground States in Iridates | Effects of spin-orbit interactions in condensed matter are an important and
rapidly evolving topic. Strong competition between spin-orbit, on-site Coulomb
and crystalline electric field interactions in iridates drives exotic quantum
states that are unique to this group of materials. In particular, the $J_{\rm eff} =
1/2$ Mott state served as an early signal that the combined effect of strong
spin-orbit and Coulomb interactions in iridates has unique, intriguing
consequences. In this Key Issues Review, we survey some current experimental
studies of iridates. In essence, these materials tend to defy conventional
wisdom: absence of conventional correlations between magnetic and insulating
states, avoidance of metallization at high pressures, S-shaped I-V
characteristic, emergence of an odd-parity hidden order, etc. It is
particularly intriguing that there exist conspicuous discrepancies between
current experimental results and theoretical proposals that address
superconducting, topological and quantum spin liquid phases. This class of
materials, in which the lattice degrees of freedom play a critical role seldom
seen in other materials, evidently presents some profound intellectual
challenges that call for more investigations both experimentally and
theoretically. Physical properties unique to these materials may help unlock a
world of possibilities for functional materials and devices. We emphasize that,
given the rapidly developing nature of this field, this Key Issues Review is by
no means an exhaustive report of the current state of experimental studies of
iridates.
| 0 | 1 | 0 | 0 | 0 | 0 |
Derived Picard groups of preprojective algebras of Dynkin type | In this paper, we study two-sided tilting complexes of preprojective algebras
of Dynkin type. We construct the most fundamental class of two-sided tilting
complexes, which has a group structure by derived tensor products and induces a
group of auto-equivalences of the derived category. We show that the group
structure of the two-sided tilting complexes is isomorphic to the braid group
of the corresponding folded graph. Moreover, we show that these two-sided
tilting complexes induce tilting mutation and that any tilting complex is given as
a derived tensor product of them. Using these results, we determine the
derived Picard group of preprojective algebras for types $A$ and $D$.
| 0 | 0 | 1 | 0 | 0 | 0 |
Semi-supervised Learning in Network-Structured Data via Total Variation Minimization | We propose and analyze a method for semi-supervised learning from
partially-labeled network-structured data. Our approach is based on a graph
signal recovery interpretation under a clustering hypothesis that labels of
data points belonging to the same well-connected subset (cluster) are similar
valued. This leads naturally to learning the labels via total variation (TV)
minimization, which we solve by applying a recently proposed primal-dual method
for non-smooth convex optimization. The resulting algorithm allows for a highly
scalable implementation using message passing over the underlying empirical
graph, which renders the algorithm suitable for big data applications. By
applying tools of compressed sensing, we derive a sufficient condition on the
underlying network structure such that TV minimization recovers clusters in the
empirical graph of the data. In particular, we show that the proposed
primal-dual method amounts to maximizing network flows over the empirical graph
of the dataset. Moreover, the learning accuracy of the proposed algorithm is
linked to the set of network flows between data points having known labels. The
effectiveness and scalability of our approach are verified by numerical
experiments.
| 1 | 0 | 0 | 1 | 0 | 0 |
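As a toy illustration of the TV objective itself (a plain subgradient loop standing in for the paper's primal-dual solver): minimize $\sum_{(i,j)} w_{ij} |x_i - x_j|$ over the empirical graph while pinning the known labels. The graph, weights, and step size below are hypothetical.

```python
# Toy TV minimization on a graph by subgradient descent (a simple stand-in
# for the primal-dual method used in the paper): labels propagate within
# well-connected clusters because the TV term penalizes disagreement across
# strong edges. All data here are hypothetical.
import numpy as np

def tv_labels(n, edges, weights, known, iters=2000, step=0.05):
    x = np.zeros(n)
    for _ in range(iters):
        for i, y in known.items():               # pin the labeled nodes
            x[i] = y
        g = np.zeros(n)
        for (i, j), w in zip(edges, weights):
            s = w * np.sign(x[i] - x[j])         # subgradient of w|x_i - x_j|
            g[i] += s
            g[j] -= s
        x -= step * g
    for i, y in known.items():
        x[i] = y
    return x

# Two triangles joined by one weak edge, one known label per cluster.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
weights = [1, 1, 1, 1, 1, 1, 0.05]
print(np.round(tv_labels(6, edges, weights, {0: 1.0, 5: -1.0}), 2))
```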
Susceptibility of Methicillin Resistant Staphylococcus aureus to Vancomycin using Liposomal Drug Delivery System | Staphylococcus aureus, responsible for nosocomial infections, is a significant
threat to public health. The increasing resistance of S. aureus to various
antibiotics has made it a prime focus for research on designing an
appropriate drug delivery system. The emergence of Methicillin Resistant
Staphylococcus aureus (MRSA) in 1961 necessitated the use of vancomycin, "the
drug of last resort", to treat these infections. Unfortunately, S. aureus has
already started gaining resistance to vancomycin. Liposome encapsulation of
drugs has previously been shown to provide an efficient method of microbial
inhibition in many cases. We have studied the effect of liposome-encapsulated
vancomycin on MRSA and evaluated the antibacterial activity of the
liposome-entrapped drug in comparison to that of the free drug based on the
minimum inhibitory concentration (MIC) of the drug. The MIC for liposomal
vancomycin was found to be about half that of free vancomycin. The growth
response of MRSA showed that the liposomal vancomycin induced the culture to go
into a bacteriostatic state and that phagocytic killing was enhanced. Administration
of the antibiotic encapsulated in liposomes was thus shown to greatly improve
drug delivery as well as to overcome the drug resistance of MRSA.
| 0 | 0 | 0 | 0 | 1 | 0 |
Friendships, Rivalries, and Trysts: Characterizing Relations between Ideas in Texts | Understanding how ideas relate to each other is a fundamental question in
many domains, ranging from intellectual history to public communication.
Because ideas are naturally embedded in texts, we propose the first framework
to systematically characterize the relations between ideas based on their
occurrence in a corpus of documents, independent of how these ideas are
represented. Combining two statistics --- cooccurrence within documents and
prevalence correlation over time --- our approach reveals a number of different
ways in which ideas can cooperate and compete. For instance, two ideas can
closely track each other's prevalence over time, and yet rarely cooccur, almost
like a "cold war" scenario. We observe that pairwise cooccurrence and
prevalence correlation exhibit different distributions. We further demonstrate
that our approach is able to uncover intriguing relations between ideas through
in-depth case studies on news articles and research papers.
| 1 | 1 | 0 | 0 | 0 | 0 |
Deep Feature Learning for Graphs | This paper presents a general graph representation learning framework called
DeepGL for learning deep node and edge representations from large (attributed)
graphs. In particular, DeepGL begins by deriving a set of base features (e.g.,
graphlet features) and automatically learns a multi-layered hierarchical graph
representation where each successive layer leverages the output from the
previous layer to learn features of a higher-order. Contrary to previous work,
DeepGL learns relational functions (each representing a feature) that
generalize across networks and are therefore useful for graph-based transfer
learning tasks. Moreover, DeepGL naturally supports attributed graphs, learns
interpretable features, and is space-efficient (by learning sparse feature
vectors). In addition, DeepGL is expressive, flexible with many interchangeable
components, efficient with a time complexity of $\mathcal{O}(|E|)$, and
scalable for large networks via an efficient parallel implementation. Compared
with the state-of-the-art method, DeepGL is (1) effective for across-network
transfer learning tasks and attributed graph representation learning, (2)
space-efficient requiring up to 6x less memory, (3) fast with up to 182x
speedup in runtime performance, and (4) accurate with an average improvement of
20% or more on many learning tasks.
| 1 | 0 | 0 | 1 | 0 | 0 |
Pseudospectral Model Predictive Control under Partially Learned Dynamics | Trajectory optimization of a controlled dynamical system is an essential part
of autonomy; however, many trajectory optimization techniques are limited by the
fidelity of the underlying parametric model. In the field of robotics, a lack
of model knowledge can be overcome with machine learning techniques, utilizing
measurements to build a dynamical model from the data. This paper aims to take
the middle ground between these two approaches by introducing a semi-parametric
representation of the underlying system dynamics. Our goal is to leverage the
considerable information contained in a traditional physics based model and
combine it with a data-driven, non-parametric regression technique known as a
Gaussian Process. Integrating this semi-parametric model with model predictive
pseudospectral control, we demonstrate this technique on both a cart pole and
quadrotor simulation with unmodeled damping and parametric error. In order to
manage parametric uncertainty, we introduce an algorithm that utilizes Sparse
Spectrum Gaussian Processes (SSGP) for online learning after each rollout. We
implement this online learning technique on a cart pole and quadrotor, then
demonstrate the use of online learning and obstacle avoidance for Dubins
vehicle dynamics.
| 1 | 0 | 0 | 0 | 0 | 0 |
Phonon-assisted oscillatory exciton dynamics in monolayer MoSe2 | In monolayer semiconductor transition metal dichalcogenides, the
exciton-phonon interaction is expected to strongly affect the photocarrier
dynamics. Here, we report on an unusual oscillatory enhancement of the neutral
exciton photoluminescence with the excitation laser frequency in monolayer
MoSe2. The frequency of oscillation matches that of the M-point longitudinal
acoustic phonon, LA(M). Oscillatory behavior is also observed in the
steady-state emission linewidth and in time-resolved photoluminescence
excitation data, which reveals variation with excitation energy in the exciton
lifetime. These results clearly expose the key role played by phonons in the
exciton formation and relaxation dynamics of two-dimensional van der Waals
semiconductors.
| 0 | 1 | 0 | 0 | 0 | 0 |
Manifold Regularization for Kernelized LSTD | Policy evaluation, i.e., value function or Q-function approximation, is a key
procedure in reinforcement learning (RL). It is a necessary component of policy
iteration and can be used for variance reduction in policy gradient methods.
Therefore its quality has a significant impact on most RL algorithms. Motivated
by manifold regularized learning, we propose a novel kernelized policy
evaluation method that takes advantage of the intrinsic geometry of the state
space learned from data, in order to achieve better sample efficiency and
higher accuracy in Q-function approximation. Applying the proposed method in
the Least-Squares Policy Iteration (LSPI) framework, we observe superior
performance compared to widely used parametric basis functions on two standard
benchmarks in terms of policy quality.
| 1 | 0 | 0 | 1 | 0 | 0 |
Normality of the Thue--Morse sequence along Piatetski-Shapiro sequences | We prove that for $1<c<4/3$ the subsequence of the Thue--Morse sequence
$\mathbf t$ indexed by $\lfloor n^c\rfloor$ defines a normal sequence, that is,
each finite sequence $(\varepsilon_0,\ldots,\varepsilon_{T-1})\in \{0,1\}^T$
occurs as a contiguous subsequence of the sequence $n\mapsto \mathbf
t\left(\lfloor n^c\rfloor\right)$ with asymptotic frequency $2^{-T}$.
| 0 | 0 | 1 | 0 | 0 | 0 |
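A quick empirical check of the claimed frequency is easy to run: generate $t(\lfloor n^c \rfloor)$ for a moderate $c < 4/3$ and count block frequencies, which should approach $2^{-T}$. The parameters below are arbitrary test choices for illustration.

```python
# Empirical illustration of the normality statement: estimate the frequency
# of every length-T block of n -> t(floor(n^c)) and compare against the
# asymptotic value 2^{-T} claimed above. c, T, N are arbitrary test choices.
from collections import Counter

def thue_morse(n: int) -> int:
    return bin(n).count("1") % 2       # parity of the binary digit sum of n

c, T, N = 1.2, 3, 200000
s = [thue_morse(int(n ** c)) for n in range(1, N + 1)]
windows = len(s) - T + 1
blocks = Counter(tuple(s[i:i + T]) for i in range(windows))
for b in sorted(blocks):
    print(b, round(blocks[b] / windows, 4))   # each should be near 2^-T = 0.125
```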
Radio Weak Lensing Shear Measurement in the Visibility Domain - II. Source Extraction | This paper extends the method introduced in Rivi et al. (2016b) to measure
galaxy ellipticities in the visibility domain for radio weak lensing surveys.
In that paper we focused on the development and testing of the method for the
simple case of individual galaxies located at the phase centre, and proposed to
extend it to the realistic case of many sources in the field of view by
isolating visibilities of each source with a faceting technique. In this second
paper we present a detailed algorithm for source extraction in the visibility
domain and show its effectiveness as a function of the source number density by
running simulations of SKA1-MID observations in the band 950-1150 MHz and
comparing original and measured values of galaxies' ellipticities. Shear
measurements from a realistic population of 10^4 galaxies randomly located in a
field of view of 1 deg^2 (i.e. the source density expected for the current
radio weak lensing survey proposal with SKA1) are also performed. At SNR >= 10,
the multiplicative bias is only a factor of 1.5 worse than that found when
analysing individual sources, and is still comparable to the bias values
reported for similar measurement methods at optical wavelengths. The additive
bias is unchanged from the case of individual sources, but is significantly
larger than typically found in optical surveys. This bias depends on the shape
of the uv coverage and we suggest that a uv-plane weighting scheme to produce a
more isotropic shape could reduce and control additive bias.
| 0 | 1 | 0 | 0 | 0 | 0 |
Loop Representation of Wigner's Little Groups | Wigner's little groups are the subgroups of the Lorentz group whose
transformations leave the momentum of a given particle invariant. They thus
define the internal space-time symmetries of relativistic particles. These
symmetries take different mathematical forms for massive and for massless
particles. However, it is shown that one unified representation can be
constructed using a graphical description. This graphical approach allows us
to describe vividly parity, time reversal, and charge conjugation of the
internal symmetry groups. As for the language of group theory, the two-by-two
representation is used throughout the paper. While this two-by-two
representation is for spin-1/2 particles, it is shown that representations can
be constructed for spin-0, spin-1, and higher-spin particles, for both the
massive and massless cases. It is also shown
that the four-by-four Dirac matrices constitute a two-by-two representation of
Wigner's little group.
| 0 | 1 | 0 | 0 | 0 | 0 |
MOA Data Reveal a New Mass, Distance, and Relative Proper Motion for Planetary System OGLE-2015-BLG-0954L | We present the MOA Collaboration light curve data for planetary microlensing
event OGLE-2015-BLG-0954, which was previously announced in a paper by the
KMTNet and OGLE Collaborations. The MOA data cover the caustic exit, which was
not covered by the KMTNet or OGLE data, and they provide a more reliable
measurement of the finite source effect. The MOA data also provide a new source
color measurement that reveals a lens-source relative proper motion of
$\mu_{\rm rel} = 11.8\pm 0.8\,$mas/yr, which compares to the value of $\mu_{\rm
rel} = 18.4\pm 1.7\,$mas/yr reported in the KMTNet-OGLE paper. This new MOA
value for $\mu_{\rm rel}$ has an a priori probability that is a factor of $\sim
100$ times larger than the previous value, and it does not require a lens
system distance of $D_L < 1\,$kpc. Based on the corrected source color, we find
that the lens system consists of a planet of mass $3.4^{+3.7}_{-1.6} M_{\rm
Jup}$ orbiting a $0.30^{+0.34}_{-0.14}M_\odot$ star at an orbital separation of
$2.1^{+2.2}_{-1.0}\,$AU and a distance of $1.2^{+1.1}_{-0.5}\,$kpc.
| 0 | 1 | 0 | 0 | 0 | 0 |
Ground state degeneracy in quantum spin systems protected by crystal symmetries | We develop a no-go theorem for two-dimensional bosonic systems with crystal
symmetries: if there is a half-integer spin at a rotation center, where the
point-group symmetry is $\mathbb D_{2,4,6}$, such a system must have a
ground-state degeneracy protected by the crystal symmetry. Such a degeneracy
indicates either a broken-symmetry state or an unconventional state of matter.
Compared to the Lieb-Schultz-Mattis theorem, our result counts the spin at
each rotation center, instead of the total spin per unit cell, and therefore
also applies to certain systems with an even number of half-integer spins per
unit cell.
| 0 | 1 | 0 | 0 | 0 | 0 |
Introspection: Accelerating Neural Network Training By Learning Weight Evolution | Neural Networks are function approximators that have achieved
state-of-the-art accuracy in numerous machine learning tasks. In spite of their
great success in terms of accuracy, their large training time makes it
difficult to use them for various tasks. In this paper, we explore the idea of
learning weight evolution pattern from a simple network for accelerating
training of novel neural networks. We use a neural network to learn the
training pattern from MNIST classification and utilize it to accelerate
training of neural networks used for CIFAR-10 and ImageNet classification. Our
method has a low memory footprint and is computationally efficient. This method
can also be used with other optimizers to give faster convergence. The results
indicate a general trend in the weight evolution during training of neural
networks.
| 1 | 0 | 0 | 0 | 0 | 0 |
Single crystal polarized neutron diffraction study of the magnetic structure of HoFeO$_3$ | Polarised neutron diffraction measurements have been made on HoFeO$_3$ single
crystals magnetised in both the [001] and [100] directions ($Pbnm$ setting).
The polarisation dependencies of Bragg reflection intensities were measured
both with a high field of H = 9 T parallel to [001] at T = 70 K and with the
lower field H = 0.5 T parallel to [100] at T = 5, 15, 25~K. A Fourier
projection of magnetization induced parallel to [001], made using the $hk0$
reflections measured in 9~T, indicates that almost all of it is due to
alignment of Ho moments. Further analysis of the asymmetries of general
reflections in these data showed that although, at 70~K, 9~T applied parallel
to [001] hardly perturbs the antiferromagnetic order of the Fe sublattices, it
induces significant antiferromagnetic order of the Ho sublattices in the
the $x$-$y$ plane, with the antiferromagnetic components of moment having the
same order of magnitude as the induced ferromagnetic ones. Strong intensity
asymmetries measured in the low temperature $\Gamma_2$ structure with a lower
field, 0.5 T $\parallel$ [100], allowed the variation of the ordered components
of the Ho and Fe moments to be followed. Their absolute orientations, in the
$180^\circ$ domain stabilised by the field, were determined relative to the
distorted perovskite structure. This relationship fixes the sign of the
Dzyaloshinskii-Moriya (D-M) interaction which leads to the weak ferromagnetism.
Our results indicate that the combination of strong y-axis anisotropy of the Ho
moments and Ho-Fe exchange interactions breaks the centrosymmetry of the
structure and could lead to ferroelectric polarization.
| 0 | 1 | 0 | 0 | 0 | 0 |
Rigidity of volume-minimizing hypersurfaces in Riemannian 5-manifolds | In this paper we generalize the main result of [4] for manifolds that are not
necessarily Einstein. In fact, we obtain an upper bound for the volume of a
locally volume-minimizing closed hypersurface $\Sigma$ of a Riemannian
5-manifold $M$ with scalar curvature bounded from below by a positive constant
in terms of the total traceless Ricci curvature of $\Sigma$. Furthermore, if
$\Sigma$ saturates the respective upper bound and $M$ has nonnegative Ricci
curvature, then $\Sigma$ is isometric to $\mathbb{S}^4$ up to scaling and $M$
splits in a neighborhood of $\Sigma$. Also, we obtain a rigidity result for the
Riemannian cover of $M$ when $\Sigma$ minimizes the volume in its homotopy
class and saturates the upper bound.
| 0 | 0 | 1 | 0 | 0 | 0 |
Dolha - an Efficient and Exact Data Structure for Streaming Graphs | A streaming graph is a graph formed by a sequence of incoming edges with time
stamps. Unlike static graphs, a streaming graph is highly dynamic and
time-dependent. In the real world, high-volume, high-velocity streaming graphs
such as internet traffic data, social network communication data and financial
transfer data bring challenges to classic graph data structures. We
present a new data structure, the double orthogonal list in hash table (Dolha),
which is a high-speed, memory-efficient graph structure applicable to
streaming graphs. Dolha has constant time cost per edge and near-linear
space cost, so it can hold billions of edges in memory and
process an incoming edge in nanoseconds. Dolha also has linear time cost for
neighborhood queries, which allows it to support most graph algorithms
without extra cost. We also present a persistent structure based on Dolha that
can handle sliding-window updates and time-related queries.
| 1 | 0 | 0 | 0 | 0 | 0 |
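To make the claimed costs concrete, here is a minimal hash-based stand-in (not the actual Dolha layout, which couples orthogonal linked lists with a hash table): edge insertion is an O(1) append behind two hash lookups, and a neighborhood query walks one adjacency list in time linear in the degree.

```python
# Minimal stand-in for the cost profile described above (NOT the actual Dolha
# layout): hash tables of per-node adjacency lists give O(1) amortized insert
# for a time-stamped edge and O(deg) neighborhood queries.
from collections import defaultdict

class StreamingGraph:
    def __init__(self):
        self.out_edges = defaultdict(list)    # u -> [(v, timestamp), ...]
        self.in_edges = defaultdict(list)     # v -> [(u, timestamp), ...]

    def add_edge(self, u, v, ts):
        # O(1) amortized: two hash lookups plus two list appends.
        self.out_edges[u].append((v, ts))
        self.in_edges[v].append((u, ts))

    def successors(self, u):
        # O(deg(u)): walk u's outgoing adjacency list.
        return [v for v, _ in self.out_edges[u]]

g = StreamingGraph()
for ts, (u, v) in enumerate([("a", "b"), ("b", "c"), ("a", "c")]):
    g.add_edge(u, v, ts)
print(g.successors("a"))   # ['b', 'c']
```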
Semi-classical states for the nonlinear Choquard equations: existence, multiplicity and concentration at a potential well | We study existence and multiplicity of semi-classical states for the
nonlinear Choquard equation:
$$ -\varepsilon^2\Delta v+V(x)v =
\frac{1}{\varepsilon^\alpha}(I_\alpha*F(v))f(v) \quad \hbox{in}\ \mathbb{R}^N,
$$ where $N\geq 3$, $\alpha\in (0,N)$, $I_\alpha(x)={A_\alpha\over
|x|^{N-\alpha}}$ is the Riesz potential, $F\in C^1(\mathbb{R},\mathbb{R})$,
$F'(s) = f(s)$ and $\varepsilon>0$ is a small parameter.
We develop a new variational approach and we show the existence of a family
of solutions concentrating, as $\varepsilon\to 0$, at a local minimum of $V(x)$
under general conditions on $F(s)$. Our result is new also for
$f(s)=|s|^{p-2}s$ and applicable for $p\in (\frac{N+\alpha}{N},
\frac{N+\alpha}{N-2})$. Especially, we can give the existence result for
locally sublinear case $p\in (\frac{N+\alpha}{N}, 2)$, which gives a positive
answer to an open problem raised in recent works of Moroz and Van Schaftingen.
We also study the multiplicity of positive single-peak solutions and we show
the existence of at least $\hbox{cupl}(K)+1$ solutions concentrating around $K$
as $\varepsilon\to 0$, where $K\subset \Omega$ is the set of minima of $V(x)$
in a bounded potential well $\Omega$, that is, $m_0 \equiv \inf_{x\in \Omega}
V(x) < \inf_{x\in \partial\Omega}V(x)$ and $K=\{x\in\Omega;\, V(x)=m_0\}$.
| 0 | 0 | 1 | 0 | 0 | 0 |
Results from the first cryogenic NaI detector for the COSINUS project | Recently there has been flourishing and notable interest in the crystalline
scintillator material sodium iodide (NaI) as a target for direct dark matter
searches. This is mainly driven by the long-standing contradictory situation in
the dark matter sector: the positive evidence for the detection of a dark
matter modulation signal claimed by the DAMA/LIBRA collaboration is (under
so-called standard assumptions) inconsistent with the null results reported by
most of the other direct dark matter experiments. We present the results of a
first prototype detector using a new experimental approach in comparison to
\textit{conventional} single-channel NaI scintillation light detectors: a NaI
crystal operated as a scintillating calorimeter at milli-Kelvin temperatures
simultaneously providing a phonon (heat) plus scintillation light signal and
particle discrimination on an event-by-event basis. We evaluate energy
resolution, energy threshold and further performance parameters of this
prototype detector developed within the COSINUS R&D project.
| 0 | 1 | 0 | 0 | 0 | 0 |
Minimal Representations of Lie Algebras With Non-Trivial Levi Decomposition | We obtain minimal dimension matrix representations for each of the Lie
algebras of dimension five, six, seven, and eight obtained by Turkowski that
have a non-trivial Levi decomposition. The key technique involves using a
subspace associated with a particular representation of a semi-simple Lie algebra
to help in the construction of the radical in the putative Levi decomposition.
| 0 | 0 | 1 | 0 | 0 | 0 |
Forecasting day-ahead electricity prices in Europe: the importance of considering market integration | Motivated by the increasing integration among electricity markets, in this
paper we propose two different methods to incorporate market integration in
electricity price forecasting and to improve the predictive performance. First,
we propose a deep neural network that considers features from connected markets
to improve the predictive accuracy in a local market. To measure the importance
of these features, we propose a novel feature selection algorithm that, by
using Bayesian optimization and functional analysis of variance, evaluates the
effect of the features on the algorithm performance. In addition, using market
integration, we propose a second model that, by simultaneously predicting
prices from two markets, improves the forecasting accuracy even further. As a
case study, we consider the electricity market in Belgium and the improvements
in forecasting accuracy when using various French electricity features. We show
that the two proposed models lead to improvements that are statistically
significant. Particularly, due to market integration, the predictive accuracy
is improved from 15.7% to 12.5% sMAPE (symmetric mean absolute percentage
error). In addition, we show that the proposed feature selection algorithm is
able to perform a correct assessment, i.e., to discard the irrelevant features.
| 1 | 0 | 0 | 1 | 0 | 0 |
A Longitudinal Higher-Order Diagnostic Classification Model | Providing diagnostic feedback about growth is crucial to formative decisions
such as targeted remedial instruction or interventions. This paper proposes a
longitudinal higher-order diagnostic classification modeling approach for
measuring growth. The new modeling approach is able to provide quantitative
values of overall and individual growth by constructing a multidimensional
higher-order latent structure to take into account the correlations among
multiple latent attributes that are examined across different occasions. In
addition, potential local item dependence among anchor (or repeated) items can
also be taken into account. Model parameter estimation is explored in a
simulation study. An empirical example is analyzed to illustrate the
applications and advantages of the proposed modeling approach.
| 0 | 0 | 0 | 1 | 0 | 0 |
On Fairness and Calibration | The machine learning community has become increasingly concerned with the
potential for bias and discrimination in predictive models. This has motivated
a growing line of work on what it means for a classification procedure to be
"fair." In this paper, we investigate the tension between minimizing error
disparity across different population groups while maintaining calibrated
probability estimates. We show that calibration is compatible only with a
single error constraint (i.e., equal false-negative rates across groups), and
show that any algorithm that satisfies this relaxation is no better than
randomizing a percentage of predictions for an existing classifier. These
unsettling findings, which extend and generalize existing results, are
empirically confirmed on several datasets.
| 1 | 0 | 0 | 1 | 0 | 0 |
AADS: Augmented Autonomous Driving Simulation using Data-driven Algorithms | Simulation systems have become an essential component in the development and
validation of autonomous driving technologies. The prevailing state-of-the-art
approach for simulation is to use game engines or high-fidelity computer
graphics (CG) models to create driving scenarios. However, creating CG models
and vehicle movements (e.g., the assets for simulation) remains a manual task
that can be costly and time-consuming. In addition, the fidelity of CG images
still lacks the richness and authenticity of real-world images and using these
images for training leads to degraded performance.
In this paper we present a novel approach to address these issues: Augmented
Autonomous Driving Simulation (AADS). Our formulation augments real-world
pictures with a simulated traffic flow to create photo-realistic simulation
images and renderings. More specifically, we use LiDAR and cameras to scan
street scenes. From the acquired trajectory data, we generate highly plausible
traffic flows for cars and pedestrians and compose them into the background.
The composite images can be re-synthesized with different viewpoints and sensor
models. The resulting images are photo-realistic, fully annotated, and ready
for end-to-end training and testing of autonomous driving systems from
perception to planning. We explain our system design and validate our
algorithms with a number of autonomous driving tasks from detection to
segmentation and predictions.
Compared to traditional approaches, our method offers unmatched scalability
and realism. Scalability is particularly important for AD simulation, and we
believe the complexity and diversity of the real world cannot be realistically
captured in a virtual environment. Our augmented approach combines the
flexibility of a virtual environment (e.g., vehicle movements) with the
richness of the real world to allow effective simulation anywhere on Earth.
| 1 | 0 | 0 | 0 | 0 | 0 |
Scheme-theoretic Whitney conditions and applications to tangency of projective varieties | We investigate a scheme-theoretic variant of Whitney condition a. If X is a
projective variety over the field of complex numbers and Y $\subset$ X is a
subvariety, then X generically satisfies the scheme-theoretic Whitney condition
a along Y provided that the projective dual of X is smooth. We give
applications to tangency of projective varieties over C and to convex real
algebraic geometry. In particular, we prove a Bertini-type theorem for
osculating planes of smooth complex space curves and a generalization of a
theorem of Ranestad and Sturmfels describing the algebraic boundary of an
affine compact real variety.
| 0 | 0 | 1 | 0 | 0 | 0 |
Role of 1-D finite size Heisenberg chain in increasing metal to insulator transition temperature in hole rich VO2 | VO2 samples are grown with different oxygen concentrations leading to
different monoclinic, M1 and triclinic, T insulating phases which undergo a
first order metal to insulator transition (MIT) followed by a structural phase
transition (SPT) to the rutile tetragonal phase. The metal insulator transition
temperature (Tc) was found to increase with increasing native defects.
Vanadium vacancies (VV) are envisaged to create local strains in the lattice which
prevent twisting of the V-V dimers, promoting metastable monoclinic, M2 and T
phases at intermediate temperatures. It is argued that MIT is driven by strong
electronic correlation. The low temperature insulating phase can be considered
as a collection of one-dimensional (1-D) half-filled band, which undergoes Mott
transition to 1-D infinitely long Heisenberg spin 1/2 chains leading to
structural distortion due to spin-phonon coupling. Presence of VV creates
localized holes (d0) in the nearest neighbor, thereby fragmenting the spin 1/2
chains at nanoscale, which in turn increase the Tc value more than that of an
infinitely long one. The Tc value scales inversely with the average size of
fragmented Heisenberg spin 1/2 chains following a critical exponent of 2/3,
which is exactly the same predicted theoretically for Heisenberg spin 1/2 chain
at nanoscale undergoing SPT (spin-Peierls transition). Thus, the observation of
MIT and SPT at the same time in VO2 can be explained from our phenomenological
model of reduced 1-D Heisenberg spin 1/2 chains. The reported increase
(decrease) in the Tc value of VO2 upon doping with metals of valency less (more)
than four can also be understood easily, for the first time, with our unified
model by considering finite-size scaling of Heisenberg chains.
| 0 | 1 | 0 | 0 | 0 | 0 |
Electron paramagnetic resonance g-tensors from state interaction spin-orbit coupling density matrix renormalization group | We present a state interaction spin-orbit coupling method to calculate
electron paramagnetic resonance (EPR) $g$-tensors from density matrix
renormalization group wavefunctions. We apply the technique to compute
$g$-tensors for the \ce{TiF3} and \ce{CuCl4^2-} complexes, a [2Fe-2S] model of
the active center of ferredoxins, and a \ce{Mn4CaO5} model of the S2 state of
the oxygen evolving complex. These calculations raise the prospects of
determining $g$-tensors in multireference calculations with a large number of
open shells.
| 0 | 1 | 0 | 0 | 0 | 0 |
A multilayer multiconfiguration time-dependent Hartree study of the nonequilibrium Anderson impurity model at zero temperature | Quantum transport is studied for the nonequilibrium Anderson impurity model
at zero temperature employing the multilayer multiconfiguration time-dependent
Hartree theory within the second quantization representation (ML-MCTDH-SQR) of
Fock space. To adress both linear and nonlinear conductance in the Kondo
regime, two new techniques of the ML-MCTDH-SQR simulation methodology are
introduced: (i) the use of correlated initial states, which is achieved by
imaginary time propagation of the overall Hamiltonian at zero voltage and (ii)
the adoption of the logarithmic discretization of the electronic continuum.
Employing the improved methodology, the signature of the Kondo effect is
analyzed.
| 0 | 1 | 0 | 0 | 0 | 0 |
Robust Identification of Target Genes and Outliers in Triple-negative Breast Cancer Data | Correct classification of breast cancer sub-types is of high importance as it
directly affects the therapeutic options. We focus on triple-negative breast
cancer (TNBC) which has the worst prognosis among breast cancer types. Using
cutting-edge methods from the field of robust statistics, we analyze Breast
Invasive Carcinoma (BRCA) transcriptomic data publicly available from The
Cancer Genome Atlas (TCGA) data portal. Our analysis identifies statistical
outliers that may correspond to misdiagnosed patients. Furthermore, it is
illustrated that classical statistical methods may fail in the presence of
these outliers, prompting the need for robust statistics. Using robust sparse
logistic regression we obtain 36 relevant genes, of which ca. 60\% have been
previously reported as biologically relevant to TNBC, reinforcing the validity
of the method. The remaining 14 genes identified are new potential biomarkers
for TNBC. Of these, JAM3, SFT2D2 and PAPSS1 were previously associated with
breast tumors or other types of cancer. The relevance of these genes is
confirmed by the new DetectDeviatingCells (DDC) outlier detection technique. A
comparison of gene networks on the selected genes showed significant
differences between TNBC and non-TNBC data. The individual role of FOXA1 in
TNBC and non-TNBC, and the strong FOXA1-AGR2 connection in TNBC stand out. Not
only will our results contribute to the understanding of breast cancer and TNBC, and
ultimately to its management; they also show that robust regression and outlier
detection constitute key strategies for coping with high-dimensional clinical data
such as omics data.
| 0 | 0 | 0 | 1 | 0 | 0 |
Semi-Lagrangian one-step methods for two classes of time-dependent partial differential systems | Semi-Lagrangian methods are numerical methods designed to find approximate
solutions to particular time-dependent partial differential equations (PDEs)
that describe the advection process. We propose semi-Lagrangian one-step
methods for numerically solving initial value problems for two general systems
of partial differential equations. Along the characteristic lines of the PDEs,
we use numerical ordinary differential equation (ODE) methods to solve the
PDEs. The main benefit of our methods is the efficient achievement of
high-order local truncation error through the use of Runge-Kutta methods along the
characteristics. In addition, we investigate the numerical analysis of
semi-Lagrangian methods applied to systems of PDEs: stability, convergence, and
maximum error bounds.
| 0 | 0 | 1 | 0 | 0 | 0 |
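A minimal one-dimensional instance of the idea: for the advection equation $u_t + a\,u_x = 0$, each grid point is traced back along its characteristic with an RK4 step and the old solution is interpolated at the foot of the characteristic. The grid, velocity field, and step sizes below are illustrative choices, not the paper's test problems.

```python
# Sketch of a semi-Lagrangian step for u_t + a(x) u_x = 0: trace the
# characteristic dX/dt = a(X) backwards with classical RK4, then interpolate
# the previous solution at the foot of the characteristic. Periodic 1-D toy;
# parameters are illustrative, not taken from the paper.
import numpy as np

def semi_lagrangian_step(u, x, a, dt):
    k1 = a(x)
    k2 = a(x - 0.5 * dt * k1)
    k3 = a(x - 0.5 * dt * k2)
    k4 = a(x - dt * k3)
    foot = x - dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)   # backward RK4 trace
    return np.interp(foot, x, u, period=1.0)            # periodic interpolation

x = np.linspace(0.0, 1.0, 200, endpoint=False)
u = np.exp(-200.0 * (x - 0.3) ** 2)                     # initial Gaussian pulse
for _ in range(50):
    u = semi_lagrangian_step(u, x, lambda y: 1.0 + 0.0 * y, dt=0.01)
print(x[np.argmax(u)])   # pulse advected by a*t = 0.5: peak near x = 0.8
```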
Deep Learning for Forecasting Stock Returns in the Cross-Section | Many studies have applied machine learning techniques,
including neural networks, to predict stock returns. Recently, a method known
as deep learning, which achieves high performance mainly in image recognition
and speech recognition, has attracted attention in the machine learning field.
This paper implements deep learning to predict one-month-ahead stock returns in
the cross-section in the Japanese stock market and investigates the performance
of the method. Our results show that deep neural networks generally outperform
shallow neural networks, and the best networks also outperform representative
machine learning models. These results indicate that deep learning shows
promise as a skillful machine learning method to predict stock returns in the
cross-section.
| 0 | 0 | 0 | 0 | 0 | 1 |
Efficient Nearest-Neighbor Search for Dynamical Systems with Nonholonomic Constraints | Nearest-neighbor search dominates the asymptotic complexity of sampling-based
motion planning algorithms and is often addressed with k-d tree data
structures. While it is generally believed that the expected complexity of
nearest-neighbor queries is $O(\log N)$ in the size of the tree, this paper
reveals that when a classic k-d tree approach is used with sub-Riemannian
metrics, the expected query complexity is in fact $\Theta(N^p \log N)$ for a
number $p \in [0, 1)$ determined by the degree of nonholonomy of the system.
These metrics arise naturally in nonholonomic mechanical systems, including
classic wheeled robot models. To address this negative result, we propose novel
k-d tree build and query strategies tailored to sub-Riemannian metrics and
demonstrate significant improvements in the running time of nearest-neighbor
search queries.
| 1 | 0 | 0 | 0 | 0 | 0 |
Primitivity, Uniform Minimality and State Complexity of Boolean Operations | A minimal deterministic finite automaton (DFA) is uniformly minimal if it
always remains minimal when the final state set is replaced by a non-empty
proper subset of the state set. We prove that a permutation DFA is uniformly
minimal if and only if its transition monoid is a primitive group. We use this
to study boolean operations on group languages, which are recognized by direct
products of permutation DFAs. A direct product cannot be uniformly minimal,
except in the trivial case where one of the DFAs in the product is a one-state
DFA. However, non-trivial direct products can satisfy a weaker condition we
call uniform boolean minimality, where only final state sets used to recognize
boolean operations are considered. We give sufficient conditions for a direct
product of two DFAs to be uniformly boolean minimal, which in turn gives
sufficient conditions for pairs of group languages to have maximal state
complexity under all binary boolean operations ("maximal boolean complexity").
In the case of permutation DFAs with one final state, we give necessary and
sufficient conditions for pairs of group languages to have maximal boolean
complexity. Our results demonstrate a connection between primitive groups and
automata with strong minimality properties.
| 1 | 0 | 1 | 0 | 0 | 0 |
An Empirical Evaluation of Allgatherv on Multi-GPU Systems | Applications for deep learning and big data analytics have compute and memory
requirements that exceed the limits of a single GPU. However, effectively
scaling out an application to multiple GPUs is challenging due to the
complexities of communication between the GPUs, particularly for collective
communication with irregular message sizes. In this work, we provide a
performance evaluation of the Allgatherv routine on multi-GPU systems, focusing
on GPU network topology and the communication library used. We present results
from the OSU micro-benchmark as well as a case study of sparse tensor
factorization, one application that uses Allgatherv with highly irregular
message sizes. We extend our existing tensor factorization tool to run on
systems with different node counts and varying numbers of GPUs per node. We then
evaluate the communication performance of our tool when using traditional MPI,
CUDA-aware MVAPICH and NCCL across a suite of real-world data sets on three
different systems: a 16-node cluster with one GPU per node, NVIDIA's DGX-1 with
8 GPUs and Cray's CS-Storm with 16 GPUs. Our results show that irregularity in
the tensor data sets produce trends that contradict those in the OSU
micro-benchmark, as well as trends that are absent from the benchmark.
| 1 | 0 | 0 | 0 | 0 | 0 |
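For readers unfamiliar with the routine benchmarked above, the following
minimal mpi4py sketch performs an Allgatherv with irregular per-rank message
sizes, the communication pattern the abstract describes; the buffer contents
and sizes are toy assumptions.

```python
# Run with e.g.: mpirun -np 4 python allgatherv_demo.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Irregular message sizes, as in sparse tensor factorization: each rank
# contributes a different number of doubles.
local_n = rank + 1
sendbuf = np.full(local_n, rank, dtype=np.float64)

# Every rank needs all counts to size the receive buffer and displacements.
counts = np.array(comm.allgather(local_n))
displs = np.insert(np.cumsum(counts), 0, 0)[:-1]
recvbuf = np.empty(counts.sum(), dtype=np.float64)

comm.Allgatherv([sendbuf, MPI.DOUBLE],
                [recvbuf, counts, displs, MPI.DOUBLE])
if rank == 0:
    print(recvbuf)
```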
Summary of Topological Study of Chaotic CBC Mode of Operation | In cryptography, block ciphers are the most fundamental elements in many
symmetric-key encryption systems. The Cipher Block Chaining, denoted CBC,
is one of the most famous modes of operation that use a block cipher to
provide confidentiality or authenticity. In this research work, we intend to
summarize our results that have been detailed in our previous series of
articles. The goal of this series has been to obtain a complete topological
study of the CBC block cipher mode of operation after proving its chaotic
behavior according to the well-known definition of Devaney.
| 1 | 1 | 0 | 0 | 0 | 0 |
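For context on the mode of operation analyzed above, here is a minimal sketch
of CBC chaining. The block cipher is replaced by an insecure XOR stand-in
purely to expose the chaining structure; it is not the cipher used in the
cited work.

```python
import os

BLOCK = 16  # block size in bytes

def toy_block_cipher(block: bytes, key: bytes) -> bytes:
    """Stand-in for a real block cipher (e.g. AES): XOR with the key.
    NOT secure; it only makes the chaining structure visible."""
    return bytes(b ^ k for b, k in zip(block, key))

def cbc_encrypt(plaintext: bytes, key: bytes, iv: bytes) -> bytes:
    """CBC: each plaintext block is XORed with the previous ciphertext
    block (the IV for the first block) before being enciphered."""
    assert len(plaintext) % BLOCK == 0
    prev, out = iv, b""
    for i in range(0, len(plaintext), BLOCK):
        block = bytes(p ^ c for p, c in zip(plaintext[i:i + BLOCK], prev))
        prev = toy_block_cipher(block, key)
        out += prev
    return out

key, iv = os.urandom(BLOCK), os.urandom(BLOCK)
ct = cbc_encrypt(b"sixteen byte msg" * 2, key, iv)
print(ct.hex())
```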
Benchmarking five numerical simulation techniques for computing resonance wavelengths and quality factors in photonic crystal membrane line defect cavities | We present numerical studies of two photonic crystal membrane microcavities,
a short line-defect cavity with relatively low quality ($Q$) factor and a
longer cavity with high $Q$. We use five state-of-the-art numerical simulation
techniques to compute the cavity $Q$ factor and the resonance wavelength
$\lambda$ for the fundamental cavity mode in both structures. For each method,
the relevant computational parameters are systematically varied to estimate the
computational uncertainty. We show that some methods are more suitable than
others for treating these challenging geometries.
| 0 | 1 | 0 | 0 | 0 | 0 |
MSO+nabla is undecidable | This paper is about an extension of monadic second-order logic over infinite
trees, which adds a quantifier that says "the set of branches \pi which satisfy
a formula \phi(\pi) has probability one". This logic was introduced by
Michalewski and Mio; we call it MSO+nabla following Shelah and Lehmann. The
logic MSO+nabla subsumes many qualitative probabilistic formalisms, including
qualitative probabilistic CTL, probabilistic LTL, or parity tree automata with
probabilistic acceptance conditions. We consider the decision problem: decide
if a sentence of MSO+nabla is true in the infinite binary tree? For sentences
from the weak variant of this logic (set quantifiers range only over finite
sets) the problem was known to be decidable, but the question for the full
logic remained open. In this paper we show that the problem for the full logic
MSO+nabla is undecidable.
| 1 | 0 | 0 | 0 | 0 | 0 |
Fast Nonconvex Deconvolution of Calcium Imaging Data | Calcium imaging data promises to transform the field of neuroscience by
making it possible to record from large populations of neurons simultaneously.
However, determining the exact moment in time at which a neuron spikes, from a
calcium imaging data set, amounts to a non-trivial deconvolution problem which
is of critical importance for downstream analyses. While a number of
formulations have been proposed for this task in the recent literature, in this
paper we focus on a formulation recently proposed in Jewell and Witten (2017)
which has shown promising initial results. However, this proposal is slow to
run on fluorescence traces of hundreds of thousands of timesteps.
Here we develop a much faster online algorithm for solving the optimization
problem of Jewell and Witten (2017) that can be used to deconvolve a
fluorescence trace of 100,000 timesteps in less than a second. Furthermore,
this algorithm overcomes a technical challenge of Jewell and Witten (2017) by
avoiding the occurrence of so-called "negative" spikes. We demonstrate that
this algorithm has superior performance relative to existing methods for spike
deconvolution on calcium imaging datasets that were recently released as part
of the spikefinder challenge (this http URL).
Our C++ implementation, along with R and python wrappers, is publicly
available on Github at this https URL.
| 0 | 0 | 0 | 1 | 1 | 0 |
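To make the deconvolution setting above concrete, the sketch below simulates
the standard AR(1) calcium model and recovers spikes with a simple
thresholded-innovation baseline that, by construction, never produces negative
spikes. This is a hedged stand-in, not the Jewell and Witten (2017) algorithm
or the paper's fast online solver; the decay, noise level and threshold are
assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate a fluorescence trace under the standard AR(1) calcium model:
# c_t = gamma * c_{t-1} + s_t,  y_t = c_t + noise,  s_t >= 0 spikes.
T, gamma, sigma = 10_000, 0.95, 0.2
spikes = rng.random(T) < 0.01
c = np.zeros(T)
for t in range(1, T):
    c[t] = gamma * c[t - 1] + spikes[t]
y = c + sigma * rng.normal(size=T)

# Baseline deconvolution: the AR(1) innovation y_t - gamma * y_{t-1} is
# (noisily) the spike train; threshold it and keep only positive values,
# so no "negative" spikes can occur.
innov = y[1:] - gamma * y[:-1]
s_hat = np.where(innov > 3 * sigma * np.sqrt(1 + gamma**2), innov, 0.0)
print("true spikes:", spikes.sum(), " detected:", (s_hat > 0).sum())
```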
Thinking Fast and Slow with Deep Learning and Tree Search | Sequential decision making problems, such as structured prediction, robotic
control, and game playing, require a combination of planning policies and
generalisation of those plans. In this paper, we present Expert Iteration
(ExIt), a novel reinforcement learning algorithm which decomposes the problem
into separate planning and generalisation tasks. Planning new policies is
performed by tree search, while a deep neural network generalises those plans.
Subsequently, tree search is improved by using the neural network policy to
guide search, increasing the strength of new plans. In contrast, standard deep
reinforcement learning algorithms rely on a neural network not only to
generalise plans, but to discover them too. We show that ExIt outperforms
REINFORCE for training a neural network to play the board game Hex, and our
final tree search agent, trained tabula rasa, defeats MoHex 1.0, the most
recent Olympiad Champion player to be publicly released.
| 1 | 0 | 0 | 0 | 0 | 0 |
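The expert/apprentice decomposition described above can be illustrated on a
toy problem. In the hedged sketch below, the "expert" is a PUCT-style search
guided by the apprentice's prior and the "apprentice" is a softmax policy fit
to the search's visit counts; the bandit task, simulation budget and learning
rate are assumptions, far simpler than Hex and the full ExIt algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy one-shot "game": K arms with unknown mean rewards.
K = 10
true_means = rng.normal(size=K)
logits = np.zeros(K)  # apprentice parameters

def search(logits, n_sims=200, c=1.0):
    """PUCT-style expert: returns visit counts as an improved policy."""
    prior = np.exp(logits - logits.max()); prior /= prior.sum()
    visits, values = np.zeros(K), np.zeros(K)
    for t in range(1, n_sims + 1):
        q = np.divide(values, visits, out=np.zeros(K), where=visits > 0)
        ucb = q + c * prior * np.sqrt(t) / (1 + visits)
        a = int(np.argmax(ucb))
        visits[a] += 1
        values[a] += true_means[a] + rng.normal()  # noisy rollout reward
    return visits / visits.sum()

for _ in range(20):
    target = search(logits)
    # Imitation step: move the softmax policy toward the search policy
    # (negative gradient of cross-entropy is target - softmax(logits)).
    prior = np.exp(logits - logits.max()); prior /= prior.sum()
    logits += 1.0 * (target - prior)
print("best arm:", true_means.argmax(), "policy argmax:", logits.argmax())
```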
Computer-aided implant design for the restoration of cranial defects | Patient-specific cranial implants are important and necessary in the surgery
of cranial defect restoration. However, traditional methods of manual design of
cranial implants are complicated and time-consuming. Our purpose is to develop
a novel software named EasyCrania to design the cranial implants conveniently
and efficiently. The process can be divided into five steps, which are
mirroring model, clipping surface, surface fitting, the generation of the
initial implant and the generation of the final implant. The main concept of
our method is to use the geometry information of the mirrored model as the base
to generate the final implant. The comparative studies demonstrated that the
EasyCrania can improve the efficiency of cranial implant design significantly.
Moreover, the intra- and inter-rater reliability of the software was stable, at
87.07+/-1.6% and 87.73+/-1.4%, respectively.
| 1 | 0 | 0 | 0 | 0 | 0 |
Dimension-free Information Concentration via Exp-Concavity | Information concentration of probability measures have important implications
in learning theory. Recently, it is discovered that the information content of
a log-concave distribution concentrates around their differential entropy,
albeit with an unpleasant dependence on the ambient dimension. In this work, we
prove that if the potentials of the log-concave distribution are exp-concave,
which is a central notion for fast rates in online and statistical learning,
then the concentration of information can be further improved to depend only on
the exp-concavity parameter, and hence, it can be dimension independent.
Central to our proof is a novel yet simple application of the variance
Brascamp-Lieb inequality. In the context of learning theory, our
concentration-of-information result immediately yields high-probability
counterparts to many of the previous bounds that hold only in expectation.
| 0 | 0 | 0 | 1 | 0 | 0 |
A review of Dan's reduction method for multiple polylogarithms | In this paper we will give an account of Dan's reduction method for reducing
the weight $ n $ multiple logarithm $ I_{1,1,\ldots,1}(x_1, x_2, \ldots, x_n) $
to an explicit sum of lower depth multiple polylogarithms in $ \leq n - 2 $
variables.
We provide a detailed explanation of the method Dan outlines, and we fill in
the missing proofs for Dan's claims. This establishes the validity of the
method itself, and allows us to produce a corrected version of Dan's reduction
of $ I_{1,1,1,1} $ to $ I_{3,1} $'s and $ I_4 $'s. We then use the symbol of
multiple polylogarithms to answer Dan's question about how this reduction
compares with his earlier reduction of $ I_{1,1,1,1} $, and his question about
the nature of the resulting functional equation of $ I_{3,1} $.
Finally, we apply the method to $ I_{1,1,1,1,1} $ at weight 5 to first
produce a reduction to depth $ \leq 3 $ integrals. Using some functional
equations from our thesis, we further reduce this to $ I_{3,1,1} $, $ I_{3,2} $
and $ I_5 $, modulo products. We also see how to reduce $ I_{3,1,1} $ to $
I_{3,2} $, modulo $ \delta $ (modulo products and depth 1 terms), and indicate
how this allows us to reduce $ I_{1,1,1,1,1} $ to $ I_{3,2} $'s only, modulo $
\delta $.
| 0 | 0 | 1 | 0 | 0 | 0 |
Improved Bayesian Compression | Compression of Neural Networks (NN) has become a highly studied topic in
recent years. The main reason for this is the demand for industrial scale usage
of NNs such as deploying them on mobile devices, storing them efficiently,
transmitting them via band-limited channels and most importantly doing
inference at scale. In this work, we propose to combine the Soft-Weight Sharing
and Variational Dropout approaches, both of which show strong results, to
define a new state of the art in model compression.
| 0 | 0 | 0 | 1 | 0 | 0 |
Transient Response Improvement for Interconnected Linear Systems: Low-Dimensional Controller Retrofit Approach | In this paper, we propose a method of designing low-dimensional retrofit
controllers for interconnected linear systems. In the proposed method, by
retrofitting an additional low-dimensional controller to a preexisting control
system, we aim at improving transient responses caused by spatially local state
deflections, which can be regarded as a local fault occurring at a specific
subsystem. It is found that a type of state-space expansion, called
hierarchical state-space expansion, is the key to systematically designing a
low-dimensional retrofit controller, whose action is specialized to controlling
the corresponding subsystem. Furthermore, the state-space expansion enables
theoretical clarification of the fact that the performance index of the
transient response control is improved by appropriately tuning the retrofit
controller. The efficiency of the proposed method is shown through a motivating
example of power system control where we clarify the trade-off relation between
the dimension of a retrofit controller and its control performance.
| 1 | 0 | 0 | 0 | 0 | 0 |
Gradient Sparsification for Communication-Efficient Distributed Optimization | Modern large scale machine learning applications require stochastic
optimization algorithms to be implemented on distributed computational
architectures. A key bottleneck is the communication overhead for exchanging
information such as stochastic gradients among different workers. In this
paper, to reduce the communication cost we propose a convex optimization
formulation to minimize the coding length of stochastic gradients. To solve the
optimal sparsification efficiently, several simple and fast algorithms are
proposed for approximate solutions, with theoretical guarantees on sparsity.
Experiments on $\ell_2$ regularized logistic regression, support vector
machines, and convolutional neural networks validate our sparsification
approaches.
| 1 | 0 | 0 | 1 | 0 | 0 |
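As a hedged illustration of the idea above, the sketch below implements one
simple unbiased sparsification scheme: keep each coordinate with probability
proportional to its magnitude and rescale survivors so the estimator stays
unbiased. The paper instead derives the keep-probabilities from a convex
program minimizing coding length; the proportional rule and target density
here are assumptions.

```python
import numpy as np

def sparsify(g, target_density=0.1, rng=np.random.default_rng()):
    """Unbiased random sparsification of a gradient vector.

    Keep coordinate i with probability p_i proportional to |g_i|
    (capped at 1) and rescale survivors by 1/p_i, so E[output] = g.
    The expected fraction of nonzeros is roughly `target_density`."""
    p = np.minimum(1.0, target_density * len(g) * np.abs(g) / np.abs(g).sum())
    keep = rng.random(len(g)) < p
    out = np.zeros_like(g)
    out[keep] = g[keep] / p[keep]
    return out

g = np.random.default_rng(0).normal(size=1000)
est = np.mean([sparsify(g) for _ in range(2000)], axis=0)
print("nnz fraction:", (sparsify(g) != 0).mean())
print("max deviation of mean:", np.abs(est - g).max())  # small: unbiased
```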
Interactions mediated by a public good transiently increase cooperativity in growing Pseudomonas putida metapopulations | Bacterial communities have rich social lives. A well-established interaction
involves the exchange of a public good in Pseudomonas populations, where the
iron-scavenging compound pyoverdine, synthesized by some cells, is shared with
the rest. Pyoverdine thus mediates interactions between producers and
non-producers and can constitute a public good. This interaction is often used
to test game theoretical predictions on the "social dilemma" of producers. Such
an approach, however, underestimates the impact of specific properties of the
public good, for example consequences of its accumulation in the environment.
Here, we experimentally quantify costs and benefits of pyoverdine production in
a specific environment, and build a model of population dynamics that
explicitly accounts for the changing significance of accumulating pyoverdine as
chemical mediator of social interactions. The model predicts that, in an
ensemble of growing populations (metapopulation) with different initial
producer fractions (and consequently pyoverdine contents), the global producer
fraction initially increases. Because the benefit of pyoverdine declines at
saturating concentrations, the increase need only be transient. Confirmed by
experiments on metapopulations, our results show how a changing benefit of a
public good can shape social interactions in a bacterial population.
| 0 | 0 | 0 | 0 | 1 | 0 |
Transfer results for Frobenius extensions | We study Frobenius extensions which are free-filtered by a totally ordered,
finitely generated abelian group, and their free-graded counterparts. First we
show that the Frobenius property passes up from a free-graded extension to a
free-filtered extension, then also from a free-filtered extension to the
extension of their Rees algebras. Our main theorem states that, under some
natural hypotheses, a free-filtered extension of algebras is Frobenius if and
only if the associated graded extension is Frobenius. In the final section we
apply this theorem to provide new examples and non-examples of Frobenius
extensions.
| 0 | 0 | 1 | 0 | 0 | 0 |
Ergodicity of spherically symmetric fluid flows outside of a Schwarzschild black hole with random boundary forcing | We consider the Burgers equation posed on the outer communication region of a
Schwarzschild black hole spacetime. Assuming spherical symmetry for the fluid
flow under consideration, we study the propagation and interaction of shock
waves under the effect of random forcing. First of all, considering the initial
and boundary value problem with boundary data prescribed in the vicinity of the
horizon, we establish a generalization of the Hopf--Lax--Oleinik formula, which
takes the curved geometry into account and allows us to establish the existence
of bounded variation solutions. To this end, we analyze the global behavior of
the characteristic curves in the Schwarzschild geometry, including their
behavior near the black hole horizon. In a second part, we investigate the
long-term statistical properties of solutions when a random forcing is imposed
near the black hole horizon and study the ergodicity of the fluid flow under
consideration. We prove the existence of a random global attractor and, for the
Burgers equation outside of a Schwarzschild black hole, we are able to validate
the so-called `one-force-one-solution' principle. Furthermore, all of our
results are also established for a pressureless Euler model which consists of
two balance laws and includes a transport equation satisfied by the integrated
fluid density.
| 0 | 0 | 1 | 0 | 0 | 0 |
Recency-weighted Markovian inference | We describe a Markov latent state space (MLSS) model, where the latent state
distribution is a decaying mixture over multiple past states. We present a
simple sampling algorithm that makes it possible to approximate such
high-order MLSS models with fixed time and memory costs.
| 1 | 0 | 0 | 1 | 0 | 0 |
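To make the model above concrete, here is a minimal sketch in which the next
state depends on a single past state sampled from a geometrically decaying
mixture over lags. The decay form, the toy dynamics, and the unbounded lag
window (a practical implementation would truncate it to keep time and memory
fixed) are assumptions, not the paper's algorithm.

```python
import numpy as np

def sample_lag_index(t, decay, rng):
    """Sample a past time index from a geometrically decaying mixture
    over lags 1..t (weights proportional to decay**lag)."""
    lags = np.arange(1, t + 1)
    w = decay ** lags
    return t - rng.choice(lags, p=w / w.sum())

# Toy chain: the next state copies a randomly chosen, recency-weighted
# past state and perturbs it.
rng = np.random.default_rng(0)
decay, T = 0.7, 100
states = [rng.normal()]
for t in range(1, T):
    past = states[sample_lag_index(t, decay, rng)]
    states.append(0.9 * past + 0.1 * rng.normal())
print(states[-1])
```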
A revisit on the compactness of commutators | A new characterization of CMO(R^n) is established by the local mean
oscillation. Some characterizations of iterated compact commutators on weighted
Lebesgue spaces are given, which are new even in the unweighted setting for the
first order commutators.
| 0 | 0 | 1 | 0 | 0 | 0 |
Question Answering through Transfer Learning from Large Fine-grained Supervision Data | We show that the task of question answering (QA) can significantly benefit
from the transfer learning of models trained on a different large, fine-grained
QA dataset. We achieve the state of the art in two well-studied QA datasets,
WikiQA and SemEval-2016 (Task 3A), through a basic transfer learning technique
from SQuAD. For WikiQA, our model outperforms the previous best model by more
than 8%. We demonstrate that finer supervision provides better guidance for
learning lexical and syntactic information than coarser supervision, through
quantitative results and visual analysis. We also show that a similar transfer
learning procedure achieves the state of the art on an entailment task.
| 1 | 0 | 0 | 0 | 0 | 0 |
Relative phantom maps | We define a map $f\colon X\to Y$ to be a phantom map relative to a map
$\varphi\colon B\to Y$ if the restriction of $f$ to any finite dimensional
skeleton of $X$ lifts to $B$ through $\varphi$, up to homotopy. There are two
kinds of maps which are obviously relative phantom maps: (1) the composite of a
map $X\to B$ with $\varphi$; (2) a usual phantom map $X\to Y$. A relative
phantom map of type (1) is called trivial, and a relative phantom map out of a
suspension which is a sum of (1) and (2) is called relatively trivial. We study
the (relative) triviality of relative phantom maps from a suspension, and in
particular, we give rational homotopy conditions for the (relative) triviality.
We also give a rational homotopy condition for the triviality of relative
phantom maps from a non-suspension to a finite Postnikov section.
| 0 | 0 | 1 | 0 | 0 | 0 |
Concepts of Architecture, Structure and System | The current ISO standards pertaining to the Concepts of System and
Architecture express succinct definitions of these two key terms that lend
themselves to practical application and can be understood through elementary
mathematical foundations. The current work of the ISO/IEC Working Group 42 is
seeking to refine and elaborate the existing standards. This position paper
revisits the fundamental concepts underlying both of these key terms and offers
an approach to: (i) refine and exemplify the term 'fundamental concepts' in the
current ISO definition of Architecture, (ii) exploit existing standards for the
term 'concept', and (iii) introduce a new concept, Architectural Structure,
that can serve to unify the current terminology at a fundamental level. Precise
elementary examples are used to conceptualise the approach offered.
| 1 | 0 | 0 | 0 | 0 | 0 |
On mesoprimary decomposition of monoid congruences | We prove two main results concerning mesoprimary decomposition of monoid
congruences, as introduced by Kahle and Miller. First, we identify which
associated prime congruences appear in every mesoprimary decomposition, thereby
completing the theory of mesoprimary decomposition of monoid congruences as a
more faithful analog of primary decomposition. Second, we answer a question
posed by Kahle and Miller by characterizing which finite posets arise as the
set of associated prime congruences of monoid congruences.
| 0 | 0 | 1 | 0 | 0 | 0 |
Assessing the state of e-Readiness for Small and Medium Companies in Mexico: a Proposed Taxonomy and Adoption Model | Emerging economies frequently show a large component of their Gross Domestic
Product to be dependent on the economic activity of small and medium
enterprises. Nevertheless, e-business solutions are more likely designed for
large companies. SMEs seem to follow a classical family-based management, used
to traditional activities, rather than seeking new ways of adding value to
their business strategy. Thus, a large portion of a nation's economy may be at
a competitive disadvantage. This paper aims at assessing the state of
e-business readiness of Mexican SMEs based on already published e-business
evolution models and by means of a survey research design. Data is being
collected in three cities with differing sizes and infrastructure conditions.
Statistical results are expected to be presented. A second part of this
research aims at applying classical adoption models to suggest potential causal
relationships, as well as more suitable recommendations for development.
| 0 | 0 | 0 | 0 | 0 | 1 |
Densities of Hyperbolic Cusp Invariants | We find that cusp densities of hyperbolic knots in the 3-sphere are dense in
[0,0.6826...] and those of links are dense in [0,0.853...]. We define a new
invariant associated with cusp volume, the cusp crossing density, as the ratio
between the cusp volume and the crossing number of a link, and show that cusp
crossing density for links is bounded above by 3.1263.... Moreover, there is a
sequence of links with cusp crossing density approaching 3. The least upper
bound for cusp crossing density remains an open question. For two-component
hyperbolic links, cusp crossing density is shown to be dense in the interval
[0,1.6923...] and for all hyperbolic links, cusp crossing density is shown to
be dense in [0, 2.120...].
| 0 | 0 | 1 | 0 | 0 | 0 |
Data Augmentation for Robust Keyword Spotting under Playback Interference | Accurate on-device keyword spotting (KWS) with low false accept and false
reject rate is crucial to customer experience for far-field voice control of
conversational agents. It is particularly challenging to maintain low false
reject rate in real world conditions where there is (a) ambient noise from
external sources such as TV, household appliances, or other speech that is not
directed at the device (b) imperfect cancellation of the audio playback from
the device, resulting in residual echo, after being processed by the Acoustic
Echo Cancellation (AEC) system. In this paper, we propose a data augmentation
strategy to improve keyword spotting performance under these challenging
conditions. The training set audio is artificially corrupted by mixing in music
and TV/movie audio, at different signal to interference ratios. Our results
show that we get around 30-45% relative reduction in false reject rates, at a
range of false alarm rates, under audio playback from such devices.
| 0 | 0 | 0 | 1 | 0 | 0 |
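The core augmentation step described above amounts to mixing interference into
training audio at a controlled signal-to-interference ratio. The sketch below
shows one straightforward way to do this; the synthetic waveforms and the
exact gain convention are assumptions.

```python
import numpy as np

def mix_at_sir(speech, interference, sir_db, rng=np.random.default_rng()):
    """Additively corrupt `speech` with a random crop of `interference`
    scaled to a target signal-to-interference ratio (in dB), as in
    pipelines that mix in music/TV audio at a range of SIRs."""
    start = rng.integers(0, len(interference) - len(speech) + 1)
    noise = interference[start:start + len(speech)].astype(np.float64)
    p_sig = np.mean(speech.astype(np.float64) ** 2)
    p_noise = np.mean(noise ** 2) + 1e-12
    gain = np.sqrt(p_sig / (p_noise * 10 ** (sir_db / 10)))
    return speech + gain * noise

# Toy usage with synthetic signals; real pipelines would load waveforms.
rng = np.random.default_rng(0)
speech = rng.normal(size=16000)          # 1 s at 16 kHz, stand-in audio
music = rng.normal(size=160000)
augmented = mix_at_sir(speech, music, sir_db=5.0, rng=rng)
```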
A Canonical-based NPN Boolean Matching Algorithm Utilizing Boolean Difference and Cofactor Signature | This paper presents a new compact canonical-based algorithm to solve the
problem of single-output completely specified NPN Boolean matching. We propose
a new signature vector: the Boolean difference and cofactor (DC) signature
vector.
Our algorithm utilizes the Boolean difference, cofactor signature and symmetry
properties to search for canonical transformations. The use of symmetry and
Boolean difference notably reduces the search space and speeds up the Boolean
matching process compared to the algorithm proposed in [1]. We tested our
algorithm on a large number of circuits. The experimental results showed that
our algorithm runs on average 37% faster and explores a 67% smaller search
space compared to [1] when tested on general circuits.
| 1 | 0 | 0 | 0 | 0 | 0 |
Cross ratios on boundaries of symmetric spaces and Euclidean buildings | We generalize the natural cross ratio on the ideal boundary of a rank one
symmetric space, or even a $\mathrm{CAT}(-1)$ space, to higher rank symmetric
spaces and (non-locally compact) Euclidean buildings - we obtain vector valued
cross ratios defined on simplices of the building at infinity. We show several
properties of those cross ratios; for example that (under some restrictions)
periods of hyperbolic isometries give back the translation vector. In addition,
we show that cross ratio preserving maps on the chamber set are induced by
isometries and vice versa - motivating that the cross ratios bring the geometry
of the symmetric space/Euclidean building to the boundary.
| 0 | 0 | 1 | 0 | 0 | 0 |
Heterogeneous inputs to central pattern generators can shape insect gaits | In our previous work, we studied an interconnected bursting neuron model for
insect locomotion, and its corresponding phase oscillator model, which at high
speed can generate stable tripod gaits with three legs off the ground
simultaneously in swing, and at low speed can generate stable tetrapod gaits
with two legs off the ground simultaneously in swing. However, at low speed
several other stable locomotion patterns, that are not typically observed as
insect gaits, may coexist. In the present paper, by adding heterogeneous
external input to each oscillator, we modify the bursting neuron model so that
its corresponding phase oscillator model produces only one stable gait at each
speed, specifically: a unique stable tetrapod gait at low speed, a unique
stable tripod gait at high speed, and a unique branch of stable transition
gaits connecting them. This suggests that control signals originating in the
brain and central nervous system can modify gait patterns.
| 0 | 0 | 0 | 0 | 1 | 0 |
Link invariants derived from multiplexing of crossings | We introduce the multiplexing of a crossing, replacing a classical crossing
of a virtual link diagram with multiple crossings which is a mixture of
classical and virtual. For integers $m_{i}$ $(i=1,\ldots,n)$ and an ordered
$n$-component virtual link diagram $D$, a new virtual link diagram
$D(m_{1},\ldots,m_{n})$ is obtained from $D$ by the multiplexing of all
crossings. For welded isotopic virtual link diagrams $D$ and $D'$,
$D(m_{1},\ldots,m_{n})$ and $D'(m_{1},\ldots,m_{n})$ are welded isotopic. From
the point of view of classical link theory, it seems very interesting that
$D(m_{1},\ldots,m_{n})$ could not be welded isotopic to a classical link
diagram even if $D$ is a classical one, and new classical link invariants are
expected from known welded link invariants via the multiplexing of crossings.
| 0 | 0 | 1 | 0 | 0 | 0 |
Adaptive Algebraic Multiscale Solver for Compressible Flow in Heterogeneous Porous Media | This paper presents the development of an Adaptive Algebraic Multiscale
Solver for Compressible flow (C-AMS) in heterogeneous porous media. Similar to
the recently developed AMS for incompressible (linear) flows [Wang et al., JCP,
2014], C-AMS operates by defining primal and dual-coarse blocks on top of the
fine-scale grid. These coarse grids facilitate the construction of a
conservative (finite volume) coarse-scale system and the computation of local
basis functions, respectively. However, unlike the incompressible (elliptic)
case, the choice of equations to solve for basis functions in compressible
problems is not trivial. Therefore, several basis function formulations
(incompressible and compressible, with and without accumulation) are considered
in order to construct an efficient multiscale prolongation operator. As for the
restriction operator, C-AMS allows for both multiscale finite volume (MSFV) and
finite element (MSFE) methods. Finally, in order to resolve high-frequency
errors, fine-scale (pre- and post-) smoother stages are employed. In order to
reduce computational expense, the C-AMS operators (prolongation, restriction,
and smoothers) are updated adaptively. In addition to this, the linear system
in the Newton-Raphson loop is infrequently updated. Systematic numerical
experiments are performed to determine the effect of the various options,
outlined above, on the C-AMS convergence behaviour. An efficient C-AMS strategy
for heterogeneous 3D compressible problems is developed based on overall CPU
times. Finally, C-AMS is compared against an industrial-grade Algebraic
MultiGrid (AMG) solver. Results of this comparison illustrate that the C-AMS is
quite efficient as a nonlinear solver, even when iterated to machine accuracy.
| 1 | 1 | 0 | 0 | 0 | 0 |
Automatic differentiation of hybrid models Illustrated by Diffedge Graphic Methodology. (Survey) | We investigate the automatic differentiation of hybrid models, viz. models
that may contain delays, logical tests and discontinuities or loops. We
consider differentiation with respect to parameters, initial conditions or the
time. We emphasize the case of a small number of derivations; iterated
differentiations are mostly treated with a focus on high-order iterations of
the same derivation. The models we consider may involve arithmetic operations,
elementary functions and logical tests, but also more elaborate components such
as delays, integrators, and solvers for equations and differential equations.
This survey makes no claim to exhaustiveness but tries to fill a gap in the
literature, where each kind of component may be documented, but seldom their
combined use.
The general approach is illustrated by computer algebra experiments,
stressing the interest of performing differentiation, whenever possible, on
high level objects, before any translation in Fortran or C code. We include
ordinary differential systems with discontinuities, with a special interest in
those coming from discontinuous Lagrangians.
We conclude with an overview of the graphic methodology developed in the
Diffedge software for Simulink hybrid models. Not all possibilities are
covered, but the methodology can be adapted. The result of automatic
differentiation is a new block diagram and so it can be easily translated to
produce real time embedded programs.
We welcome any comments or suggestions of references that we may have missed.
| 1 | 0 | 0 | 0 | 0 | 0 |
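As a minimal illustration of differentiating a hybrid model, the sketch below
runs forward-mode automatic differentiation with dual numbers through a
function containing a logical test: the comparison reads only the value part,
so AD differentiates whichever smooth branch is actually taken (one-sided at
the switching point). This is a generic textbook construction, not the
Diffedge methodology itself.

```python
import math

class Dual:
    """Minimal forward-mode AD value: a + b*eps with eps**2 = 0."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val, self.dot * o.val + self.val * o.dot)
    __rmul__ = __mul__
    def __lt__(self, o):  # logical tests read only the value part
        return self.val < (o.val if isinstance(o, Dual) else o)

def sin(x):
    return Dual(math.sin(x.val), math.cos(x.val) * x.dot)

# A "hybrid" model: a logical test selects between two smooth branches.
def f(x):
    return 3 * x * x if x < 1.0 else sin(x) + x

y = f(Dual(0.5, 1.0))        # seed dx/dx = 1
print(y.val, y.dot)          # value and derivative of the branch taken
```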
Pinning down the mass of Kepler-10c: the importance of sampling and model comparison | Initial RV characterisation of the enigmatic planet Kepler-10c suggested a
mass of $\sim17$ M$_\oplus$, which was remarkably high for a planet with radius
$2.32$ R$_\oplus$; further observations and subsequent analysis hinted at a
(possibly much) lower mass, but masses derived using RVs from two different
spectrographs (HARPS-N and HIRES) were incompatible at a $3\sigma$-level. We
demonstrate here how such mass discrepancies may readily arise from sub-optimal
sampling and/or neglecting to model even a single coherent signal (stellar,
planetary, or otherwise) that may be present in RVs. We then present a
plausible resolution of the mass discrepancy, and ultimately characterise
Kepler-10c as having mass $7.37_{-1.19}^{+1.32}$ M$_\oplus$, and mean density
$3.14^{+0.63}_{-0.55}$ g cm$^{-3}$.
| 0 | 1 | 0 | 0 | 0 | 0 |
Human-Centered Autonomous Vehicle Systems: Principles of Effective Shared Autonomy | Building effective, enjoyable, and safe autonomous vehicles is a lot harder
than has historically been considered. The reason is that, simply put, an
autonomous vehicle must interact with human beings. This interaction is not a
robotics problem nor a machine learning problem nor a psychology problem nor an
economics problem nor a policy problem. It is all of these problems put into
one. It challenges our assumptions about the limitations of human beings at
their worst and the capabilities of artificial intelligence systems at their
best. This work proposes a set of principles for designing and building
autonomous vehicles in a human-centered way that does not run away from the
complexity of human nature but instead embraces it. We describe our development
of the Human-Centered Autonomous Vehicle (HCAV) as an illustrative case study
of implementing these principles in practice.
| 1 | 0 | 0 | 0 | 0 | 0 |
Controlled trapping of single particle states on a periodic substrate by deterministic stubbing | A periodic array of atomic sites, described within a tight binding formalism
is shown to be capable of trapping electronic states as it grows in size and
gets stubbed by an atom or an atomic cluster from a side in a deterministic
way. We prescribe a method based on a real space renormalization group method,
that unravels a subtle correlation between the positions of the side coupled
atoms and the energy eigenvalues for which the incoming particle finally gets
trapped. We discuss how, in such conditions, the periodic backbone gets
transformed into an array of infinite quantum wells in the thermodynamic limit.
We present a case here where the wells have a hierarchical distribution of
widths, hosting standing-wave solutions in the thermodynamic limit.
| 0 | 1 | 0 | 0 | 0 | 0 |
Adaptive Noise Cancellation Using Deep Cerebellar Model Articulation Controller | This paper proposes a deep cerebellar model articulation controller (DCMAC)
for adaptive noise cancellation (ANC). We expand upon the conventional CMAC by
stacking single-layer CMAC models into multiple layers to form a DCMAC model
and derive a modified backpropagation training algorithm to learn the DCMAC
parameters. Compared with the conventional CMAC, the DCMAC can characterize
nonlinear transformations more effectively because of its deep structure.
Experimental results confirm that the proposed DCMAC model outperforms the
CMAC in terms of residual noise in an ANC task, showing that DCMAC provides
enhanced modeling capability based on channel characteristics.
| 1 | 0 | 0 | 0 | 0 | 0 |
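For readers unfamiliar with the task above, the sketch below shows a classic
linear LMS adaptive noise canceller, the kind of baseline that nonlinear
models such as CMAC/DCMAC aim to improve upon. The filter length, step size
and synthetic signals are assumptions, and this is not the DCMAC itself.

```python
import numpy as np

def lms_anc(reference, primary, n_taps=32, mu=0.01):
    """Classic LMS adaptive noise cancellation with a linear FIR filter.

    `primary` = signal + filtered noise; `reference` = correlated noise.
    Returns the error signal, i.e. the cleaned output."""
    w = np.zeros(n_taps)
    out = np.zeros(len(primary))
    for n in range(n_taps, len(primary)):
        x = reference[n - n_taps:n][::-1]
        e = primary[n] - w @ x       # cancel the predicted noise
        w += mu * e * x              # LMS weight update
        out[n] = e
    return out

rng = np.random.default_rng(0)
t = np.arange(20000)
signal = np.sin(2 * np.pi * t / 200)
noise = rng.normal(size=t.size)
primary = signal + np.convolve(noise, [0.6, 0.3, 0.1], mode="same")
cleaned = lms_anc(noise, primary)
```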
A Relaxation-based Network Decomposition Algorithm for Parallel Transient Stability Simulation with Improved Convergence | Transient stability simulation of a large-scale and interconnected electric
power system involves solving a large set of differential algebraic equations
(DAEs) at every simulation time-step. With the ever-growing size and complexity
of power grids, dynamic simulation becomes more time-consuming and
computationally difficult using conventional sequential simulation techniques.
To cope with this challenge, this paper aims to develop a fully distributed
approach intended for implementation on High Performance Computer (HPC)
clusters. A novel, relaxation-based domain decomposition algorithm known as
Parallel-General-Norton with Multiple-port Equivalent (PGNME) is proposed as
the core technique of a two-stage decomposition approach to divide the overall
dynamic simulation problem into a set of subproblems that can be solved
concurrently to exploit parallelism and scalability. While the convergence
property has traditionally been a concern for relaxation-based decomposition,
an estimation mechanism based on multiple-port network equivalent is adopted as
the preconditioner to enhance the convergence of the proposed algorithm. The
proposed algorithm is illustrated using rigorous mathematics and validated both
in terms of speed-up and capability. Moreover, a complexity analysis is
performed to support the observation that PGNME scales well when the size of
the subproblems are sufficiently large.
| 1 | 0 | 0 | 0 | 0 | 0 |
A Quantitative Analysis of WCAG 2.0 Compliance For Some Indian Web Portals | Web portals have served as an excellent medium to facilitate user centric
services for organizations irrespective of the type, size, and domain of
operation. The objective of these portals has been to deliver a plethora of
services such as information dissemination, transactional services, and
customer feedback. Therefore, the design of a web portal is crucial so that it
is accessible to a wide range of the user community irrespective of age
group, physical abilities, and level of literacy. In this paper, we have
studied the compliance of WCAG 2.0 by three different categories of Indian web
sites which are most frequently accessed by a large section of user community.
We have provided a quantitative evaluation of different aspects of
accessibility which we believe can pave the way for better design of web sites
by taking care of the deficiencies inherent in the web portals.
| 1 | 0 | 0 | 0 | 0 | 0 |
Portfolio Construction Matters | The role of portfolio construction in the implementation of equity market
neutral factors is often underestimated. Taking the classical momentum strategy
as an example, we show that one can significantly improve the main strategy's
features by properly taking care of this key step. More precisely, an optimized
portfolio construction algorithm allows one to significantly improve the Sharpe
Ratio, reduce sector exposures and volatility fluctuations, and mitigate the
strategy's skewness and tail correlation with the market. These results are
supported by long-term, world-wide simulations and will be shown to be
universal. Our findings are quite general and hold true for a number of other
"equity factors". Finally, we discuss the details of a more realistic set-up
where we also deal with transaction costs.
| 0 | 0 | 0 | 0 | 0 | 1 |
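To illustrate the kind of portfolio-construction choices the abstract above
argues matter, here is a hedged toy sketch of a cross-sectional momentum
portfolio with dollar neutrality and inverse-volatility scaling. The lookback,
gap and scaling rules are common conventions assumed for illustration; the
paper's optimized construction algorithm is more elaborate.

```python
import numpy as np

def momentum_portfolio(returns, lookback=252, gap=21, vol_window=63):
    """Toy cross-sectional momentum weights from a (T, N) daily return
    matrix: trailing `lookback`-day return skipping the latest `gap`
    days, scaled by inverse volatility, demeaned for dollar neutrality
    and normalised to unit gross leverage."""
    mom = (1.0 + returns[-(lookback + gap):-gap]).prod(axis=0) - 1.0
    vol = returns[-vol_window:].std(axis=0) + 1e-8
    raw = mom / vol                    # crude per-name risk control
    w = raw - raw.mean()               # dollar neutral
    return w / np.abs(w).sum()         # unit gross leverage

rng = np.random.default_rng(0)
rets = 0.01 * rng.standard_normal((600, 50))   # synthetic daily returns
w = momentum_portfolio(rets)
print(round(w.sum(), 12), round(np.abs(w).sum(), 12))  # ~0 net, 1 gross
```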
Function space analysis of deep learning representation layers | In this paper we propose a function space approach to Representation Learning
and the analysis of the representation layers in deep learning architectures.
We show how to compute a weak-type Besov smoothness index that quantifies the
geometry of the clustering in the feature space. This approach was already
applied successfully to improve the performance of machine learning algorithms
such as the Random Forest and tree-based Gradient Boosting. Our experiments
demonstrate that in well-known and well-performing trained networks, the Besov
smoothness of the training set, measured in the corresponding hidden layer
feature map representation, increases from layer to layer. We also contribute
to the understanding of generalization by showing how the Besov smoothness of
the representations decreases as we add more mislabeling to the training
data. We hope this approach will contribute to the demystification of some
aspects of deep learning.
| 1 | 0 | 0 | 1 | 0 | 0 |
Random group cobordisms of rank 7/4 | We construct a model of random groups of rank 7/4, and show that in this
model the random group has the exponential mesoscopic rank property.
| 0 | 0 | 1 | 0 | 0 | 0 |
Transport properties across the many-body localization transition in quasiperiodic and random systems | We theoretically study transport properties in one-dimensional interacting
quasiperiodic systems at infinite temperature. We compare and contrast the
dynamical transport properties across the many-body localization (MBL)
transition in quasiperiodic and random models. Using exact diagonalization we
compute the optical conductivity $\sigma(\omega)$ and the return probability
$R(\tau)$ and study their average low-frequency and long-time power-law
behavior, respectively. We show that the low-energy transport dynamics is
markedly distinct in both the thermal and MBL phases in quasiperiodic and
random models and find that the diffusive and MBL regimes of the quasiperiodic
model are more robust than those in the random system. Using the distribution
of the DC conductivity, we quantify the contribution of sample-to-sample and
state-to-state fluctuations of $\sigma(\omega)$ across the MBL transition. We
find that the activated dynamical scaling ansatz works poorly in the
quasiperiodic model but holds in the random model with an estimated activation
exponent $\psi\approx 0.9$. We argue that near the MBL transition in
quasiperiodic systems, critical eigenstates give rise to a subdiffusive
crossover regime on finite-size systems.
| 0 | 1 | 0 | 0 | 0 | 0 |
Analytic solution and stationary phase approximation for the Bayesian lasso and elastic net | The lasso and elastic net linear regression models impose a
double-exponential prior distribution on the model parameters to achieve
regression shrinkage and variable selection, allowing the inference of robust
models from large data sets. However, there has been limited success in
deriving estimates for the full posterior distribution of regression
coefficients in these models, due to a need to evaluate analytically
intractable partition function integrals. Here, the Fourier transform is used
to express these integrals as complex-valued oscillatory integrals over
"regression frequencies". This results in an analytic expansion and stationary
phase approximation for the partition functions of the Bayesian lasso and
elastic net, where the non-differentiability of the double-exponential prior
has so far eluded such an approach. Use of this approximation leads to highly
accurate numerical estimates for the expectation values and marginal posterior
distributions of the regression coefficients, and allows for Bayesian inference
of much higher dimensional models than previously possible.
| 0 | 0 | 1 | 1 | 0 | 0 |
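For orientation, the objects discussed above can be written out explicitly
(generic notation, not necessarily the paper's): the Bayesian lasso posterior
and the partition function $Z$ whose intractability motivates the
Fourier/stationary-phase treatment.

```latex
p(\beta \mid X, y) \;=\; \frac{1}{Z}\,
  \exp\!\Big( -\tfrac{1}{2\sigma^2}\,\lVert y - X\beta \rVert_2^2
              \;-\; \lambda\,\lVert \beta \rVert_1 \Big),
\qquad
Z \;=\; \int_{\mathbb{R}^p}
  \exp\!\Big( -\tfrac{1}{2\sigma^2}\,\lVert y - X\beta \rVert_2^2
              \;-\; \lambda\,\lVert \beta \rVert_1 \Big)\, d\beta .
```

The $\ell_1$ term makes the integrand non-differentiable on the coordinate
hyperplanes, which is the obstacle the oscillatory-integral expansion
circumvents.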
(Un)predictability of strong El Niño events | The El Niño-Southern Oscillation (ENSO) is a mode of interannual
variability in the coupled equatorial Pacific atmosphere/ocean system.
El Niño describes a state in which sea surface temperatures in the eastern
Pacific increase and upwelling of colder, deep waters diminishes. El Niño
events typically peak in boreal winter, but their strength varies irregularly
on decadal time scales. There were exceptionally strong El Niño events in
1982-83, 1997-98 and 2015-16 that affected weather on a global scale. Widely
publicized forecasts in 2014 predicted that the 2015-16 event would occur a
year earlier. Predicting the strength of El Niño is a matter of practical
concern due to its effects on hydroclimate and agriculture around the world.
This paper discusses the frequency and regularity of strong El Niño events in
the context of chaotic dynamical systems. We discover a mechanism that limits
their predictability in a conceptual "recharge oscillator" model of ENSO. Weak
seasonal forcing or noise in this model can induce irregular switching between
an oscillatory state that has strong El Niño events and a chaotic state that
lacks strong events, In this regime, the timing of strong El Niño events on
decadal time scales is unpredictable.
| 0 | 1 | 0 | 0 | 0 | 0 |
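A minimal sketch of a recharge-oscillator-type conceptual model in the spirit
of the discussion above is given below. The exact equations follow a common
Jin-type form with an added cubic damping and a weak seasonal modulation of
the coupling; the functional form and parameter values are illustrative
assumptions, not the paper's model.

```python
import numpy as np
from scipy.integrate import solve_ivp

def recharge_rhs(t, state, mu0=0.7, eps=0.1, f0=0.02):
    """Recharge oscillator: T = east-Pacific SST anomaly, h =
    thermocline depth anomaly. Cubic damping bounds the oscillation and
    a weak annual cycle modulates the coupling (assumed form)."""
    T, h = state
    mu = mu0 * (1 + 0.2 * np.cos(2 * np.pi * t))   # seasonal coupling
    dT = mu * T + h - eps * T**3 + f0 * np.cos(2 * np.pi * t)
    dh = -0.4 * h - mu * T
    return [dT, dh]

sol = solve_ivp(recharge_rhs, (0, 200), [0.1, 0.0], max_step=0.05)
T = sol.y[0]
# "Strong El Niño" events: times when the SST anomaly exceeds a threshold.
print("fraction of time above threshold:", (T > 1.5).mean())
```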
Deep Learning Assisted Heuristic Tree Search for the Container Pre-marshalling Problem | One of the key challenges for operations researchers solving real-world
problems is designing and implementing high-quality heuristics to guide their
search procedures. In the past, machine learning techniques have failed to play
a major role in operations research approaches, especially in terms of guiding
branching and pruning decisions. We integrate deep neural networks into a
heuristic tree search procedure to decide which branch to choose next and to
estimate a bound for pruning the search tree of an optimization problem. We
call our approach Deep Learning assisted heuristic Tree Search (DLTS) and apply
it to a well-known problem from the container terminals literature, the
container pre-marshalling problem (CPMP). Our approach is able to learn
heuristics customized to the CPMP solely through analyzing the solutions to
CPMP instances, and applies this knowledge within a heuristic tree search to
produce the highest quality heuristic solutions to the CPMP to date.
| 1 | 0 | 0 | 0 | 0 | 0 |
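The search scheme above can be sketched on a toy problem. Below, a 0/1
knapsack is solved by depth-first search in which a "policy" heuristic orders
branches and a "bound" heuristic prunes; in DLTS both heuristics are deep
networks learned from solved instances, so the hand-written stand-ins here are
assumptions that only mirror the control flow.

```python
# Schematic DLTS-style search on a toy 0/1 knapsack.

def policy_score(item):               # stand-in for the policy network
    value, weight = item
    return value / weight             # prefer dense items first

def bound_estimate(value, rest):      # stand-in for the bound network
    return value + sum(v for v, _ in rest)   # optimistic completion

def guided_search(items, capacity):
    items = sorted(items, key=policy_score, reverse=True)
    best = 0
    stack = [(capacity, 0, 0)]        # (remaining cap, value, next index)
    while stack:
        cap, value, i = stack.pop()
        best = max(best, value)
        if i == len(items) or bound_estimate(value, items[i:]) <= best:
            continue                  # prune: bound says no improvement
        v, w = items[i]
        stack.append((cap, value, i + 1))            # branch: skip item
        if w <= cap:                  # preferred branch explored first
            stack.append((cap - w, value + v, i + 1))
    return best

items = [(10, 5), (7, 3), (5, 2), (9, 4), (3, 1)]    # (value, weight)
print(guided_search(items, capacity=8))              # -> 19
```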
Efficient Benchmarking of Algorithm Configuration Procedures via Model-Based Surrogates | The optimization of algorithm (hyper-)parameters is crucial for achieving
peak performance across a wide range of domains, ranging from deep neural
networks to solvers for hard combinatorial problems. The resulting algorithm
configuration (AC) problem has attracted much attention from the machine
learning community. However, the proper evaluation of new AC procedures is
hindered by two key hurdles. First, AC benchmarks are hard to set up. Second
and even more significantly, they are computationally expensive: a single run
of an AC procedure involves many costly runs of the target algorithm whose
performance is to be optimized in a given AC benchmark scenario. One common
workaround is to optimize cheap-to-evaluate artificial benchmark functions
(e.g., Branin) instead of actual algorithms; however, these have different
properties than realistic AC problems. Here, we propose an alternative
benchmarking approach that is similarly cheap to evaluate but much closer to
the original AC problem: replacing expensive benchmarks by surrogate benchmarks
constructed from AC benchmarks. These surrogate benchmarks approximate the
response surface corresponding to true target algorithm performance using a
regression model, and the original and surrogate benchmark share the same
(hyper-)parameter space. In our experiments, we construct and evaluate
surrogate benchmarks for hyperparameter optimization as well as for AC problems
that involve performance optimization of solvers for hard combinatorial
problems, drawing training data from the runs of existing AC procedures. We
show that our surrogate benchmarks capture overall important characteristics of
the AC scenarios, such as high- and low-performing regions, from which they
were derived, while being much easier to use and orders of magnitude cheaper to
evaluate.
| 1 | 0 | 0 | 1 | 0 | 0 |
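The construction described above can be sketched in a few lines: fit a
regression model to logged (configuration, performance) pairs and use its
predictions as a cheap objective over the same parameter space. The synthetic
response surface, the random-forest choice and the log-runtime transform below
are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Pretend these are logged (configuration, runtime) pairs collected from
# past runs on a real target algorithm; the "true" response surface here
# is synthetic for illustration.
def true_runtime(cfg):
    return np.exp((cfg[:, 0] - 0.3) ** 2 + 2 * np.abs(cfg[:, 1] - 0.7))

configs = rng.random((5000, 2))            # two continuous parameters
runtimes = true_runtime(configs) * rng.lognormal(0, 0.1, 5000)

# Fit the surrogate: evaluating a configuration now costs a model
# prediction instead of an expensive algorithm run.
surrogate = RandomForestRegressor(n_estimators=100, random_state=0)
surrogate.fit(configs, np.log(runtimes))   # log-runtimes are better behaved

def surrogate_benchmark(cfg):
    """Cheap stand-in objective for evaluating an AC procedure."""
    return float(np.exp(surrogate.predict(np.asarray(cfg)[None, :])[0]))

print(surrogate_benchmark([0.3, 0.7]))     # near the optimum -> fast
```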
Multiplicative local linear hazard estimation and best one-sided cross-validation | This paper develops detailed mathematical statistical theory of a new class
of cross-validation techniques of local linear kernel hazards and their
multiplicative bias corrections. The new class of cross-validation combines
principles of local information and recent advances in indirect
cross-validation. A few applications of cross-validating multiplicative kernel
hazard estimation do exist in the literature. However, this paper introduces
detailed mathematical statistical theory together with a small-sample study,
and upgrades both to our new class of best one-sided cross-validation. Best
one-sided cross-validation turns out to perform excellently in practical
illustrations, in small samples and in its mathematical statistical theory.
| 0 | 0 | 0 | 1 | 0 | 0 |
Random Perturbations of Matrix Polynomials | A sum of a large-dimensional random matrix polynomial and a fixed low-rank
matrix polynomial is considered. The main assumption is that the resolvent of
the random polynomial converges to some deterministic limit. A formula for the
limit of the resolvent of the sum is derived and the eigenvalues are localised.
Three instances are considered: a low-rank matrix perturbed by the Wigner
matrix, a product $HX$ of a fixed diagonal matrix $H$ and the Wigner matrix $X$
and a special matrix polynomial. The results are illustrated with various
examples and numerical simulations.
| 0 | 0 | 1 | 0 | 0 | 0 |
Monitoring Information Quality within Web Service Composition and Execution | The composition of web services is a promising approach enabling flexible and
loose integration of business applications. Numerous approaches related to web
services composition have been developed usually following three main phases:
the service discovery is based on the semantic description of advertised
services, i.e. the functionality of the service, while the service
selection is based on non-functional quality dimensions of service, and
finally the service composition aims to support an underlying process. Most of
those approaches explore techniques of static or dynamic design for an optimal
service composition. One important aspect has so far been mostly neglected:
the output produced by composite web services. In this paper, in contrast to
many prominent approaches we introduce a data quality perspective on web
services. Based on a data quality management approach, we propose a framework
for analyzing data produced by the composite service execution. Utilising
process information together with data in service logs, our approach allows
identifying problems in service composition and execution. Analyzing the
service execution history our approach helps to improve common approaches of
service selection and composition.
| 1 | 0 | 0 | 0 | 0 | 0 |
An open-source platform to study uniaxial stress effects on nanoscale devices | We present an automatic measurement platform that enables the
characterization of nanodevices by electrical transport and optical
spectroscopy as a function of uniaxial stress. We provide insights into and
detailed descriptions of the mechanical device, the substrate design and
fabrication, and the instrument control software, which is provided under
open-source license. The capability of the platform is demonstrated by
characterizing the piezo-resistance of an InAs nanowire device using a
combination of electrical transport and Raman spectroscopy. The advantages of
this measurement platform are highlighted by comparison with state-of-the-art
piezo-resistance measurements in InAs nanowires. We envision that the
systematic application of this methodology will provide new insights into the
physics of nanoscale devices and novel materials for electronics, and thus
contribute to the assessment of the potential of strain as a technology booster
for nanoscale electronics.
| 0 | 1 | 0 | 0 | 0 | 0 |
Nonlinear probability. A theory with incompatible stochastic variables | In 1991 J.F. Aarnes introduced the concept of quasi-measures in a compact
topological space $\Omega$ and established the connection between quasi-states
on $C (\Omega)$ and quasi-measures in $\Omega$. This work solved the linearity
problem of quasi-states on $C^*$-algebras formulated by R.V. Kadison in 1965.
The answer is that a quasi-state need not be linear, so a quasi-state need not
be a state. We introduce nonlinear measures in a space $\Omega$ which is a
generalization of a measurable space. In this more general setting we are still
able to define integration and establish a representation theorem for the
corresponding functionals. A probabilistic language is chosen since we feel
that the subject should be of some interest to probabilists. In particular we
point out that the theory allows for incompatible stochastic variables. The
need for incompatible variables is well known in quantum mechanics, but the
need seems natural also in other contexts as we try to explain by a questionary
example.
Keywords and phrases: Epistemic probability, Integration with respect to
measures and other set functions, Banach algebras of continuous functions, Set
functions and measures on topological spaces, States, Logical foundations of
quantum mechanics.
| 0 | 0 | 1 | 1 | 0 | 0 |
Parametrised second-order complexity theory with applications to the study of interval computation | We extend the framework for complexity of operators in analysis devised by
Kawamura and Cook (2012) to allow for the treatment of a wider class of
representations. The main novelty is to endow represented spaces of interest
with an additional function on names, called a parameter, which measures the
complexity of a given name. This parameter generalises the size function which
is usually used in second-order complexity theory and therefore also central to
the framework of Kawamura and Cook. The complexity of an algorithm is measured
in terms of its running time as a second-order function in the parameter, as
well as in terms of how much it increases the complexity of a given name, as
measured by the parameters on the input and output side.
As an application we develop a rigorous computational complexity theory for
interval computation. In the framework of Kawamura and Cook the representation
of real numbers based on nested interval enclosures does not yield a reasonable
complexity theory. In our new framework this representation is polytime
equivalent to the usual Cauchy representation based on dyadic rational
approximation. By contrast, the representation of continuous real functions
based on interval enclosures is strictly smaller in the polytime reducibility
lattice than the usual representation, which encodes a modulus of continuity.
Furthermore, the function space representation based on interval enclosures is
optimal in the sense that it contains the minimal amount of information amongst
those representations which render evaluation polytime computable.
| 1 | 0 | 0 | 0 | 0 | 0 |
VEEGAN: Reducing Mode Collapse in GANs using Implicit Variational Learning | Deep generative models provide powerful tools for distributions over
complicated manifolds, such as those of natural images. But many of these
methods, including generative adversarial networks (GANs), can be difficult to
train, in part because they are prone to mode collapse, which means that they
characterize only a few modes of the true distribution. To address this, we
introduce VEEGAN, which features a reconstructor network, reversing the action
of the generator by mapping from data to noise. Our training objective retains
the original asymptotic consistency guarantee of GANs, and can be interpreted
as a novel autoencoder loss over the noise. In sharp contrast to a traditional
autoencoder over data points, VEEGAN does not require specifying a loss
function over the data, but rather only over the representations, which are
standard normal by assumption. On an extensive set of synthetic and real world
image datasets, VEEGAN indeed resists mode collapsing to a far greater extent
than other recent GAN variants, and produces more realistic samples.
| 0 | 0 | 0 | 1 | 0 | 0 |