title | abstract | cs | phy | math | stat | quantitative biology | quantitative finance |
---|---|---|---|---|---|---|---|
Quandle rings | In this paper, a theory of quandle rings is proposed for quandles analogous
to the classical theory of group rings for groups, and interconnections between
quandles and associated quandle rings are explored.
| 0 | 0 | 1 | 0 | 0 | 0 |
On the application of Laguerre's method to the polynomial eigenvalue problem | The polynomial eigenvalue problem arises in many applications and has
received a great deal of attention over the last decade. The use of
root-finding methods to solve the polynomial eigenvalue problem dates back to
the work of Kublanovskaya (1969, 1970) and has received a resurgence due to the
work of Bini and Noferini (2013). In this paper, we present a method which uses
Laguerre iteration for computing the eigenvalues of a matrix polynomial. An
effective method based on the numerical range is presented for computing
initial estimates to the eigenvalues of a matrix polynomial. A detailed
explanation of the stopping criteria is given, and it is shown that under
suitable conditions we can guarantee the backward stability of the eigenvalues
computed by our method. Then, robust methods are provided for computing both
the right and left eigenvectors and the condition number of each eigenpair.
Applications for Hessenberg and tridiagonal matrix polynomials are given and we
show that both structures benefit from substantial computational savings.
Finally, we present several numerical experiments to verify the accuracy of our
method and its competitiveness for computing the roots of a polynomial and
solving the tridiagonal eigenvalue problem.
| 0 | 0 | 1 | 0 | 0 | 0 |
Limiting Behaviour of the Teichmüller Harmonic Map Flow | In this paper we study the Teichmüller harmonic map flow as introduced by
Rupflin and Topping [15]. It evolves pairs of maps and metrics $(u,g)$ into
branched minimal immersions, or equivalently into weakly conformal harmonic
maps, where $u$ maps from a fixed closed surface $M$ with metric $g$ to a
general target manifold $N$. It arises naturally as a gradient flow for the
Dirichlet energy functional viewed as acting on equivalence classes of such
pairs, obtained from the invariance under diffeomorphisms and conformal changes
of the domain metric.
In the construction of a suitable inner product for the gradient flow a
choice of relative weight of the map tangent directions and metric tangent
directions is made, which manifests itself in the appearance of a coupling
constant $\eta$ in the flow equations.
We study limits of the flow as $\eta$ approaches 0, corresponding to slowing
down the evolution of the metric.
We first show that given a smooth harmonic map flow on a fixed time interval,
the Teichmüller harmonic map flows starting at the same initial data converge
uniformly to the underlying harmonic map flow when $\eta \downarrow 0$.
Next we consider a rescaling of time, which increases the speed of the map
evolution while evolving the metric at a constant rate. We show that under
appropriate topological assumptions, in the limit the rescaled flows converge
to a unique flow through harmonic maps with the metric evolving in the
direction of the real part of the Hopf differential.
| 0 | 0 | 1 | 0 | 0 | 0 |
Invariant measures for the actions of the modular group | In this note, we give a natural action of the modular group on the ends of the
infinite (p+1)-Cayley tree, for each prime p. We show that there is a unique
invariant probability measure for each p.
| 0 | 0 | 1 | 0 | 0 | 0 |
Cartesian Fibrations and Representability | In higher category theory, we use fibrations to model presheaves. In this
paper we introduce a new method to build such fibrations. Concretely, for
suitable reflective subcategories of simplicial spaces, we build fibrations
that model presheaves valued in that subcategory. Using this we can build
Cartesian fibrations, but we can also model presheaves valued in Segal spaces.
Additionally, using this new approach, we define representable Cartesian
fibrations, generalizing representable presheaves valued in spaces, and show
they have similar properties.
| 0 | 0 | 1 | 0 | 0 | 0 |
Signal coupling to embedded pitch adapters in silicon sensors | We have examined the effects of embedded pitch adapters on signal formation
in n-substrate silicon microstrip sensors with data from beam tests and
simulation. According to simulation, the presence of the pitch adapter metal
layer changes the electric field inside the sensor, resulting in slowed signal
formation on the nearby strips and a pick-up effect on the pitch adapter. This
can result in an inefficiency in detecting particles passing through the pitch
adapter region. All these effects have been observed in the beam test data.
| 0 | 1 | 0 | 0 | 0 | 0 |
Winds and radiation in unison: a new semi-analytic feedback model for cloud dissolution | Star clusters interact with the interstellar medium (ISM) in various ways,
most importantly in the destruction of molecular star-forming clouds, resulting
in inefficient star formation on galactic scales. On cloud scales, ionizing
radiation creates H II regions, while stellar winds and supernovae drive the
ISM into thin shells. These shells are accelerated by the combined effect of
winds, radiation pressure and supernova explosions, and slowed down by gravity.
Since radiative and mechanical feedback are highly interconnected, they must be
taken into account in a self-consistent and combined manner, including the
coupling of radiation and matter. We present a new semi-analytic
one-dimensional feedback model for isolated massive clouds ($\geq
10^5\,M_{\odot}$) to calculate shell dynamics and shell structure
simultaneously. It allows us to scan a large range of physical parameters (gas
density, star formation efficiency, metallicity) and to estimate escape
fractions of ionizing radiation $f_{\rm{esc,i}}$, the minimum star formation
efficiency $\epsilon_{\rm{min}}$ required to drive an outflow, and recollapse
time scales for clouds that are not destroyed by feedback. Our results show
that there is no simple answer to the question of what dominates cloud
dynamics, and that each feedback process significantly influences the
efficiency of the others. We find that variations in natal cloud density can
very easily explain differences between dense-bound and diffuse-open star
clusters. We also predict, as a consequence of feedback, a $4-6$ Myr age
difference for massive clusters with multiple generations.
| 0 | 1 | 0 | 0 | 0 | 0 |
Analysis-of-marginal-Tail-Means (ATM): a robust method for discrete black-box optimization | We present a new method, called Analysis-of-marginal-Tail-Means (ATM), for
effective robust optimization of discrete black-box problems. ATM has important
applications to many real-world engineering problems (e.g., manufacturing
optimization, product design, molecular engineering), where the objective to
optimize is black-box and expensive, and the design space is inherently
discrete. One weakness of existing methods is that they are not robust: these
methods perform well under certain assumptions, but yield poor results when
such assumptions (which are difficult to verify in black-box problems) are
violated. ATM addresses this via the use of marginal tail means for
optimization, which combines both rank-based and model-based methods. The
trade-off between rank- and model-based optimization is tuned by first
identifying important main effects and interactions, then finding a good
compromise which best exploits additive structure. By adaptively tuning this
trade-off from data, ATM provides improved robust optimization over existing
methods, particularly in problems with (i) a large number of factors, (ii)
unordered factors, or (iii) experimental noise. We demonstrate the
effectiveness of ATM in simulations and in two real-world engineering problems:
the first on robust parameter design of a circular piston, and the second on
product family design of a thermistor network.
| 0 | 0 | 0 | 1 | 0 | 0 |
Modular System for Shelves and Coasts (MOSSCO v1.0) - a flexible and multi-component framework for coupled coastal ocean ecosystem modelling | Shelf and coastal sea processes extend from the atmosphere through the water
column and into the sea bed. These processes are driven by physical, chemical,
and biological interactions at local scales, and they are influenced by
transport across strong spatial gradients. The linkages between domains and
many different processes are not adequately described in current model systems.
Their limited level of integration in part reflects a lack of modularity and
flexibility; this shortcoming hinders the exchange of data and model components
and has historically imposed the supremacy of specific physical driver models. We
here present the Modular System for Shelves and Coasts (MOSSCO,
this http URL), a novel domain and process coupling system
tailored, but not limited, to the coupling challenges of and applications in
the coastal ocean. MOSSCO builds on the existing coupling technology Earth
System Modeling Framework and on the Framework for Aquatic Biogeochemical
Models, thereby creating a unique level of modularity in both domain and
process coupling; the new framework adds rich metadata, flexible scheduling,
configurations that allow several tens of models to be coupled, and tested
setups for coastal coupled applications. That way, MOSSCO addresses the
technology needs of a growing marine coastal Earth System community that
encompasses very different disciplines, numerical tools, and research
questions.
| 0 | 1 | 0 | 0 | 0 | 0 |
How big was Galileo's impact? Percussion in the Sixth Day of the "Two New Sciences" | The Giornata Sesta on the Force of Percussion is a relatively little-known
chapter of Galileo's masterpiece "Discourse about Two New Sciences". It
was first published only in 1718, long after the first edition of the Two New
Sciences (1638) and Galileo's death (1642). The Giornata Sesta focuses on how
to quantify the percussion force caused by a body in movement, and describes a
very interesting experiment known as "the two-bucket experiment". In this
paper, we review this experiment reported by Galileo, develop a steady-state
theoretical model, and solve its transient form numerically; additionally, we
report the results of a simplified real analogous experiment. Finally, we
discuss the conclusions drawn by Galileo (correct, despite a probably
unnoticeable imbalance), showing that he did not report the thrust force
component in his setup, which would be fundamental for the correct
calculation of the percussion force.
| 0 | 1 | 0 | 0 | 0 | 0 |
Radar, without tears | A brief introduction to radar: principles, Doppler effect, antennas,
waveforms, power budget - and future radars. [13 pages]
| 0 | 1 | 0 | 0 | 0 | 0 |
A Multi-Stage Algorithm for Acoustic Physical Model Parameters Estimation | One of the challenges in computational acoustics is the identification of
models that can simulate and predict the physical behavior of a system
generating an acoustic signal. Whenever such models are used for commercial
applications, an additional constraint is the time-to-market, making automation
of the sound design process desirable. In previous works, a computational sound
design approach has been proposed for the parameter estimation problem
involving timbre matching by deep learning, which was applied to the synthesis
of pipe organ tones. In this work we refine previous results by introducing the
former approach in a multi-stage algorithm that also adds heuristics and a
stochastic optimization method operating on objective cost functions based on
psychoacoustics. The optimization method proves able to refine the first
estimate given by the deep learning approach and to substantially improve the
objective metrics, with the additional benefit of reducing the sound design
process time. Subjective listening tests are also conducted to gather
additional insights on the results.
| 1 | 0 | 0 | 1 | 0 | 0 |
A Warped Product Splitting Theorem Through Weak KAM Theory | In this paper, we strengthen the splitting theorem proved in [14, 15] and
provide a different approach using ideas from the weak KAM theory.
| 0 | 0 | 1 | 0 | 0 | 0 |
Learning Instance Segmentation by Interaction | We present an approach for building an active agent that learns to segment
its visual observations into individual objects by interacting with its
environment in a completely self-supervised manner. The agent uses its current
segmentation model to infer pixels that constitute objects and refines the
segmentation model by interacting with these pixels. The model learned from
over 50K interactions generalizes to novel objects and backgrounds. To deal
with noisy training signal for segmenting objects obtained by self-supervised
interactions, we propose a robust set loss. A dataset of the robot's interactions
along with a few human-labeled examples is provided as a benchmark for future
research. We test the utility of the learned segmentation model by providing
results on a downstream vision-based control task of rearranging multiple
objects into target configurations from visual inputs alone. Videos, code, and
robotic interaction dataset are available at
this https URL
| 1 | 0 | 0 | 1 | 0 | 0 |
On convergence rate of stochastic proximal point algorithm without strong convexity, smoothness or bounded gradients | A significant part of the recent learning literature on stochastic
optimization algorithms has focused on the theoretical and practical behaviour of
stochastic first order schemes under different convexity properties. Due to its
simplicity, the traditional method of choice for most supervised machine
learning problems is the stochastic gradient descent (SGD) method. Many
iteration improvements and accelerations have been added to the pure SGD in
order to boost its convergence in various (strong) convexity settings. However,
Lipschitz gradient continuity or bounded gradients assumptions are an
essential requirement for most existing stochastic first-order schemes. In this
paper, novel convergence results are presented for the stochastic proximal point
algorithm in different settings. In particular, without any strong convexity,
smoothness or bounded gradients assumptions, we show that a slightly modified
quadratic growth assumption is sufficient to guarantee an
$\mathcal{O}\left(\frac{1}{k}\right)$ convergence rate for the stochastic
proximal point algorithm, in terms of the distance to the optimal set.
Furthermore, linear convergence is obtained in the interpolation setting, when
the optimal set of the expected cost is included in the optimal sets of each
functional component.
| 1 | 0 | 0 | 1 | 0 | 0 |
Learning Texture Manifolds with the Periodic Spatial GAN | This paper introduces a novel approach to texture synthesis based on
generative adversarial networks (GAN) (Goodfellow et al., 2014). We extend the
structure of the input noise distribution by constructing tensors with
different types of dimensions. We call this technique Periodic Spatial GAN
(PSGAN). The PSGAN has several novel abilities which surpass the current state
of the art in texture synthesis. First, we can learn multiple textures from
datasets of one or more complex large images. Second, we show that the image
generation with PSGANs has properties of a texture manifold: we can smoothly
interpolate between samples in the structured noise space and generate novel
samples, which lie perceptually between the textures of the original dataset.
In addition, we can also accurately learn periodic textures. We conduct multiple
experiments which show that PSGANs can flexibly handle diverse texture and
image data sources. Our method is highly scalable and can generate output
images of arbitrarily large size.
| 1 | 0 | 0 | 1 | 0 | 0 |
Generating Representative Executions [Extended Abstract] | Analyzing the behaviour of a concurrent program is made difficult by the
number of possible executions. This problem can be alleviated by applying the
theory of Mazurkiewicz traces to focus only on the canonical representatives of
the equivalence classes of the possible executions of the program. This paper
presents a generic framework that allows one to specify the possible behaviours of
the execution environment, and generate all Foata-normal executions of a
program, for that environment, by discarding abnormal executions during the
generation phase. The key ingredient of Mazurkiewicz trace theory, the
dependency relation, is used in the framework in two roles: first, as part of
the specification of which executions are allowed at all, and then as part of
the normality checking algorithm, which is used to discard the abnormal
executions. The framework is instantiated to the relaxed memory models of the
SPARC hierarchy.
| 1 | 0 | 0 | 0 | 0 | 0 |
Extended periodic links and HOMFLYPT polynomial | Extended strongly periodic links have been introduced by Przytycki and
Sokolov as a symmetric surgery presentation of three-manifolds on which the
finite cyclic group acts without fixed points. The purpose of this paper is to
prove that the symmetry of these links is reflected by the first coefficients
of the HOMFLYPT polynomial.
| 0 | 0 | 1 | 0 | 0 | 0 |
Exploring Cross-Domain Data Dependencies for Smart Homes to Improve Energy Efficiency | Over the past decade, the idea of smart homes has been conceived as a
potential solution to counter energy crises, or at least to mitigate their
intense destructive consequences in the residential building sector.
| 1 | 0 | 0 | 0 | 0 | 0 |
Poseidon: An Efficient Communication Architecture for Distributed Deep Learning on GPU Clusters | Deep learning models can take weeks to train on a single GPU-equipped
machine, necessitating scaling out DL training to a GPU-cluster. However,
current distributed DL implementations can scale poorly due to substantial
parameter synchronization over the network, because the high throughput of GPUs
allows more data batches to be processed per unit time than CPUs, leading to
more frequent network synchronization. We present Poseidon, an efficient
communication architecture for distributed DL on GPUs. Poseidon exploits the
layered model structures in DL programs to overlap communication and
computation, reducing bursty network communication. Moreover, Poseidon uses a
hybrid communication scheme that optimizes the number of bytes required to
synchronize each layer, according to layer properties and the number of
machines. We show that Poseidon is applicable to different DL frameworks by
plugging Poseidon into Caffe and TensorFlow. We show that Poseidon enables
Caffe and TensorFlow to achieve 15.5x speed-up on 16 single-GPU machines, even
with limited bandwidth (10GbE) and the challenging VGG19-22K network for image
classification. Moreover, Poseidon-enabled TensorFlow achieves 31.5x speed-up
with 32 single-GPU machines on Inception-V3, a 50% improvement over the
open-source TensorFlow (20x speed-up).
| 1 | 0 | 0 | 1 | 0 | 0 |
A closed formula for illiquid corporate bonds and an application to the European market | We deduce a simple closed formula for illiquid corporate coupon bond prices
when liquid bonds with similar characteristics (e.g. maturity) are present in
the market for the same issuer. The key model parameter is the
time-to-liquidate a position, i.e. the time that an experienced bond trader
takes to liquidate a given position on a corporate coupon bond.
The option approach we propose for pricing bonds' illiquidity is reminiscent
of the celebrated work of Longstaff (1995) on the non-marketability of some
non-dividend-paying shares in IPOs. This approach describes a quite common
situation in the fixed income market: it is rather usual to find issuers that,
besides liquid benchmark bonds, issue some other bonds that either are placed
to a small number of investors in private placements or have a limited issue
size.
The model considers interest rate and credit spread term structures and their
dynamics. We show that illiquid bonds present an additional liquidity spread
that depends on the time-to-liquidate aside from credit and interest rate
parameters. We provide a detailed application for two issuers in the European
market.
| 0 | 0 | 0 | 0 | 0 | 1 |
Toric Codes, Multiplicative Structure and Decoding | We study long linear codes constructed from toric varieties over finite fields, their
multiplicative structure, and decoding. The main theme is the inherent
multiplicative structure on toric codes. The multiplicative structure allows
for \emph{decoding}, resembling the decoding of Reed-Solomon codes, and aligns
with decoding by error-correcting pairs. We have used the multiplicative
structure on toric codes to construct linear secret sharing schemes with
\emph{strong multiplication} via Massey's construction, generalizing the Shamir
linear secret sharing schemes constructed from Reed-Solomon codes. We have
constructed quantum error-correcting codes from toric surfaces by the
Calderbank-Shor-Steane method.
| 1 | 0 | 1 | 0 | 0 | 0 |
Outlier Detection by Consistent Data Selection Method | Often the challenge associated with tasks like fraud and spam detection [1] is
the lack of all likely patterns needed to train suitable supervised learning
models. In order to overcome this limitation, such tasks are attempted as
outlier or anomaly detection tasks. We also hypothesize that outliers have
behavioral patterns that change over time. Limited data and continuously
changing patterns make learning significantly difficult. In this work we
propose an approach that detects outliers in large data sets by relying on
data points that are consistent. The primary contribution of this work is that
it quickly helps retrieve samples for both outlier and non-outlier data
sets and is also mindful of new outlier patterns. No prior knowledge of each
set is required to extract the samples. The method consists of two phases: in
the first phase, consistent data points (non-outliers) are retrieved by an
ensemble method of unsupervised clustering techniques, and in the second phase a
one-class classifier trained on the consistent data point set is applied to
the remaining sample set to identify the outliers. The approach is tested on
three publicly available data sets and the performance scores are competitive.
| 1 | 0 | 0 | 1 | 0 | 0 |
Basic quantizations of $D=4$ Euclidean, Lorentz, Kleinian and quaternionic $\mathfrak{o}^{\star}(4)$ symmetries | We first construct the complete list of five quantum deformations of the $D=4$
complex homogeneous orthogonal Lie algebra $\mathfrak{o}(4;\mathbb{C})\cong
\mathfrak{o}(3;\mathbb{C})\oplus \mathfrak{o}(3;\mathbb{C})$, describing the
quantum rotational symmetry of four-dimensional complex space-time; in
particular, we provide the corresponding universal quantum $R$-matrices. Further,
applying four possible reality conditions, we obtain all sixteen Hopf-algebraic
quantum deformations for the real forms of $\mathfrak{o}(4;\mathbb{C})$:
Euclidean $\mathfrak{o}(4)$, Lorentz $\mathfrak{o}(3,1)$, Kleinian
$\mathfrak{o}(2,2)$ and quaternionic $\mathfrak{o}^{\star}(4)$. For
$\mathfrak{o}(3,1)$ we only recall well-known results obtained previously by
the authors, but for other real Lie algebras (Euclidean, Kleinian,
quaternionic) as well as for the complex Lie algebra
$\mathfrak{o}(4;\mathbb{C})$ we present new results.
| 0 | 0 | 1 | 0 | 0 | 0 |
Supervising Unsupervised Learning with Evolutionary Algorithm in Deep Neural Network | A method to control the results of gradient descent unsupervised learning in a
deep neural network by using an evolutionary algorithm is proposed. To process
crossover of unsupervisedly trained models, the algorithm evaluates the pointwise
fitness of individual nodes in the neural network. Labeled training data are
randomly sampled, and the breeding process selects nodes by calculating the
degree of their consistency on different sets of sampled data. This method
supervises unsupervised training by an evolutionary process. We also introduce a
modified Restricted Boltzmann Machine which contains a repulsive force among
nodes in a neural network; it helps isolate network nodes from one another to
avoid accidental degeneration of nodes by the evolutionary process. These new
methods are applied to a document classification problem and yield better
accuracy than a traditional fully supervised classifier implemented with a
linear regression algorithm.
| 0 | 0 | 0 | 1 | 0 | 0 |
Inductive Representation Learning in Large Attributed Graphs | Graphs (networks) are ubiquitous and allow us to model entities (nodes) and
the dependencies (edges) between them. Learning a useful feature representation
from graph data lies at the heart and success of many machine learning tasks
such as classification, anomaly detection, link prediction, among many others.
Many existing techniques use random walks as a basis for learning features or
estimating the parameters of a graph model for a downstream prediction task.
Examples include recent node embedding methods such as DeepWalk, node2vec, as
well as graph-based deep learning algorithms. However, the simple random walk
used by these methods is fundamentally tied to the identity of the node. This
has three main disadvantages. First, these approaches are inherently
transductive and do not generalize to unseen nodes and other graphs. Second,
they are not space-efficient as a feature vector is learned for each node which
is impractical for large graphs. Third, most of these approaches lack support
for attributed graphs.
To make these methods more generally applicable, we propose a framework for
inductive network representation learning based on the notion of attributed
random walk that is not tied to node identity and is instead based on learning
a function $\Phi : \mathbf{x} \rightarrow w$ that maps a node attribute
vector $\mathbf{x}$ to a type $w$. This framework serves as a basis for
generalizing existing methods such as DeepWalk, node2vec, and many other
previous methods that leverage traditional random walks.
| 1 | 0 | 0 | 1 | 0 | 0 |
Evidence for Two Hot Jupiter Formation Paths | Disk migration and high-eccentricity migration are two well-studied theories
to explain the formation of hot Jupiters. The former predicts that these
planets can migrate up until the planet-star Roche separation ($a_{Roche}$) and
the latter predicts they will tidally circularize at a minimum distance of
2$a_{Roche}$. Given that long-running radial velocity and transit surveys have
identified a couple hundred hot Jupiters to date, we can revisit the classic
question of hot Jupiter formation in a data-driven manner. We approach this
problem using data from several exoplanet surveys (radial velocity, Kepler,
HAT, and WASP) allowing for either a single population or a mixture of
populations associated with these formation channels, and applying a
hierarchical Bayesian mixture model of truncated power laws of the form
$x^{\gamma-1}$ to constrain the population-level parameters of interest (e.g.,
location of inner edges, $\gamma$, mixture fractions). Within the limitations
of our chosen models, we find the current radial velocity and Kepler sample of
hot Jupiters can be well explained with a single truncated power law
distribution with a lower cutoff near 2$a_{Roche}$, a result that still holds
after a decade, and $\gamma=-0.51^{+0.19}_{-0.20}$. However, the HAT and WASP
data show evidence for multiple populations (Bayes factor $\approx 10^{21}$).
We find that $15^{+9}_{-6}\%$ reside in a component consistent with disk
migration ($\gamma=-0.04^{+0.53}_{-1.27}$) and $85^{+6}_{-9}\%$ in one
consistent with high-eccentricity migration ($\gamma=-1.38^{+0.32}_{-0.47}$).
We find no immediately strong connections with some observed host star
properties and speculate on how future exoplanet surveys could improve upon hot
Jupiter population inference.
| 0 | 1 | 0 | 0 | 0 | 0 |
High-resolution Spectroscopy and Spectropolarimetry of Selected Delta Scuti Pulsating Variables | The combination of photometry, spectroscopy and spectropolarimetry of
chemically peculiar stars often aims to study complex physical phenomena
such as stellar pulsation, chemical inhomogeneity, magnetic fields and their
interplay with the stellar atmosphere and circumstellar environment. The prime
objective of the present study is to determine the atmospheric parameters of a
set of Am stars to understand their evolutionary status. Atmospheric abundances
and basic parameters are determined using a full spectrum fitting technique, by
comparing the high-resolution spectra to synthetic spectra. To determine the
evolutionary status, we derive the effective temperature and luminosity from
different methods and compare them with the literature. The location of these
stars in the H-R diagram demonstrates that all the sample stars have evolved from
the Zero-Age Main Sequence towards the Terminal-Age Main Sequence and occupy the
region of the $\delta$ Sct instability strip. The abundance analysis shows that
the light elements, e.g. Ca and Sc, are underabundant, while heavier elements
such as Ba, Ce etc. are overabundant; these chemical properties are typical of
Am stars. The results obtained from the spectropolarimetric analysis show that
the longitudinal magnetic fields in all the studied stars are negligible, which
gives further support to their Am class of peculiarity.
| 0 | 1 | 0 | 0 | 0 | 0 |
Majority and Minority Voted Redundancy for Safety-Critical Applications | A new majority and minority voted redundancy (MMR) scheme is proposed that
can provide the same degree of fault tolerance as N-modular redundancy (NMR)
but with fewer function units and a less sophisticated voting logic. Example
NMR and MMR circuits were implemented using a 32/28nm CMOS process and
compared. The results show that MMR circuits dissipate less power, occupy less
area, and incur less critical path delay than the corresponding NMR
circuits while providing the same degree of fault tolerance. Hence MMR is a
promising alternative to the NMR to efficiently implement high levels of
redundancy in safety-critical applications.
| 1 | 0 | 0 | 0 | 0 | 0 |
Tight Analysis for the 3-Majority Consensus Dynamics | We present a tight analysis for the well-studied randomized 3-majority
dynamics of stabilizing consensus, hence answering the main open question of
Becchetti et al. [SODA'16].
Consider a distributed system of n nodes, each initially holding an opinion
in {1, 2, ..., k}. The system should converge to a setting where all
(non-corrupted) nodes hold the same opinion. This consensus opinion should be
\emph{valid}, meaning that it should be among the initially supported opinions,
and the (fast) convergence should happen even in the presence of a malicious
adversary who can corrupt a bounded number of nodes per round and in particular
modify their opinions. A well-studied distributed algorithm for this problem is
the 3-majority dynamics, which works as follows: per round, each node gathers
three opinions --- say, its own and those of two other nodes sampled at
random --- and then it sets its opinion equal to the majority of this set; ties
are broken arbitrarily, e.g., towards the node's own opinion.
Becchetti et al. [SODA'16] showed that the 3-majority dynamics converges to
consensus in O((k^2\sqrt{\log n} + k\log n)(k+\log n)) rounds, even in the
presence of a limited adversary. We prove that, even with a stronger adversary,
the convergence happens within O(k\log n) rounds. This bound is known to be
optimal.
| 1 | 0 | 0 | 0 | 0 | 0 |
Large-scale chromosome folding versus genomic DNA sequences: A discrete double Fourier transform technique | The use of state-of-the-art techniques combining imaging methods and
high-throughput genomic mapping tools has led to significant progress in
detailing the chromosome architecture of various organisms. However, a gap still
remains between the rapidly growing structural data on the chromosome folding
and the large-scale genome organization. Could a part of information on the
chromosome folding be obtained directly from underlying genomic DNA sequences
abundantly stored in databanks? To answer this question, we developed an
original discrete double Fourier transform (DDFT) technique. DDFT serves to
detect large-scale genome regularities associated with domains/units at
the different levels of hierarchical chromosome folding. The method is
versatile and can be applied to both genomic DNA sequences and corresponding
physico-chemical parameters such as base-pairing free energy. The latter
characteristic is closely related to replication and transcription and can
also be used to assess temperature or supercoiling effects on
chromosome folding. We tested the method on the genome of Escherichia coli K-12
and found good correspondence with the annotated domains/units established
experimentally. As a brief illustration of further abilities of DDFT, we also
include a study of the large-scale genome organization of bacteriophage PHIX174
and the bacterium Caulobacter crescentus. The combined experimental, modeling, and
bioinformatic DDFT analysis should yield more complete knowledge on the
chromosome architecture and genome organization.
| 0 | 1 | 0 | 0 | 0 | 0 |
Superlinear scaling in the urban system of England and Wales. A comparison with US cities | According to the theory of urban scaling, urban indicators scale with city
size in a predictable fashion. In particular, indicators of social and economic
productivity are expected to have a superlinear relation. This behavior was
verified for many urban systems, but recent findings suggest that this pattern
may not be valid for England and Wales (E&W), where income has a linear
relation with city size. This finding raises the question of whether the cities
of E&W exhibit any superlinear relation with respect to quantities such as the
level of education and occupational groups. In this paper, we evaluate the
scaling of educational and occupational groups of E&W to see if we can detect
superlinear relations in the number of educated and better-paid persons. As E&W
may be unique in its linear scaling of income, we complement our analysis by
comparing it to the urban system of the United States (US), a country for which
superlinear scaling of income has already been demonstrated. To make the two
urban systems comparable, we define the urban systems of both countries using
the same method and test the sensitivity of our results to changes in the
boundaries of cities. We find that cities of E&W exhibit patterns of
superlinear scaling with respect to education and certain categories of
better-paid occupations. However, the tendency of such groups to have
superlinear scaling seems to be more consistent in the US. We show that while
the educational and occupational distributions of US cities can partly explain
the superlinear scaling of earnings, the distribution leads to a linear scaling
of earnings in E&W.
| 0 | 1 | 0 | 0 | 0 | 0 |
Extensile actomyosin? | Living cells move thanks to assemblies of actin filaments and myosin motors
that range from very organized striated muscle tissue to disordered
intracellular bundles. The mechanisms powering these disordered structures are
debated, and all models studied so far predict that they are contractile. We
reexamine this prediction through a theoretical treatment of the interplay of
three well-characterized internal dynamical processes in actomyosin bundles:
actin treadmilling, the attachment-detachment dynamics of myosin and that of
crosslinking proteins. We show that these processes enable extensive control
of the bundle's active mechanics, including reversals of the filaments'
apparent velocities and the possibility of generating extension instead of
contraction. These effects offer a new perspective on well-studied in vivo
systems, as well as a robust criterion to experimentally elucidate the
underpinnings of actomyosin activity.
| 0 | 1 | 0 | 0 | 0 | 0 |
Towards Deep Learning Models for Psychological State Prediction using Smartphone Data: Challenges and Opportunities | There is an increasing interest in exploiting mobile sensing technologies and
machine learning techniques for mental health monitoring and intervention.
Researchers have effectively used contextual information, such as mobility,
communication and mobile phone usage patterns for quantifying individuals' mood
and wellbeing. In this paper, we investigate the effectiveness of neural
network models for predicting users' level of stress by using the location
information collected by smartphones. We characterize the mobility patterns of
individuals using the GPS metrics presented in the literature and employ these
metrics as input to the network. We evaluate our approach on the open-source
StudentLife dataset. Moreover, we discuss the challenges and trade-offs
involved in building machine learning models for digital mental health and
highlight potential future work in this direction.
| 1 | 0 | 0 | 1 | 0 | 0 |
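Mobility features of the kind fed into the network above can be computed directly from GPS traces. This is a minimal sketch of two common metrics (location variance and radius of gyration), assuming planar coordinates and equally weighted points; the actual feature set used with StudentLife data is richer:

```python
import math

def centroid(points):
    """Mean (x, y) of a list of planar coordinates."""
    n = len(points)
    return (sum(x for x, _ in points) / n, sum(y for _, y in points) / n)

def location_variance(points):
    """Sum of the coordinate variances around the centroid."""
    mx, my = centroid(points)
    return sum((x - mx) ** 2 + (y - my) ** 2 for x, y in points) / len(points)

def radius_of_gyration(points):
    """Root-mean-square distance of visited points from their centroid."""
    return math.sqrt(location_variance(points))
```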
Direct Multitype Cardiac Indices Estimation via Joint Representation and Regression Learning | Cardiac indices estimation is of great importance during identification and
diagnosis of cardiac disease in clinical routine. However, estimation of
multitype cardiac indices with consistently reliable and high accuracy is still
a great challenge due to the high variability of cardiac structures and
complexity of temporal dynamics in cardiac MR sequences. While efforts have
been devoted to cardiac volume estimation through feature engineering
followed by an independent regression model, these methods suffer from
vulnerable feature representations and incompatible regression models. In this
paper, we propose a semi-automated method for multitype cardiac indices
estimation. After manual labelling of two landmarks for ROI cropping, an
integrated deep neural network Indices-Net is designed to jointly learn the
representation and regression models. It comprises two tightly-coupled
networks: a deep convolution autoencoder (DCAE) for cardiac image
representation, and a multiple output convolution neural network (CNN) for
indices regression. Joint learning of the two networks effectively enhances the
expressiveness of image representation with respect to cardiac indices, and the
compatibility between image representation and indices regression, thus leading
to accurate and reliable estimations for all the cardiac indices.
When applied with five-fold cross validation on MR images of 145 subjects,
Indices-Net achieves consistently low estimation error for LV wall thicknesses
(1.44$\pm$0.71mm) and areas of cavity and myocardium (204$\pm$133mm$^2$). It
outperforms, with significant error reductions, segmentation method (55.1% and
17.4%) and two-phase direct volume-only methods (12.7% and 14.6%) for wall
thicknesses and areas, respectively. These advantages endow the proposed method
with great potential in clinical cardiac function assessment.
| 1 | 0 | 0 | 0 | 0 | 0 |
Noncommutative hyperbolic metrics | We characterize certain noncommutative domains in terms of noncommutative
holomorphic equivalence via a pseudometric that we define in purely algebraic
terms. We prove some properties of this pseudometric and provide an application
to free probability.
| 0 | 0 | 1 | 0 | 0 | 0 |
Interpretable Low-Dimensional Regression via Data-Adaptive Smoothing | We consider the problem of estimating a regression function in the common
situation where the number of features is small, where interpretability of the
model is a high priority, and where simple linear or additive models fail to
provide adequate performance. To address this problem, we present Maximum
Variance Total Variation denoising (MVTV), an approach that is conceptually
related both to CART and to the more recent CRISP algorithm, a state-of-the-art
alternative method for interpretable nonlinear regression. MVTV divides the
feature space into blocks of constant value and fits the value of all blocks
jointly via a convex optimization routine. Our method is fully data-adaptive,
in that it incorporates highly robust routines for tuning all hyperparameters
automatically. We compare our approach against CART and CRISP via both a
complexity-accuracy tradeoff metric and a human study, demonstrating that
MVTV is a more powerful and interpretable method.
| 0 | 0 | 0 | 1 | 0 | 0 |
Basin stability for chimera states | Chimera states, namely complex spatiotemporal patterns that consist of
coexisting domains of spatially coherent and incoherent dynamics, are
investigated in a network of coupled identical oscillators. These intriguing
spatiotemporal patterns were first reported in nonlocally coupled phase
oscillators, and it was shown that such mixed type behavior occurs only for
specific initial conditions in nonlocally and globally coupled networks. The
influence of initial conditions on chimera states has remained a fundamental
problem since their discovery. In this report, we investigate the robustness of
chimera states, together with incoherent and coherent states, as a function of
the initial conditions. For this, we use the basin stability method, which is
related to the volume of the basin of attraction, and we consider nonlocally
and globally coupled time-delayed Mackey-Glass oscillators as examples.
Previously, it was shown that the existence of chimera states can be
characterized by mean phase velocity and a statistical measure, such as the
strength of incoherence, by using well prepared initial conditions. Here we
show further how the coexistence of different dynamical states can be
identified and quantified by means of the basin stability measure over a wide
range of the parameter space.
| 0 | 1 | 0 | 0 | 0 | 0 |
On the Usage of Databases of Educational Materials in Macedonian Education | Technologies have become an important part of our lives. The steps for
introducing ICTs in education vary from country to country. The Republic of
Macedonia has invested heavily in the installation of hardware and software in
education and in teacher training. This research aimed to determine the state
of usage of databases of digital educational materials and to define
recommendations for future improvements. Teachers from urban schools were
interviewed with a questionnaire. The findings are several: only part of the
interviewed teachers had experience with databases of educational materials;
all teachers still need capacity-building activities focusing exactly on the
use and benefits of databases of educational materials; capacity-building
materials should preferably be in the Macedonian language; and technical
support and upgrading of software and materials should be performed on a
regular basis. Most of the findings can be applied at both national and
international levels; with all this implemented, the application of ICT in
education will have a much bigger positive impact.
| 1 | 0 | 0 | 0 | 0 | 0 |
Deep Neural Linear Bandits: Overcoming Catastrophic Forgetting through Likelihood Matching | We study the neural-linear bandit model for solving sequential
decision-making problems with high dimensional side information. Neural-linear
bandits leverage the representation power of deep neural networks and combine
it with efficient exploration mechanisms, designed for linear contextual
bandits, on top of the last hidden layer. Since the representation is being
optimized during learning, information regarding exploration with "old"
features is lost. Here, we propose the first limited memory neural-linear
bandit that is resilient to this phenomenon, which we term catastrophic
forgetting. We evaluate our method on a variety of real-world data sets,
including regression, classification, and sentiment analysis, and observe that
our algorithm is resilient to catastrophic forgetting and achieves superior
performance.
| 1 | 0 | 0 | 1 | 0 | 0 |
Breaking mean-motion resonances during Type I planet migration | We present two-dimensional hydrodynamical simulations of pairs of planets
migrating simultaneously in the Type I regime in a protoplanetary disc.
Convergent migration naturally leads to the trapping of these planets in
mean-motion resonances. Once in resonance the planets' eccentricity grows
rapidly, and disc-planet torques cause the planets to escape resonance on a
time-scale of a few hundred orbits. The effect is more pronounced in highly
viscous discs, but operates efficiently even in inviscid discs. We attribute
this resonance-breaking to overstable librations driven by moderate
eccentricity damping, but find that this mechanism operates differently in
hydrodynamic simulations than in previous analytic calculations. Planets
escaping resonance in this manner can potentially explain the observed paucity
of resonances in Kepler multi-transiting systems, and we suggest that
simultaneous disc-driven migration remains the most plausible means of
assembling tightly-packed planetary systems.
| 0 | 1 | 0 | 0 | 0 | 0 |
The Stretch to Stray on Time: Resonant Length of Random Walks in a Transient | First-passage times in random walks have a vast number of diverse
applications in physics, chemistry, biology, and finance. In general,
environmental conditions for a stochastic process are not constant on the time
scale of the average first-passage time, or control might be applied to reduce
noise. We investigate moments of the first-passage time distribution under a
transient describing relaxation of environmental conditions. We solve the
Laplace-transformed (generalized) master equation analytically using a novel
method that is applicable to general state schemes. The first-passage time from
one end to the other of a linear chain of states is our application for the
solutions. The dependence of its average on the relaxation rate obeys a power
law for slow transients. The exponent $\nu$ depends on the chain length $N$
like $\nu=-N/(N+1)$ to leading order. Slow transients substantially reduce the
noise of first-passage times expressed as the coefficient of variation (CV),
even if the average first-passage time is much longer than the transient. The
CV has a pronounced minimum for some lengths, which we call resonant lengths.
These results also suggest a simple and efficient noise control strategy, and
are closely related to the timing of repetitive excitations, coherence
resonance and information transmission by noisy excitable systems. A resonant
number of steps from the inhibited state to the excitation threshold and slow
recovery from negative feedback provide optimal timing noise reduction and
information transmission.
| 0 | 0 | 0 | 0 | 1 | 1 |
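For the simplest case mentioned above (a linear chain of irreversible steps with constant rate and no transient) the first-passage time is Erlang distributed, so its mean and coefficient of variation are available in closed form. This is a minimal sketch of that baseline; the paper's contribution concerns the much harder time-dependent case:

```python
import math

def linear_chain_fpt_stats(n_steps, rate):
    """Mean and coefficient of variation (CV) of the first-passage time through
    a chain of n_steps irreversible transitions with constant rate: the FPT is
    a sum of i.i.d. exponential waiting times, i.e. Erlang(n_steps, rate)."""
    mean = n_steps / rate
    cv = 1.0 / math.sqrt(n_steps)  # CV of an Erlang depends only on the step count
    return mean, cv
```

Note that the CV falls like $1/\sqrt{N}$ with the chain length, which is the baseline against which a transient can further sharpen timing.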
Hierarchical Model for Long-term Video Prediction | Video prediction has been an active topic of research in the past few years.
Many algorithms focus on pixel-level predictions, which generate results that
blur and disintegrate within a few frames. In this project, we use a
hierarchical approach for long-term video prediction. We aim at estimating
high-level structure in the input frame first, then predict how that structure
grows in the future. Finally, we use an image analogy network to recover a
realistic image from the predicted structure. Our method is largely adapted
from the work of Villegas et al. The method is built with a combination of
LSTMs and analogy-based convolutional auto-encoder networks. Additionally, in
order to generate more realistic frame predictions, we also adopt adversarial
loss. We evaluate our method on the Penn Action dataset, and demonstrate good
results on high-level long-term structure prediction.
| 1 | 0 | 0 | 0 | 0 | 0 |
Classical Music Clustering Based on Acoustic Features | In this paper we cluster 330 classical music pieces collected from MusicNet
database based on their musical note sequences. We use shingling and chord
trajectory matrices to create a signature for each music piece and perform
spectral clustering to find the clusters. At different resolutions, the
output clusters distinctly indicate compositions from different classical
music eras and the different composing styles of the musicians.
| 1 | 0 | 0 | 0 | 0 | 0 |
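The shingling step from the abstract above can be illustrated with plain note sequences. This is a minimal sketch, assuming k-grams of note names compared by Jaccard similarity; the paper's signatures additionally use chord trajectory matrices:

```python
def shingles(notes, k=3):
    """All contiguous k-grams (shingles) of a note sequence, as a set."""
    return {tuple(notes[i:i + k]) for i in range(len(notes) - k + 1)}

def jaccard(a, b):
    """Jaccard similarity between two shingle sets."""
    return len(a & b) / len(a | b)
```

A similarity matrix built from pairwise Jaccard scores can then be handed to any spectral clustering routine.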
Many-Body-Localization: Strong Disorder perturbative approach for the Local Integrals of Motion | For random quantum spin models, the strong disorder perturbative expansion of
the Local Integrals of Motion (LIOMs) around the real-spin operators is
revisited. The emphasis is on the links with other properties of the
Many-Body-Localized phase, in particular the memory in the dynamics of the
local magnetizations and the statistics of matrix elements of local operators
in the eigenstate basis. Finally, this approach is applied to analyze the
Many-Body-Localization transition in a toy model studied previously from the
point of view of the entanglement entropy.
| 0 | 1 | 0 | 0 | 0 | 0 |
Semi-supervised Learning for Discrete Choice Models | We introduce a semi-supervised discrete choice model to calibrate discrete
choice models when relatively few requests have both choice sets and stated
preferences but the majority only have the choice sets. Two classic
semi-supervised learning algorithms, the expectation maximization algorithm and
the cluster-and-label algorithm, have been adapted to our choice modeling
problem setting. We also develop two new algorithms based on the
cluster-and-label algorithm. The new algorithms use the Bayesian Information
Criterion to evaluate a clustering setting to automatically adjust the number
of clusters. Two computational studies including a hotel booking case and a
large-scale airline itinerary shopping case are presented to evaluate the
prediction accuracy and computational effort of the proposed algorithms.
Algorithmic recommendations are rendered under various scenarios.
| 0 | 0 | 0 | 1 | 0 | 0 |
An Analytic Criterion for Turbulent Disruption of Planetary Resonances | Mean motion commensurabilities in multi-planet systems are an expected
outcome of protoplanetary disk-driven migration, and their relative dearth in
the observational data presents an important challenge to current models of
planet formation and dynamical evolution. One natural mechanism that can lead
to the dissolution of commensurabilities is stochastic orbital forcing, induced
by turbulent density fluctuations within the nebula. While this process is
qualitatively promising, the conditions under which mean motion resonances can
be broken are not well understood. In this work, we derive a simple analytic
criterion that elucidates the relationship among the physical parameters of the
system, and find the conditions necessary to drive planets out of resonance.
Subsequently, we confirm our findings with numerical integrations carried out
in the perturbative regime, as well as direct N-body simulations. Our
calculations suggest that turbulent resonance disruption depends most
sensitively on the planet-star mass ratio. Specifically, for a disk with
properties comparable to the early solar nebula with $\alpha=0.01$, only planet
pairs with cumulative mass ratios smaller than
$(m_1+m_2)/M\lesssim10^{-5}\sim3M_{\oplus}/M_{\odot}$ are susceptible to
breaking resonance at semi-major axis of order $a\sim0.1\,$AU. Although
turbulence can sometimes compromise resonant pairs, an additional mechanism
(such as suppression of resonance capture probability through disk
eccentricity) is required to adequately explain the largely non-resonant
orbital architectures of extrasolar planetary systems.
| 0 | 1 | 1 | 0 | 0 | 0 |
Electronic structure of transferred graphene/h-BN van der Waals heterostructures with nonzero stacking angles by nano-ARPES | In van der Waals heterostructures, the periodic potential from the Moiré
superlattice can be used as a control knob to modulate the electronic structure
of the constituent materials. Here we present a nanoscale angle-resolved
photoemission spectroscopy (Nano-ARPES) study of transferred graphene/h-BN
heterostructures with two different stacking angles of 2.4° and 4.3°
respectively. Our measurements reveal six replicas of graphene Dirac cones at
the superlattice Brillouin zone (SBZ) centers. The size of the SBZ and its
relative rotation angle to the graphene BZ are in good agreement with Moiré
superlattice period extracted from atomic force microscopy (AFM) measurements.
Comparison to epitaxial graphene/h-BN with a 0° stacking angle suggests
that the interaction between graphene and h-BN decreases with increasing
stacking angle.
| 0 | 1 | 0 | 0 | 0 | 0 |
Automatic sequences and generalised polynomials | We conjecture that bounded generalised polynomial functions cannot be
generated by finite automata, except for the trivial case when they are
ultimately periodic.
Using methods from ergodic theory, we are able to partially resolve this
conjecture, proving that any hypothetical counterexample is periodic away from
a very sparse and structured set.
In particular, we show that for a polynomial $p(n)$ with at least one
irrational coefficient (except for the constant one) and integer $m\geq 2$, the
sequence $\lfloor p(n) \rfloor \bmod{m}$ is never automatic.
We also prove that the conjecture is equivalent to the claim that the set of
powers of an integer $k\geq 2$ is not given by a generalised polynomial.
| 1 | 0 | 1 | 0 | 0 | 0 |
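The concrete claim above can be explored numerically for small n. This is a minimal sketch computing the prototypical sequence $\lfloor n\sqrt{2} \rfloor \bmod 2$, which the result shows is never automatic; floating-point evaluation of $\sqrt{2}$ is safe here only for small n:

```python
import math

def floor_poly_mod(n_terms, alpha=math.sqrt(2), m=2):
    """First n_terms of the sequence floor(alpha * n) mod m."""
    return [math.floor(alpha * n) % m for n in range(n_terms)]
```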
Robust Regulation of Infinite-Dimensional Port-Hamiltonian Systems | We will give general sufficient conditions under which a controller achieves
robust regulation for a boundary control and observation system. Utilizing
these conditions we construct a minimal order robust controller for an
arbitrary order impedance passive linear port-Hamiltonian system. The
theoretical results are illustrated with a numerical example where we implement
a controller for a one-dimensional Euler-Bernoulli beam with boundary controls
and boundary observations.
| 0 | 0 | 1 | 0 | 0 | 0 |
Existence of travelling waves and high activation energy limits for a one-dimensional thermo-diffusive lean spray flame model | We provide a mathematical analysis of a thermo-diffusive combustion model of
lean spray flames, for which we prove the existence of travelling waves. In the
high activation energy singular limit we show the existence of two distinct
combustion regimes with a sharp transition -- the diffusion limited regime and
the vaporisation controlled regime. The latter is specific to spray flames with
slow enough vaporisation. We give a complete characterisation of these regimes,
including explicit velocities, profiles, and upper estimate of the size of the
internal combustion layer. Our model is on the one hand simple enough to allow
for explicit asymptotic limits and on the other hand rich enough to capture
some particular aspects of spray combustion. Finally, we briefly discuss the
cases where the vaporisation is infinitely fast, or where the spray is
polydisperse.
| 0 | 0 | 1 | 0 | 0 | 0 |
The Motion of Small Bodies in Space-time | We consider the motion of small bodies in general relativity. The key result
captures a sense in which such bodies follow timelike geodesics (or, in the
case of charged bodies, Lorentz-force curves). This result clarifies the
relationship between approaches that model such bodies as distributions
supported on a curve, and those that employ smooth fields supported in small
neighborhoods of a curve. This result also applies to "bodies" constructed from
wave packets of Maxwell or Klein-Gordon fields. There follows a simple and
precise formulation of the optical limit for Maxwell fields.
| 0 | 1 | 1 | 0 | 0 | 0 |
Extracting urban impervious surface from GF-1 imagery using one-class classifiers | Impervious surface area is a direct consequence of urbanization and plays
an important role in urban planning and environmental management.
With the rapid technical development of remote sensing, monitoring urban
impervious surface via high spatial resolution (HSR) images has attracted
unprecedented attention recently. Traditional multi-class models are
inefficient for impervious surface extraction because they require exhaustively
labeling all needed and unneeded classes that occur in the image. Therefore, we
need to find a reliable one-class model to classify one specific land cover
type without labeling other classes. In this study, we investigate several
one-class classifiers, such as Presence and Background Learning (PBL), Positive
Unlabeled Learning (PUL), OCSVM, BSVM and MAXENT, to extract urban impervious
surface area using high spatial resolution imagery of GF-1, China's new
generation of high spatial remote sensing satellite, and evaluate the
classification accuracy based on artificial interpretation results. Compared to
traditional multi-classes classifiers (ANN and SVM), the experimental results
indicate that PBL and PUL provide higher classification accuracy, which is
similar to the accuracy provided by the ANN model. Meanwhile, PBL and PUL
outperform the OCSVM, BSVM, MAXENT and SVM models. Hence, one-class
classifiers only need a small set of samples of the specific class to train
models without losing predictive accuracy, and deserve more attention for
extracting urban impervious surface or any other single land cover type.
| 1 | 0 | 0 | 0 | 0 | 0 |
Positive and nodal single-layered solutions to supercritical elliptic problems above the higher critical exponents | We study the problem
\[ -\Delta v+\lambda v=|v|^{p-2}v \text{ in } \Omega, \qquad v=0 \text{ on } \partial\Omega, \]
for $\lambda\in\mathbb{R}$ and supercritical exponents $p$, in domains of the form
\[ \Omega:=\{(y,z)\in\mathbb{R}^{N-m-1}\times\mathbb{R}^{m+1}:(y,|z|)\in\Theta\}, \]
where $m\geq1$, $N-m\geq3$, and $\Theta$ is a bounded domain in $\mathbb{R}^{N-m}$
whose closure is contained in $\mathbb{R}^{N-m-1}\times(0,\infty)$. Under some
symmetry assumptions on $\Theta$, we show that this problem has infinitely many
solutions for every $\lambda$ in an interval which contains $[0,\infty)$ and
every $p>2$ up to some number which is larger than the $(m+1)$-st critical
exponent $2_{N,m}^{\ast}:=\frac{2(N-m)}{N-m-2}$. We also exhibit domains with a
shrinking hole, in which there are a positive and a nodal solution which
concentrate on a sphere, developing a single layer that blows up at an
$m$-dimensional sphere contained in the boundary of $\Omega$, as the hole
shrinks and $p\rightarrow 2_{N,m}^{\ast}$ from above. The limit profile of the
positive solution, in the transversal direction to the sphere of concentration,
is a rescaling of the standard bubble, whereas that of the nodal solution is a
rescaling of a nonradial sign-changing solution to the problem
\[ -\Delta u=|u|^{2_{n}^{\ast}-2}u, \qquad u\in D^{1,2}(\mathbb{R}^{n}), \]
where $2_{n}^{\ast}:=\frac{2n}{n-2}$ is the critical exponent in dimension $n$.
| 0 | 0 | 1 | 0 | 0 | 0 |
Robust, Deep and Inductive Anomaly Detection | PCA is a classical statistical technique whose simplicity and maturity have
seen it find widespread use as an anomaly detection technique. However, it is
limited in this regard by being sensitive to gross perturbations of the input,
and by seeking a linear subspace that captures normal behaviour. The first
issue has been dealt with by robust PCA, a variant of PCA that explicitly
allows for some data points to be arbitrarily corrupted; however, this does not
resolve the second issue, and indeed introduces the new issue that one can no
longer inductively find anomalies on a test set. This paper addresses both
issues in a single model, the robust autoencoder. This method learns a
nonlinear subspace that captures the majority of data points, while allowing
for some data to have arbitrary corruption. The model is simple to train and
leverages recent advances in the optimisation of deep neural networks.
Experiments on a range of real-world datasets highlight the model's
effectiveness.
| 1 | 0 | 0 | 1 | 0 | 0 |
Understanding Organizational Approach towards End User Privacy | End user privacy is a critical concern for all organizations that collect,
process and store user data as a part of their business. Privacy concerned
users, regulatory bodies and privacy experts continuously demand organizations
provide users with privacy protection. Current research lacks an understanding
of organizational characteristics that affect an organization's motivation
towards user privacy. This has resulted in a "one solution fits all" approach,
which is incapable of providing sustainable solutions for organizational issues
related to user privacy. In this work, we have empirically investigated 40
diverse organizations on their motivations and approaches towards user privacy.
Resources such as newspaper articles, privacy policies and internal privacy
reports that display information about organizational motivations and
approaches towards user privacy were used in the study. We could observe
organizations to have two primary motivations to provide end users with privacy
as voluntary driven inherent motivation, and risk driven compliance motivation.
Building up on these findings we developed a taxonomy of organizational privacy
approaches and further explored the taxonomy through limited exclusive
interviews. With his work, we encourage authorities and scholars to understand
organizational characteristics that define an organization's approach towards
privacy, in order to effectively communicate regulations that enforce and
encourage organizations to consider privacy within their business practices.
| 1 | 0 | 0 | 0 | 0 | 0 |
Baryonic impact on the dark matter orbital properties of Milky Way-sized haloes | We study the orbital properties of dark matter haloes by combining a spectral
method and cosmological simulations of Milky Way-sized galaxies. We compare the
dynamics and orbits of individual dark matter particles from both hydrodynamic
and $N$-body simulations, and find that the fraction of box, tube and resonant
orbits of the dark matter halo decreases significantly due to the effects of
baryons. In particular, the central region of the dark matter halo in the
hydrodynamic simulation is dominated by regular, short-axis tube orbits, in
contrast to the chaotic, box and thin orbits dominant in the $N$-body run. This
leads to a more spherical dark matter halo in the hydrodynamic run compared to
a prolate one as commonly seen in the $N$-body simulations. Furthermore, by
using a kernel based density estimator, we compare the coarse-grained
phase-space densities of dark matter haloes in both simulations and find that
the density is lower by $\sim0.5$ dex in the hydrodynamic run due to changes in the
angular momentum distribution, which indicates that the baryonic process that
affects the dark matter is irreversible. Our results imply that baryons play an
important role in determining the shape, kinematics and phase-space density of
dark matter haloes in galaxies.
| 0 | 1 | 0 | 0 | 0 | 0 |
Extreme Value Analysis Without the Largest Values: What Can Be Done? | In this paper we are concerned with the analysis of heavy-tailed data when a
portion of the extreme values is unavailable. This research was motivated by an
analysis of the degree distributions in a large social network. The degree
distributions of such networks tend to have power law behavior in the tails. We
focus on the Hill estimator, which plays a starring role in heavy-tailed
modeling. The Hill estimator for this data exhibited a smooth and increasing
"sample path" as a function of the number of upper order statistics used in
constructing the estimator. This behavior became more apparent as we
artificially removed more of the upper order statistics. Building on this
observation we introduce a new version of the Hill estimator. It is a function
of the number of the upper order statistics used in the estimation, but also
depends on the number of unavailable extreme values. We establish functional
convergence of the normalized Hill estimator to a Gaussian process. An
estimation procedure is developed based on the limit theory to estimate the
number of missing extremes and extreme value parameters including the tail
index and the bias of Hill's estimator. We illustrate how this approach works
in both simulations and real data examples.
| 0 | 0 | 1 | 0 | 0 | 0 |
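The classical Hill estimator at the heart of this abstract admits a compact sketch (the paper's modified, missing-extremes version is not reproduced here; the function name and sample are illustrative):

```python
import numpy as np

def hill_estimator(data, k):
    """Classical Hill estimator of the tail index using the k largest
    order statistics of `data` (assumed heavy-tailed)."""
    x = np.sort(data)[::-1]                  # descending order statistics
    logs = np.log(x[:k]) - np.log(x[k])      # log-spacings above x_(k+1)
    return 1.0 / logs.mean()                 # tail-index estimate

# Pareto sample with tail index 2: the Hill plot over k should hover near 2.
rng = np.random.default_rng(0)
sample = rng.pareto(2.0, size=100_000) + 1.0
est = hill_estimator(sample, k=2000)
```

Plotting `hill_estimator(sample, k)` against `k` gives the "sample path" the abstract refers to.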
Controlling light in complex media beyond the acoustic diffraction-limit using the acousto-optic transmission matrix | Studying the internal structure of complex samples with light is an important
task, but a difficult challenge due to light scattering. While the complex
optical distortions induced by multiple scattering can be effectively undone
with the knowledge of the medium's scattering-matrix, this matrix is generally
unknown, and cannot be measured with high resolution without the presence of
fluorescent or absorbing probes at all points of interest. To overcome these
limitations, we introduce here the concept of the acousto-optic transmission
matrix (AOTM). Taking advantage of the near scattering-free propagation of
ultrasound in complex samples, we noninvasively measure an
ultrasonically-encoded, spatially-resolved, optical scattering-matrix. We
demonstrate that a singular value decomposition analysis of the AOTM, acquired
using a single or multiple ultrasonic beams, allows controlled optical focusing
beyond the acoustic diffraction limit in scattering media. Our approach
provides a generalized framework for analyzing acousto-optical experiments, and
for noninvasive, high-resolution study of complex media.
| 0 | 1 | 0 | 0 | 0 | 0 |
Uniqueness and stability of Ricci flow through singularities | We verify a conjecture of Perelman, which states that there exists a
canonical Ricci flow through singularities starting from an arbitrary compact
Riemannian 3-manifold. Our main result is a uniqueness theorem for such flows,
which, together with an earlier existence theorem of Lott and the second named
author, implies Perelman's conjecture. We also show that this flow through
singularities depends continuously on its initial condition and that it may be
obtained as a limit of Ricci flows with surgery.
Our results have applications to the study of diffeomorphism groups of
3-manifolds --- in particular to the Generalized Smale Conjecture --- which will
appear in a subsequent paper.
| 0 | 0 | 1 | 0 | 0 | 0 |
Reciprocal space engineering with hyperuniform gold metasurfaces | Hyperuniform geometries feature correlated disordered topologies which follow
from a tailored k-space design. Here we study gold plasmonic hyperuniform
metasurfaces and we report evidence of the effectiveness of k-space engineering
in both light scattering and light emission experiments. The metasurfaces
possess interesting directional emission properties which are revealed by
momentum spectroscopy as diffraction and fluorescence emission rings at
size-specific k-vectors. The opening of these rotational-symmetric patterns
scales with the hyperuniform correlation length parameter as predicted via the
spectral function method.
| 0 | 1 | 0 | 0 | 0 | 0 |
STFT spectral loss for training a neural speech waveform model | This paper proposes a new loss using short-time Fourier transform (STFT)
spectra aimed at training a high-performance neural speech waveform model
that predicts raw continuous speech waveform samples directly. Not only
amplitude spectra but also phase spectra obtained from generated speech
waveforms are used to calculate the proposed loss. We also mathematically show
that training of the waveform model on the basis of the proposed loss can be
interpreted as maximum likelihood training that assumes that the amplitude and
phase spectra of generated speech waveforms follow Gaussian and von Mises
distributions, respectively. Furthermore, this paper presents a simple network
architecture as the speech waveform model, which is composed of uni-directional
long short-term memories (LSTMs) and an auto-regressive structure. Experimental
results showed that the proposed neural model synthesized high-quality speech
waveforms.
| 1 | 0 | 0 | 0 | 0 | 0 |
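A minimal numpy sketch of an STFT-based amplitude-plus-phase loss, assuming a Hann window and a cosine phase penalty (a simplification for illustration, not the paper's exact formulation):

```python
import numpy as np

def stft(x, frame=256, hop=128):
    """Short-time Fourier transform with a Hann window."""
    window = np.hanning(frame)
    frames = [x[i:i + frame] * window
              for i in range(0, len(x) - frame + 1, hop)]
    return np.fft.rfft(np.stack(frames), axis=1)

def spectral_loss(x, y, eps=1e-8):
    """L2 on log-amplitude spectra plus a cosine (von Mises style)
    penalty on phase differences."""
    X, Y = stft(x), stft(y)
    amp = np.mean((np.log(np.abs(X) + eps) - np.log(np.abs(Y) + eps)) ** 2)
    phase = np.mean(1.0 - np.cos(np.angle(X) - np.angle(Y)))
    return amp + phase

t = np.linspace(0, 1, 4000)
clean = np.sin(2 * np.pi * 220 * t)
noisy = clean + 0.1 * np.random.default_rng(0).normal(size=t.size)
```

The loss is zero for identical waveforms and grows as the spectra diverge, which is the property a waveform-model training objective needs.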
Spectral Properties of Continuum Fibonacci Schrödinger Operators | We study continuum Schrödinger operators on the real line whose potentials
are comprised of two compactly supported square-integrable functions
concatenated according to an element of the Fibonacci substitution subshift
over two letters. We show that the Hausdorff dimension of the spectrum tends to
one in the small-coupling and high-energy regimes, regardless of the shape of
the potential pieces.
| 0 | 0 | 1 | 0 | 0 | 0 |
Species tree inference from genomic sequences using the log-det distance | The log-det distance between two aligned DNA sequences was introduced as a
tool for statistically consistent inference of a gene tree under simple
non-mixture models of sequence evolution. Here we prove that the log-det
distance, coupled with a distance-based tree construction method, also permits
consistent inference of species trees under mixture models appropriate to
aligned genomic-scale sequence data. Data may include sites from many genetic
loci, which evolved on different gene trees due to incomplete lineage sorting
on an ultrametric species tree, with different time-reversible substitution
processes. The simplicity and speed of distance-based inference suggests
log-det based methods should serve as benchmarks for judging more elaborate and
computationally intensive species tree inference methods.
| 0 | 0 | 0 | 0 | 1 | 0 |
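One common (paralinear) form of the log-det distance can be sketched as follows; note that published variants differ in normalizing constants, so this is a textbook version rather than necessarily the exact estimator studied in the paper:

```python
import numpy as np

BASES = "ACGT"

def logdet_distance(seq_x, seq_y):
    """Paralinear/log-det distance between two aligned DNA sequences:
    d = -(1/4) * [ log det F - 0.5 * (sum log f + sum log g) ],
    where F is the 4x4 joint base-frequency matrix and f, g its marginals."""
    idx = {b: i for i, b in enumerate(BASES)}
    F = np.zeros((4, 4))
    for a, b in zip(seq_x, seq_y):
        F[idx[a], idx[b]] += 1
    F /= F.sum()
    f, g = F.sum(axis=1), F.sum(axis=0)
    return -0.25 * (np.log(np.linalg.det(F))
                    - 0.5 * (np.log(f).sum() + np.log(g).sum()))

# Identical sequences give distance ~0; diverged sequences give d > 0.
d_same = logdet_distance("ACGTACGTACGT", "ACGTACGTACGT")
d_diff = logdet_distance("ACGTACGTACGT", "ACGAACGTTCGT")
```

Feeding the pairwise distance matrix to a distance-based method such as neighbor joining then yields the tree, as the abstract describes.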
Structure preserving schemes for nonlinear Fokker-Planck equations and applications | In this paper we focus on the construction of numerical schemes for nonlinear
Fokker-Planck equations that preserve structural properties such as
nonnegativity of the solution, entropy dissipation, and large-time behavior. The
methods developed here are second-order accurate, do not require any
restriction on the mesh size, and are capable of capturing the asymptotic steady
states with arbitrary accuracy. These properties are essential for a correct
description of the underlying physical problem. Applications of the schemes to
several nonlinear Fokker-Planck equations with nonlocal terms describing
emerging collective behavior in socio-economic and life sciences are presented.
| 0 | 1 | 1 | 0 | 0 | 0 |
A Markov decision process approach to optimizing cancer therapy using multiple modalities | There are several different modalities, e.g., surgery, chemotherapy, and
radiotherapy, that are currently used to treat cancer. It is common practice to
use a combination of these modalities to maximize clinical outcomes, which are
often measured by a balance between maximizing tumor damage and minimizing
normal tissue side effects due to treatment. However, multi-modality treatment
policies are mostly empirical in current practice, and are therefore subject to
individual clinicians' experiences and intuition. We present a novel
formulation of optimal multi-modality cancer management using a finite-horizon
Markov decision process approach. Specifically, at each decision epoch, the
clinician chooses an optimal treatment modality based on the patient's observed
state, which we define as a combination of tumor progression and normal tissue
side effect. Treatment modalities are categorized as (1) Type 1, which has a
high risk and high reward, but is restricted in the frequency of administration
during a treatment course, (2) Type 2, which has a lower risk and lower reward
than Type 1, but may be repeated without restriction, and (3) Type 3, no
treatment (surveillance), which has the possibility of reducing normal tissue
side effect at the risk of worsening tumor progression. Numerical simulations
using various intuitive, concave reward functions reveal structural insights
into optimal policies and demonstrate the potential applications of using a
rigorous approach to optimizing multi-modality cancer management.
| 0 | 1 | 1 | 0 | 0 | 0 |
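The backward-induction structure of such a finite-horizon MDP can be sketched with toy numbers; all states, dynamics, and rewards below are illustrative assumptions, not clinical values:

```python
import itertools

# State = (tumor level, side-effect level), both in {0,1,2}; lower is better.
N_EPOCHS = 4
STATES = list(itertools.product(range(3), range(3)))

def step(state, action):
    """Deterministic toy dynamics for the three modality types."""
    t, s = state
    if action == "type1":      # high risk / high reward
        return (max(t - 2, 0), min(s + 2, 2))
    if action == "type2":      # lower risk / lower reward
        return (max(t - 1, 0), min(s + 1, 2))
    return (min(t + 1, 2), max(s - 1, 0))   # surveillance

def reward(state):
    t, s = state
    return -(t ** 2 + s ** 2)  # concave: penalize tumor and toxicity

# Backward induction: at each epoch pick the modality maximizing
# immediate reward plus the value of the resulting state.
V = {s: reward(s) for s in STATES}          # terminal value
policy = {}
for _ in reversed(range(N_EPOCHS)):
    V_new, pol = {}, {}
    for s in STATES:
        q = {a: reward(s) + V[step(s, a)]
             for a in ("type1", "type2", "surveil")}
        best = max(q, key=q.get)
        V_new[s], pol[s] = q[best], best
    V, policy = V_new, pol
```

`policy` then maps each observed state at the first epoch to a modality, the kind of state-dependent rule the abstract argues should replace purely empirical practice.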
Complex Contagions with Timers | A great deal of effort has gone into trying to model social influence ---
including the spread of behavior, norms, and ideas --- on networks. Most models
of social influence tend to assume that individuals react to changes in the
states of their neighbors without any time delay, but this is often not true in
social contexts, where (for various reasons) different agents can have
different response times. To examine such situations, we introduce the idea of
a timer into threshold models of social influence. The presence of timers on
nodes delays the adoption --- i.e., change of state --- of each agent, which in
turn delays the adoptions of its neighbors. With homogeneously-distributed
timers, in which all nodes exhibit the same amount of delay, adoption delays are
also homogeneous, so the adoption order of nodes remains the same. However,
heterogeneously-distributed timers can change the adoption order of nodes and
hence the "adoption paths" through which state changes spread in a network.
Using a threshold model of social contagions, we illustrate that heterogeneous
timers can either accelerate or decelerate the spread of adoptions compared to
an analogous situation with homogeneous timers, and we investigate the
relationship of such acceleration or deceleration with respect to timer
distribution and network structure. We derive an analytical approximation for
the temporal evolution of the fraction of adopters by modifying a pair
approximation of the Watts threshold model, and we find good agreement with
numerical computations. We also examine our new timer model on networks
constructed from empirical data.
| 1 | 1 | 0 | 0 | 0 | 0 |
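A minimal sketch of a Watts-style threshold model with node timers, assuming synchronous updates on a toy cycle graph (an illustration of the mechanism, not the paper's pair approximation):

```python
def simulate(adj, thresholds, timers, seeds, steps=50):
    """Threshold contagion with timers: a node is scheduled to adopt once
    the adopted fraction of its neighbors reaches its threshold; its timer
    then delays the actual adoption by that many steps."""
    adopted = set(seeds)
    countdown = {}                       # node -> remaining delay
    history = [len(adopted)]
    for _ in range(steps):
        for v in adj:                    # schedule newly eligible nodes
            if v in adopted or v in countdown:
                continue
            frac = sum(u in adopted for u in adj[v]) / len(adj[v])
            if frac >= thresholds[v]:
                countdown[v] = timers[v]
        fired = [v for v, c in countdown.items() if c == 0]
        for v in fired:                  # timers that expired: adopt now
            adopted.add(v)
            del countdown[v]
        for v in countdown:
            countdown[v] -= 1
        history.append(len(adopted))
    return history

# Cycle of 6 nodes; node 0 seeds the contagion. Thresholds/timers are toy values.
adj = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
hist_fast = simulate(adj, {i: 0.4 for i in adj}, {i: 0 for i in adj}, seeds=[0])
hist_slow = simulate(adj, {i: 0.4 for i in adj}, {i: 2 for i in adj}, seeds=[0])
```

With homogeneous timers the adoption order is unchanged and only the timing shifts, matching the abstract's observation; heterogeneous `timers` dictionaries can reorder adoptions.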
Random Networks, Graphical Models, and Exchangeability | We study conditional independence relationships for random networks and their
interplay with exchangeability. We show that, for finitely exchangeable network
models, the empirical subgraph densities are maximum likelihood estimates of
their theoretical counterparts. We then characterize all possible Markov
structures for finitely exchangeable random graphs, thereby identifying a new
class of Markov network models corresponding to bidirected Kneser graphs. In
particular, we demonstrate that the fundamental property of dissociatedness
corresponds to a Markov property for exchangeable networks described by
bidirected line graphs. Finally we study those exchangeable models that are
also summarized in the sense that the probability of a network only depends
on the degree distribution, and identify a class of models that is dual to the
Markov graphs of Frank and Strauss (1986). Particular emphasis is placed on
studying consistency properties of network models under the process of forming
subnetworks and we show that the only consistent systems of Markov properties
correspond to the empty graph, the bidirected line graph of the complete graph,
and the complete graph.
| 0 | 0 | 1 | 1 | 0 | 0 |
Long-Term Inertial Navigation Aided by Dynamics of Flow Field Features | A current-aided inertial navigation framework is proposed for small
autonomous underwater vehicles in long-duration operations (> 1 hour), where
neither frequent surfacing nor consistent bottom-tracking is available. We
instantiate this concept through mid-depth, underwater navigation. This
strategy mitigates dead-reckoning uncertainty of a traditional inertial
navigation system by comparing the estimate of local, ambient flow velocity
with preloaded ocean current maps. The proposed navigation system is
implemented through a marginalized particle filter where the vehicle's states
are sequentially tracked along with sensor bias and local turbulence that is
not resolved by general flow prediction. The performance of the proposed
approach is first analyzed through Monte Carlo simulations in two artificial
background flow fields, resembling real-world ocean circulation patterns,
superposed with smaller-scale turbulent components with a Kolmogorov energy
spectrum. The current-aided navigation scheme significantly improves the
dead-reckoning performance of the vehicle even when unresolved, small-scale
flow perturbations are present. For a 6-hour navigation with an
automotive-grade inertial navigation system, the current-aided navigation
scheme results in positioning estimates with under 3% uncertainty per distance
traveled (UDT) in a turbulent, double-gyre flow field, and under 7.3% UDT in a
turbulent, meandering jet flow field. Further evaluation with field test data
and actual ocean simulation analysis demonstrates consistent performance for a
6-hour mission, a positioning result with under 25% UDT for a 24-hour navigation
when direct heading measurements are provided, and a terminal positioning
estimate with 16% UDT at the cost of increased uncertainty at an early stage of
the navigation.
| 1 | 0 | 0 | 0 | 0 | 0 |
Catching Zika Fever: Application of Crowdsourcing and Machine Learning for Tracking Health Misinformation on Twitter | In February 2016, the World Health Organization declared the Zika outbreak a
Public Health Emergency of International Concern. With developing evidence that
it can cause birth defects, and the Summer Olympics coming up in the worst
affected country, Brazil, the virus caught fire on social media. In this work,
we use Zika as a case study in building a tool for tracking the misinformation
around health concerns on Twitter. We collect more than 13 million tweets --
spanning the initial reports in February 2016 and the Summer Olympics --
regarding the Zika outbreak and track rumors outlined by the World Health
Organization and the Snopes fact-checking website. The tool pipeline, which
incorporates health professionals, crowdsourcing, and machine learning, allows
us to capture health-related rumors around the world, as well as clarification
campaigns by reputable health organizations. In the case of Zika, we discover
an extremely bursty behavior of rumor-related topics, and show that, once the
questionable topic is detected, it is possible to identify rumor-bearing tweets
using automated techniques. Thus, we illustrate insights the proposed tools
provide into potentially harmful information on social media, allowing public
health researchers and practitioners to respond with a targeted and timely
action.
| 1 | 0 | 0 | 0 | 0 | 0 |
Monte Carlo determination of the low-energy constants for a two-dimensional spin-1 Heisenberg model with spatial anisotropy | The low-energy constants, namely the spin stiffness $\rho_s$, the staggered
magnetization density ${\cal M}_s$ per area, and the spinwave velocity $c$ of
the two-dimensional (2D) spin-1 Heisenberg model on the square and rectangular
lattices are determined using the first-principles Monte Carlo method. In
particular, the studied models have antiferromagnetic couplings $J_1$ and $J_2$
in the spatial 1- and 2-directions, respectively. For each considered
$J_2/J_1$, the aspect ratio of the corresponding linear box sizes $L_2/L_1$
used in the simulations is adjusted so that the squares of the two spatial
winding numbers take the same values. In addition, the relevant finite-volume
and -temperature predictions from magnon chiral perturbation theory are
employed in extracting the numerical values of these low-energy constants. Our
results of $\rho_{s1}$ are in quantitative agreement with those obtained by the
series expansion method over a broad range of $J_2/J_1$. This in turn provides
convincing numerical evidence for the quantitative correctness of our approach.
The ${\cal M}_s$ and $c$ presented here for the spatially anisotropic models
are new and can be used as benchmarks for future related studies.
| 0 | 1 | 0 | 0 | 0 | 0 |
Explaining Parochialism: A Causal Account for Political Polarization in Changing Economic Environments | Political and social polarization are a significant cause of conflict and
poor governance in many societies, thus understanding their causes is of
considerable importance. Here we demonstrate that shifts in socialization
strategy similar to political polarization and/or identity politics could be a
constructive response to periods of apparent economic decline. We start from
the observation that economies, like ecologies, are seldom at equilibrium.
Rather, they often suffer both negative and positive shocks. We show that even
where, in an expanding economy, interacting with diverse out-groups affords
benefits through innovation and exploration, a strategy of seeking homogeneous
groups can be important to maintaining individual solvency if that economy
contracts. This is true even where the expected value of out-group interaction
exceeds that of in-group interaction. Our account unifies what
were previously seen as conflicting explanations: identity threat versus
economic anxiety. Our model indicates that in periods of extreme deprivation,
cooperation with diversity again becomes the best (in fact, only viable)
strategy. However, our model also shows that while polarization may increase
gradually in response to shifts in the economy, gradual decrease of
polarization may not be an available strategy; thus returning to previous
levels of cooperation may require structural change.
| 0 | 0 | 0 | 0 | 1 | 1 |
A Secular Resonant Origin for the Loneliness of Hot Jupiters | Despite decades of inquiry, the origin of giant planets residing within a few
tenths of an astronomical unit from their host stars remains unclear.
Traditionally, these objects are thought to have formed further out before
subsequently migrating inwards. However, the necessity of migration has been
recently called into question with the emergence of in-situ formation models of
close-in giant planets. Observational characterization of the transiting
sub-sample of close-in giants has revealed that "warm" Jupiters, possessing
orbital periods longer than roughly 10 days more often possess close-in,
co-transiting planetary companions than shorter period "hot" Jupiters, that are
usually lonely. This finding has previously been interpreted as evidence that
smooth, early migration or in situ formation gave rise to warm Jupiter-hosting
systems, whereas more violent, post-disk migration pathways sculpted hot
Jupiter-hosting systems. In this work, we demonstrate that both classes of
planet may arise via early migration or in-situ conglomeration, but that the
enhanced loneliness of hot Jupiters arises due to a secular resonant
interaction with the stellar quadrupole moment. Such an interaction tilts the
orbits of exterior, lower mass planets, removing them from transit surveys
where the hot Jupiter is detected. Warm Jupiter-hosting systems, in contrast,
retain their coplanarity due to the weaker influence of the host star's
quadrupolar potential relative to planet-disk interactions. In this way, hot
Jupiters and warm Jupiters are placed within a unified theoretical framework
that may be readily validated or falsified using data from upcoming missions
such as TESS.
| 0 | 1 | 0 | 0 | 0 | 0 |
An Incremental Slicing Method for Functional Programs | Several applications of slicing require a program to be sliced with respect
to more than one slicing criterion. Program specialization, parallelization and
cohesion measurement are examples of such applications. These applications can
benefit from an incremental static slicing method in which a significant extent
of the computations for slicing with respect to one criterion could be reused
for another. In this paper, we consider the problem of incremental slicing of
functional programs. We first present a non-incremental version of the slicing
algorithm which does a polyvariant analysis of functions. Since polyvariant
analyses tend to be costly, we compute a compact context-independent summary of
each function and then use this summary at the call sites of the function. The
construction of the function summary is non-trivial and helps in the
development of the incremental version. The incremental method, on the other
hand, consists of a one-time pre-computation step that uses the non-incremental
version to slice the program with respect to a fixed default slicing criterion
and processes the results further to a canonical form. Presented with an actual
slicing criterion, the incremental step involves a low-cost computation that
uses the results of the pre-computation to obtain the slice. We have
implemented a prototype of the slicer for a pure subset of Scheme, with pairs
and lists as the only algebraic data types. Our experiments show that the
incremental step of the slicer runs orders of magnitude faster than the
non-incremental version. We have also proved the correctness of our incremental
algorithm with respect to the non-incremental version.
| 1 | 0 | 0 | 0 | 0 | 0 |
Accelerating Science with Generative Adversarial Networks: An Application to 3D Particle Showers in Multi-Layer Calorimeters | Physicists at the Large Hadron Collider (LHC) rely on detailed simulations of
particle collisions to build expectations of what experimental data may look
like under different theory modeling assumptions. Petabytes of simulated data
are needed to develop analysis techniques, though they are expensive to
generate using existing algorithms and computing resources. The modeling of
detectors and the precise description of particle cascades as they interact
with the material in the calorimeter are the most computationally demanding
steps in the simulation pipeline. We therefore introduce a deep neural
network-based generative model to enable high-fidelity, fast electromagnetic
calorimeter simulation. There are still challenges for achieving precision
across the entire phase space, but our current solution can reproduce a variety
of particle shower properties while achieving speed-up factors of up to
100,000$\times$. This opens the door to a new era of fast simulation that could
save significant computing time and disk space, while extending the reach of
physics searches and precision measurements at the LHC and beyond.
| 0 | 0 | 0 | 1 | 0 | 0 |
A study of ancient Khmer ephemerides | We study ancient Khmer ephemerides described in 1910 by the French engineer
Faraut, in order to determine whether they rely on observations carried out in
Cambodia. These ephemerides were found to be of Indian origin and have been
adapted for another longitude, most likely in Burma. A method for estimating
the date and place where the ephemerides were developed or adapted is described
and applied.
| 0 | 1 | 1 | 0 | 0 | 0 |
Auto Deep Compression by Reinforcement Learning Based Actor-Critic Structure | Model compression is an effective technique for deploying neural network models
on devices with limited computation and power. However, conventional
compression techniques rely on hand-crafted features [2,3,12] and require
experts to explore a large design space trading off size, speed, and accuracy,
which is usually suboptimal and time-consuming. This paper analyzes automatic
deep compression (ADC), which leverages reinforcement learning for
sample-efficient design space exploration and improves the compression quality
of the model. The compressed model is obtained without any human effort and in
a completely automated way. With a 4-fold reduction in FLOPs, the accuracy is
2.8% higher than that of the hand-crafted compression model for VGG-16 on
ImageNet.
| 0 | 0 | 0 | 1 | 0 | 0 |
Infinite Sparse Structured Factor Analysis | Matrix factorisation methods decompose multivariate observations as linear
combinations of latent feature vectors. The Indian Buffet Process (IBP)
provides a way to model the number of latent features required for a good
approximation in terms of regularised reconstruction error. Previous work has
focussed on latent feature vectors with independent entries. We extend the
model to include nondiagonal latent covariance structures representing
characteristics such as smoothness. Using simulations we
demonstrate that under appropriate conditions a smoothness prior helps to
recover the true latent features, while denoising more accurately. We
demonstrate our method on a real neuroimaging dataset, where computational
tractability is a sufficient challenge that the efficient strategy presented
here is essential.
| 0 | 0 | 0 | 1 | 0 | 0 |
RuntimeSearch: Ctrl+F for a Running Program | Developers often try to find occurrences of a certain term in a software
system. Traditionally, a text search is limited to static source code files. In
this paper, we introduce a simple approach, RuntimeSearch, where the given term
is searched in the values of all string expressions in a running program. When
a match is found, the program is paused and its runtime properties can be
explored with a traditional debugger. The feasibility and usefulness of
RuntimeSearch is demonstrated on a medium-sized Java project.
| 1 | 0 | 0 | 0 | 0 | 0 |
Accurate and Efficient Evaluation of Characteristic Modes | A new method to improve the accuracy and efficiency of characteristic mode
(CM) decomposition for perfectly conducting bodies is presented. The method
uses the expansion of the Green dyadic in spherical vector waves. This
expansion is utilized in the method of moments (MoM) solution of the electric
field integral equation to factorize the real part of the impedance matrix. The
factorization is then employed in the computation of CMs, which improves the
accuracy as well as the computational speed. An additional benefit is a rapid
computation of far fields. The method can easily be integrated into existing
MoM solvers. Several structures are investigated illustrating the improved
accuracy and performance of the new method.
| 0 | 1 | 0 | 0 | 0 | 0 |
MUFASA: The assembly of the red sequence | We examine the growth and evolution of quenched galaxies in the Mufasa
cosmological hydrodynamic simulations that include an evolving halo mass-based
quenching prescription, with galaxy colours computed accounting for
line-of-sight extinction to individual star particles. Mufasa reproduces the
observed present-day red sequence reasonably well, including its slope,
amplitude, and scatter. In Mufasa, the red sequence slope is driven entirely by
the steep stellar mass-stellar metallicity relation, which independently agrees
with observations. High-mass star-forming galaxies blend smoothly onto the red
sequence, indicating the lack of a well-defined green valley at $M_*>10^{10.5}\,M_\odot$.
The most massive galaxies quench the earliest and then grow very little in mass
via dry merging; they attain their high masses at earlier epochs when cold
inflows more effectively penetrate hot halos. To higher redshifts, the red
sequence becomes increasingly contaminated with massive dusty star-forming
galaxies; UVJ selection subtly but effectively separates these populations. We
then examine the evolution of the mass functions of central and satellite
galaxies split into passive and star-forming via UVJ. Massive quenched systems
show good agreement with observations out to z~2, despite not including a rapid
early quenching mode associated with mergers. However, low-mass quenched
galaxies are far too numerous at z<1 in Mufasa, indicating that Mufasa strongly
over-quenches satellites. A challenge for hydrodynamic simulations is to devise
a quenching model that produces enough early massive quenched galaxies and
keeps them quenched to z=0, while not being so strong as to over-quench
satellites; Mufasa's current scheme fails at the latter.
| 0 | 1 | 0 | 0 | 0 | 0 |
Symmetric calorons and the rotation map | We study $SU(2)$ calorons, also known as periodic instantons, and consider
invariance under isometries of $S^1\times\mathbb{R}^3$ coupled with a
non-spatial isometry called the rotation map. In particular, we investigate the
fixed points under various cyclic symmetry groups. Our approach utilises a
construction akin to the ADHM construction of instantons -- what we call the
monad matrix data for calorons -- derived from the work of Charbonneau and
Hurtubise. To conclude, we present an example of how investigating these
symmetry groups can help to construct new calorons by deriving Nahm data in the
case of charge $2$.
| 0 | 0 | 1 | 0 | 0 | 0 |
Deep Learning for Accelerated Reliability Analysis of Infrastructure Networks | Natural disasters can have catastrophic impacts on the functionality of
infrastructure systems and cause severe physical and socio-economic losses.
Given budget constraints, it is crucial to optimize decisions regarding
mitigation, preparedness, response, and recovery practices for these systems.
This requires accurate and efficient means to evaluate the infrastructure
system reliability. While numerous research efforts have addressed and
quantified the impact of natural disasters on infrastructure systems, typically
using the Monte Carlo approach, they still suffer from high computational cost
and, thus, are of limited applicability to large systems. This paper presents a
deep learning framework for accelerating infrastructure system reliability
analysis. In particular, two distinct deep neural network surrogates are
constructed and studied: (1) A classifier surrogate which speeds up the
connectivity determination of networks, and (2) An end-to-end surrogate that
replaces a number of components such as roadway status realization,
connectivity determination, and connectivity averaging. The proposed approach
is applied to a simulation-based study of the two-terminal connectivity of a
California transportation network subject to extreme probabilistic earthquake
events. Numerical results highlight the effectiveness of the proposed approach
in accelerating the transportation system two-terminal reliability analysis
with extremely high prediction accuracy.
| 1 | 0 | 0 | 1 | 0 | 0 |
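The Monte Carlo baseline that the surrogates accelerate (sample component failures, test two-terminal connectivity, average) can be sketched on a toy network; the graph and failure probability below are illustrative, not the California data:

```python
import random
from collections import deque

def connected(n, edges, s, t):
    """BFS check of s-t connectivity on the surviving edge set."""
    adj = {i: [] for i in range(n)}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen, queue = {s}, deque([s])
    while queue:
        u = queue.popleft()
        if u == t:
            return True
        for w in adj[u]:
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return False

def reliability(n, edges, p_fail, s, t, trials=20_000, seed=0):
    """Two-terminal reliability: average connectivity over sampled failures."""
    rng = random.Random(seed)
    hits = sum(
        connected(n, [e for e in edges if rng.random() > p_fail], s, t)
        for _ in range(trials))
    return hits / trials

# 4-node ring: two disjoint s-t paths, so reliability exceeds a single path's.
ring = [(0, 1), (1, 2), (2, 3), (3, 0)]
r = reliability(4, ring, p_fail=0.1, s=0, t=2)
```

Every call to `connected` is one realization of the "roadway status realization + connectivity determination" pipeline; a trained surrogate replaces exactly this inner loop.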
IMLS-SLAM: scan-to-model matching based on 3D data | The Simultaneous Localization And Mapping (SLAM) problem has been well
studied in the robotics community, especially using mono, stereo cameras or
depth sensors. 3D depth sensors, such as Velodyne LiDAR, have proved in the
last 10 years to be very useful to perceive the environment in autonomous
driving, but few methods exist that directly use these 3D data for odometry. We
present a new low-drift SLAM algorithm based only on 3D LiDAR data. Our method
relies on a scan-to-model matching framework. We first apply a specific
sampling strategy to the LiDAR scans. We then define our model as the previous
localized LiDAR sweeps and use the Implicit Moving Least Squares (IMLS) surface
representation. We show experiments with the Velodyne HDL32 with only 0.40%
drift over a 4 km acquisition without any loop closure (i.e., 16 m drift after
4 km). We tested our solution on the KITTI benchmark with a Velodyne HDL64 and
ranked among the best methods (against mono, stereo and LiDAR methods) with a
global drift of only 0.69%.
| 1 | 0 | 0 | 0 | 0 | 0 |
Cooperative Online Learning: Keeping your Neighbors Updated | We study an asynchronous online learning setting with a network of agents. At
each time step, some of the agents are activated, requested to make a
prediction, and pay the corresponding loss. The loss function is then revealed
to these agents and also to their neighbors in the network. When activations
are stochastic, we show that the regret achieved by $N$ agents running the
standard online Mirror Descent is $O(\sqrt{\alpha T})$, where $T$ is the
horizon and $\alpha \le N$ is the independence number of the network. This is
in contrast to the regret $\Omega(\sqrt{N T})$ which $N$ agents incur in the
same setting when feedback is not shared. We also show a matching lower bound
of order $\sqrt{\alpha T}$ that holds for any given network. When the pattern
of agent activations is arbitrary, the problem changes significantly: we prove
an $\Omega(T)$ lower bound on the regret that holds for any online algorithm
oblivious to the feedback source.
| 1 | 0 | 0 | 1 | 0 | 0 |
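The single-agent flavor of the Mirror Descent guarantee can be illustrated with Euclidean (online gradient descent) updates on quadratic losses; this is a toy stand-in for the paper's networked, shared-feedback setting, with all numbers assumed for illustration:

```python
import numpy as np

# Online gradient descent = Mirror Descent with the Euclidean regularizer.
# Losses l_t(x) = (x - z_t)^2 with i.i.d. minimizers z_t; the regret against
# the best fixed action in hindsight grows like sqrt(T), not T.
rng = np.random.default_rng(1)
T = 10_000
z = rng.normal(0.5, 0.1, size=T)

x, alg_loss = 0.0, 0.0
for t in range(1, T + 1):
    alg_loss += (x - z[t - 1]) ** 2
    grad = 2.0 * (x - z[t - 1])
    x -= 0.5 / np.sqrt(t) * grad       # step size eta_t ~ 1/sqrt(t)

u_star = z.mean()                       # best fixed action in hindsight
regret = alg_loss - np.sum((u_star - z) ** 2)
```

In the paper's setting, sharing each revealed loss with network neighbors shrinks the effective number of independent learners from $N$ to the independence number $\alpha$, improving the constant in front of $\sqrt{T}$.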
FeaStNet: Feature-Steered Graph Convolutions for 3D Shape Analysis | Convolutional neural networks (CNNs) have massively impacted visual
recognition in 2D images, and are now ubiquitous in state-of-the-art
approaches. CNNs do not easily extend, however, to data that are not
represented by regular grids, such as 3D shape meshes or other graph-structured
data, to which traditional local convolution operators do not directly apply.
To address this problem, we propose a novel graph-convolution operator to
establish correspondences between filter weights and graph neighborhoods with
arbitrary connectivity. The key novelty of our approach is that these
correspondences are dynamically computed from features learned by the network,
rather than relying on predefined static coordinates over the graph as in
previous work. We obtain excellent experimental results that significantly
improve over previous state-of-the-art shape correspondence results. This shows
that our approach can learn effective shape representations from raw input
coordinates, without relying on shape descriptors.
| 1 | 0 | 0 | 0 | 0 | 0 |
On a minimal counterexample to Brauer's $k(B)$-conjecture | We study Brauer's long-standing $k(B)$-conjecture on the number of characters
in $p$-blocks for finite quasi-simple groups and show that their blocks do not
occur as a minimal counterexample for $p\ge5$ nor in the case of abelian
defect. For $p=3$ we obtain that the principal 3-blocks do not provide minimal
counterexamples. We also determine the precise number of irreducible characters
in unipotent blocks of classical groups for odd primes.
| 0 | 0 | 1 | 0 | 0 | 0 |
The best fit for the observed galaxy Counts-in-Cell distribution function | The Sloan Digital Sky Survey (SDSS) is the first dense redshift survey
encompassing a volume large enough to find the best analytic probability
density function that fits the galaxy Counts-in-Cells distribution $f_V(N)$,
the frequency distribution of galaxy counts in a volume $V$. Several analytic
functions have previously been proposed that account for some features of the
observed frequency counts, but they fail to provide an overall good fit to this
important statistical descriptor of the galaxy large-scale distribution. Our
goal is to find the probability density function that best fits the observed
Counts-in-Cells distribution $f_V(N)$. We have
made a systematic study of this function applied to several samples drawn from
the SDSS. We show the effective ways to deal with incompleteness of the sample
(masked data) in the calculation of $f_V(N)$. We use LasDamas simulations to
estimate the errors in the calculation. We test four different distribution
functions to find the best fit: the Gravitational Quasi-Equilibrium
distribution, the Negative Binomial Distribution, the Log Normal distribution
and the Log Normal Distribution including a bias parameter. In the two latter
cases, we apply a shot-noise correction to the distributions assuming the local
Poisson model. We show that the best fit for the Counts-in-Cells distribution
function is provided by the Negative Binomial distribution. In addition, at
large scales the Log Normal distribution modified with the inclusion of the
bias term also performs a satisfactory fit of the empirical values of $f_V(N)$.
Our results demonstrate that the inclusion of a bias term in the Log Normal
distribution is necessary to fit the observed galaxy Counts-in-Cells
distribution function.
| 0 | 1 | 0 | 0 | 0 | 0 |
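The winning model in the abstract above, the Negative Binomial distribution, is straightforward to evaluate and to fit by moment matching. A hypothetical sketch (the mean/variance parameterization is a standard convention, not taken from the paper, which fits the full $f_V(N)$ with shot-noise corrections):

```python
import math

def negbin_pmf(n, mu, var):
    """Negative binomial pmf parameterized by mean mu and variance var (> mu)."""
    r = mu * mu / (var - mu)   # dispersion parameter from moment matching
    p = r / (r + mu)
    return (math.exp(math.lgamma(n + r) - math.lgamma(r) - math.lgamma(n + 1))
            * p ** r * (1.0 - p) ** n)

def fit_negbin_moments(counts):
    """Moment-matching fit: sample mean and variance of the cell counts."""
    m = sum(counts) / len(counts)
    v = sum((c - m) ** 2 for c in counts) / len(counts)
    return m, v
```

The over-dispersion condition var > mu is exactly what galaxy clustering produces relative to a Poisson point process.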
Separator Reconnection at Earth's Dayside Magnetopause: MMS Observations Compared to Global Simulations | We compare a global high resolution resistive magnetohydrodynamics (MHD)
simulation of Earth's magnetosphere with observations from the Magnetospheric
Multiscale (MMS) constellation for a southward IMF magnetopause crossing on
October 16, 2015, which was previously identified as an electron diffusion region
(EDR) event. The simulation predicts a complex time-dependent magnetic topology
consisting of multiple separators and flux ropes. Despite the topological
complexity, the predicted distance between MMS and the primary separator is
less than 0.5 Earth radii. These results suggest that global magnetic topology,
rather than local magnetic geometry alone, determines the location of the
electron diffusion region at the dayside magnetopause.
| 0 | 1 | 0 | 0 | 0 | 0 |
Semi-Supervised Overlapping Community Finding based on Label Propagation with Pairwise Constraints | Algorithms for detecting communities in complex networks are generally
unsupervised, relying solely on the structure of the network. However, these
methods can often fail to uncover meaningful groupings that reflect the
underlying communities in the data, particularly when those structures are
highly overlapping. One way to improve the usefulness of these algorithms is by
incorporating additional background information, which can be used as a source
of constraints to direct the community detection process. In this work, we
explore the potential of semi-supervised strategies to improve algorithms for
finding overlapping communities in networks. Specifically, we propose a new
method, based on label propagation, for finding communities using a limited
number of pairwise constraints. Evaluations on synthetic and real-world
datasets demonstrate the potential of this approach for uncovering meaningful
community structures in cases where each node can potentially belong to more
than one community.
| 1 | 0 | 0 | 0 | 0 | 0 |
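A toy version of label propagation with pairwise constraints can illustrate the idea in the abstract above. This sketch uses hard labels, synchronous sweeps, and only must-link constraints, unlike the paper's overlapping-community method; all names are illustrative:

```python
from collections import Counter

def propagate_labels(adj, seeds, must_link=(), iters=20):
    """Semi-supervised label propagation sketch with must-link constraints.

    adj: {node: [neighbors]}; seeds: {node: label} (kept fixed);
    must_link: pairs forced to share a label after every sweep.
    """
    labels = dict(seeds)
    for _ in range(iters):
        for node in sorted(adj):
            if node in seeds:
                continue  # seed labels stay fixed
            votes = Counter(labels[n] for n in adj[node] if n in labels)
            if votes:
                labels[node] = votes.most_common(1)[0][0]  # majority vote
        for a, b in must_link:  # enforce pairwise constraints
            if a in labels:
                labels[b] = labels[a]
    return labels
```

Cannot-link constraints and overlapping (multi-label) membership, which the paper supports, would require per-node label sets rather than a single label.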
Designing Coalition-Proof Reverse Auctions over Continuous Goods | This paper investigates reverse auctions that involve continuous values of
different types of goods, general nonconvex constraints, and second stage
costs. We seek to design the payment rules and conditions under which
coalitions of participants cannot influence the auction outcome in order to
obtain higher collective utility. Under the incentive-compatible
Vickrey-Clarke-Groves mechanism, we show that coalition-proof outcomes are
achieved if the submitted bids are convex and the constraint sets are of a
polymatroid-type. These conditions, however, do not capture the complexity of
the general class of reverse auctions under consideration. By relaxing the
property of incentive-compatibility, we investigate further payment rules that
are coalition-proof without any extra conditions on the submitted bids and the
constraint sets. Since calculating the payments directly for these mechanisms
is computationally difficult for auctions involving many participants, we
present two computationally efficient methods. Our results are verified with
several case studies based on electricity market data.
| 1 | 0 | 0 | 0 | 0 | 0 |
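For the simplest special case of the setting above, a single homogeneous good with unit supply per seller and no second-stage costs, the VCG payments in a reverse auction reduce to paying each winner the first losing bid. A hypothetical sketch (this ignores the paper's continuous quantities and nonconvex constraints):

```python
def vcg_reverse_auction(bids, demand):
    """VCG payments for a unit-supply, homogeneous-good reverse auction.

    bids: {seller: cost}; demand: number of units (one per seller) to buy.
    Each winner is paid the first losing bid, i.e., its VCG externality:
    the others' optimal cost without it minus their cost with it present.
    """
    ranked = sorted(bids, key=bids.get)   # cheapest sellers first
    winners, losers = ranked[:demand], ranked[demand:]
    price = bids[losers[0]] if losers else None  # None: no competing bid
    return {w: price for w in winners}
```

With general constraint sets the payments no longer collapse to a uniform price, which is why the paper needs polymatroid-type conditions for coalition-proofness under VCG.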
Probabilistic Line Searches for Stochastic Optimization | In deterministic optimization, line searches are a standard tool ensuring
stability and efficiency. Where only stochastic gradients are available, no
direct equivalent has so far been formulated, because uncertain gradients do
not allow for a strict sequence of decisions collapsing the search space. We
construct a probabilistic line search by combining the structure of existing
deterministic methods with notions from Bayesian optimization. Our method
retains a Gaussian process surrogate of the univariate optimization objective,
and uses a probabilistic belief over the Wolfe conditions to monitor the
descent. The algorithm has very low computational cost, and no user-controlled
parameters. Experiments show that it effectively removes the need to define a
learning rate for stochastic gradient descent.
| 1 | 0 | 0 | 1 | 0 | 0 |
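The Wolfe conditions that the probabilistic line search monitors are, in the deterministic limit, two simple inequalities. A sketch for a scalar objective (the constants `c1`, `c2` follow the usual convention; this is the deterministic check, not the paper's probabilistic belief over the conditions):

```python
def wolfe_conditions(f, grad, x, d, alpha, c1=1e-4, c2=0.9):
    """Check the (weak) Wolfe conditions for step size alpha along direction d.

    Returns (sufficient_decrease, curvature) for scalar x and d.
    """
    f0, g0 = f(x), grad(x) * d                        # value and slope at x
    f1, g1 = f(x + alpha * d), grad(x + alpha * d) * d
    sufficient_decrease = f1 <= f0 + c1 * alpha * g0  # Armijo condition
    curvature = g1 >= c2 * g0                         # curvature condition
    return sufficient_decrease, curvature
```

In the stochastic setting, f and grad are only observed with noise, so the paper replaces these boolean tests with a posterior probability that each inequality holds.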
Quantum Phase transition under pressure in a heavily hydrogen-doped iron-based superconductor LaFeAsO | Hydrogen (H)-doped LaFeAsO is a prototypical iron-based superconductor.
However, its phase diagram extends beyond the standard framework, where a
superconducting (SC) phase follows an antiferromagnetic (AF) phase upon carrier
doping; instead, the SC phase is sandwiched between two AF phases appearing in
lightly and heavily H-doped regimes. We performed nuclear magnetic resonance
(NMR) measurements under pressure, focusing on the second AF phase in the
heavily H-doped regime. The second AF phase is strongly suppressed when a
pressure of 3.0 GPa is applied, and it apparently shifts to a more highly
H-doped regime, so that a "bare" quantum critical point (QCP) emerges. A
quantum critical regime emerges in the paramagnetic state near the QCP;
however, the influence of the AF critical fluctuations on the SC phase is
limited to a narrow doping regime near the QCP. The optimal SC condition
($T_c \sim$ 48 K) is unaffected by AF fluctuations.
| 0 | 1 | 0 | 0 | 0 | 0 |
Hyperboloidal similarity coordinates and a globally stable blowup profile for supercritical wave maps | We consider co-rotational wave maps from (1+3)-dimensional Minkowski space
into the three-sphere. This model exhibits an explicit blowup solution and we
prove the asymptotic nonlinear stability of this solution in the whole space
under small perturbations of the initial data. The key ingredient is the
introduction of a novel coordinate system that allows one to track the
evolution past the blowup time and almost up to the Cauchy horizon of the
singularity. As a consequence, we also obtain a result on continuation beyond
blowup.
| 0 | 0 | 1 | 0 | 0 | 0 |
Supersymmetry in Closed Chains of Coupled Majorana Modes | We consider a closed chain of an even number of Majorana zero modes with
nearest-neighbour couplings that generically differ from site to site, so the
chain has no crystal symmetry. Nevertheless, we demonstrate the possibility of
an emergent supersymmetry (SUSY), accompanied by gapless fermionic excitations.
In particular, the SUSY condition can be satisfied by tuning only one coupling,
regardless of how many other couplings there are. Such a system can be realized
by four Majorana modes on two parallel Majorana nanowires whose ends are
connected by Josephson junctions and whose bodies are connected by an external
superconducting ring. By tuning the Josephson couplings with a magnetic flux
$\Phi$ through the ring, we obtain the gapless excitations at $\Phi_{SUSY}=\pm
f\Phi_0$ with $\Phi_0= hc/2e$, which is signaled by a zero-bias peak in the
tunneling conductance. We find that $f$ is generally a fractional number and
oscillates with increasing Zeeman field parallel to the nanowires, providing a
unique experimental signature of Majorana modes.
| 0 | 1 | 0 | 0 | 0 | 0 |
Renormalized Hennings Invariants and 2+1-TQFTs | We construct non-semisimple $2+1$-TQFTs yielding mapping class group
representations in Lyubashenko's spaces. In order to do this, we first
generalize Beliakova, Blanchet and Geer's logarithmic Hennings invariants based
on quantum $\mathfrak{sl}_2$ to the setting of finite-dimensional
non-degenerate unimodular ribbon Hopf algebras. The tools used for this
construction are a Hennings-augmented Reshetikhin-Turaev functor and modified
traces. When the Hopf algebra is factorizable, we further show that the
universal construction of Blanchet, Habegger, Masbaum and Vogel produces a
$2+1$-TQFT on a not completely rigid monoidal subcategory of cobordisms.
| 0 | 0 | 1 | 0 | 0 | 0 |
On Improving Deep Reinforcement Learning for POMDPs | Deep Reinforcement Learning (RL) has recently emerged as one of the most
competitive approaches for learning in sequential decision making problems with
fully observable environments, e.g., computer Go. However, very little work has
been done in deep RL to handle partially observable environments. We propose a
new architecture called Action-specific Deep Recurrent Q-Network (ADRQN) to
enhance learning performance in partially observable domains. Actions are
encoded by a fully connected layer and coupled with a convolutional encoding
of the observation to form an action-observation pair. The time series of
action-observation pairs
are then integrated by an LSTM layer that learns latent states based on which a
fully connected layer computes Q-values as in conventional Deep Q-Networks
(DQNs). We demonstrate the effectiveness of our new architecture in several
partially observable domains, including flickering Atari games.
| 1 | 0 | 0 | 1 | 0 | 0 |
When to Invest in Security? Empirical Evidence and a Game-Theoretic Approach for Time-Based Security | Games of timing aim to determine the optimal defense against a strategic
attacker who has the technical capability to breach a system in a stealthy
fashion. Key questions are when the attack takes place and when a
defensive move should be initiated to reset the system resource to a known safe
state.
In our work, we study a more complex scenario called Time-Based Security in
which we combine three main notions: protection time, detection time, and
reaction time. Protection time represents the amount of time the attacker needs
to execute the attack successfully. In other words, protection time represents
the inherent resilience of the system against an attack. Detection time is the
required time for the defender to detect that the system is compromised.
Reaction time is the required time for the defender to reset the defense
mechanisms in order to recreate a safe system state.
In the first part of the paper, we study the VERIS Community Database (VCDB)
and screen other data sources to provide insights into the actual timing of
security incidents and responses. While we are able to derive distributions for
some of the factors regarding the timing of security breaches, we assess the
state-of-the-art regarding the collection of timing-related data as
insufficient.
In the second part of the paper, we propose a two-player game which captures
the outlined Time-Based Security scenario in which both players move according
to a periodic strategy. We carefully develop the resulting payoff functions,
and provide theorems and numerical results to help the defender to calculate
the best time to reset the defense mechanism by considering protection time,
detection time, and reaction time.
| 1 | 0 | 0 | 0 | 0 | 0 |
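As a toy illustration of the three timing notions above, one can compute the fraction of time a periodically reset system spends compromised. This model is our own simplification for illustration, not the paper's payoff functions:

```python
def compromised_fraction(reset_period, protection, detection, reaction):
    """Fraction of time the system is compromised under periodic resets (toy model).

    Assumes the attacker restarts at each reset, needs `protection` time to
    succeed, and the defender needs detection + reaction time to restore a
    safe state, capped by the next periodic reset.
    """
    exposure = max(0.0, reset_period - protection)  # attack succeeds only if period > protection
    exposure = min(exposure, detection + reaction)  # defender recovers after detection + reaction
    return exposure / reset_period
```

Even this toy captures the paper's qualitative trade-off: resetting more often than the protection time makes the compromised fraction zero, at the price of more frequent defensive moves.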
Converting Cascade-Correlation Neural Nets into Probabilistic Generative Models | Humans are not only adept in recognizing what class an input instance belongs
to (i.e., classification task), but perhaps more remarkably, they can imagine
(i.e., generate) plausible instances of a desired class with ease, when
prompted. Inspired by this, we propose a framework which allows transforming
Cascade-Correlation Neural Networks (CCNNs) into probabilistic generative
models, thereby enabling CCNNs to generate samples from a category of interest.
CCNNs are a well-known class of deterministic, discriminative NNs, which
autonomously construct their topology, and have been successful in giving
accounts for a variety of psychological phenomena. Our proposed framework is
based on a Markov Chain Monte Carlo (MCMC) method, called the
Metropolis-adjusted Langevin algorithm, which capitalizes on the gradient
information of the target distribution to direct its explorations towards
regions of high probability, thereby achieving good mixing properties. Through
extensive simulations, we demonstrate the efficacy of our proposed framework.
| 1 | 0 | 0 | 1 | 0 | 0 |
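The sampler underlying the framework above, the Metropolis-adjusted Langevin algorithm, proposes a gradient-informed move and accepts it with a Metropolis correction for the asymmetric proposal. A one-dimensional sketch (the step size and Gaussian target are illustrative; the paper applies this to distributions defined by a trained CCNN):

```python
import math
import random

def mala(log_p, grad_log_p, x0, step, n, seed=0):
    """Metropolis-adjusted Langevin sampler for a 1-D target (sketch)."""
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(n):
        # Langevin proposal: gradient drift plus Gaussian noise
        mean_x = x + 0.5 * step * grad_log_p(x)
        y = mean_x + math.sqrt(step) * rng.gauss(0.0, 1.0)
        mean_y = y + 0.5 * step * grad_log_p(y)
        # Metropolis correction with the asymmetric proposal densities
        log_q_xy = -((y - mean_x) ** 2) / (2 * step)  # log q(y | x)
        log_q_yx = -((x - mean_y) ** 2) / (2 * step)  # log q(x | y)
        if math.log(rng.random()) < log_p(y) - log_p(x) + log_q_yx - log_q_xy:
            x = y
        samples.append(x)
    return samples
```

The drift term is what gives MALA its good mixing relative to a random-walk Metropolis sampler: proposals are steered toward regions of high probability, as the abstract notes.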
Semiclassical "Divide-and-Conquer" Method for Spectroscopic Calculations of High Dimensional Molecular Systems | A new semiclassical "divide-and-conquer" method is presented with the aim of
demonstrating that quantum dynamics simulations of high dimensional molecular
systems are doable. The method is first tested by calculating the quantum
vibrational power spectra of water, methane, and benzene - three molecules of
increasing dimensionality for which benchmark quantum results are available -
and then applied to C60, a system characterized by 174 vibrational degrees of
freedom. Results show that the approach can accurately account for quantum
anharmonicities, purely quantum features like overtones, and the removal of
degeneracy when the molecular symmetry is broken.
| 0 | 1 | 0 | 0 | 0 | 0 |