title | abstract | cs | phy | math | stat | quantitative biology | quantitative finance
---|---|---|---|---|---|---|---
Fast Amortized Inference and Learning in Log-linear Models with Randomly Perturbed Nearest Neighbor Search | Inference in log-linear models scales linearly with the size of the output space in the worst case. This is often a bottleneck in natural language processing and
computer vision tasks when the output space is feasibly enumerable but very
large. We propose a method to perform inference in log-linear models with
sublinear amortized cost. Our idea hinges on using Gumbel random variable
perturbations and a pre-computed Maximum Inner Product Search data structure to
access the most-likely elements in sublinear amortized time. Our method yields
provable runtime and accuracy guarantees. Further, we present empirical
experiments on ImageNet and Word Embeddings showing significant speedups for
sampling, inference, and learning in log-linear models.
| 1 | 0 | 0 | 1 | 0 | 0 |
Temperature Dependence of Magnetic Excitations: Terahertz Magnons above the Curie Temperature | When an ordered spin system of a given dimensionality undergoes a second-order phase transition, the dependence of the order parameter, i.e., the magnetization, on temperature can be well described by thermal excitations of elementary collective spin excitations (magnons). However, the behavior of the magnons themselves, as a function of temperature and across the transition temperature TC, remains an open question. Utilizing spin-polarized high-resolution electron
energy loss spectroscopy we monitor the high-energy (terahertz) magnons,
excited in an ultrathin ferromagnet, as a function of temperature. We show that
the magnons' energy and lifetime decrease with temperature. The
temperature-induced renormalization of the magnons' energy and lifetime depends
on the wave vector. We provide quantitative results on the temperature-induced
damping and discuss the possible mechanism e.g., multi-magnon scattering. A
careful investigation of physical quantities determining the magnons'
propagation indicates that terahertz magnons sustain their propagating
character even at temperatures far above TC.
| 0 | 1 | 0 | 0 | 0 | 0 |
On stable solitons and interactions of the generalized Gross-Pitaevskii equation with PT- and non-PT-symmetric potentials | We report the bright solitons of the generalized Gross-Pitaevskii (GP)
equation with some types of physically relevant parity-time-(PT-) and
non-PT-symmetric potentials. We find that the constant momentum coefficient can
modulate the linear stability and complicated transverse power-flows (not
always from the gain toward loss) of nonlinear modes. However, the varying
momentum coefficient Gamma(x) can modulate both unbroken linear PT-symmetric
phases and stability of nonlinear modes. Particularly, the nonlinearity can
excite the unstable linear mode (i.e., broken linear PT-symmetric phase) to
stable nonlinear modes. Moreover, we also find stable bright solitons in the
presence of non-PT-symmetric harmonic-Gaussian potential. The interactions of
two bright solitons are also illustrated in PT-symmetric potentials. Finally,
we consider nonlinear modes and transverse power-flows in the three-dimensional
(3D) GP equation with the generalized PT-symmetric Scarf-II potential.
| 0 | 1 | 1 | 0 | 0 | 0 |
Mechanical properties of borophene films: A reactive molecular dynamics investigation | Recent experimental advances have provided routes to the fabrication of several atomically thin, planar forms of boron. For the first time, we
explore the mechanical properties of five types of boron films with various
vacancy ratios ranging from 0.1 to 0.15, using molecular dynamics simulations
with the ReaxFF force field. It is found that the Young's modulus and tensile strength decrease with increasing temperature. We found that boron sheets
exhibit an anisotropic mechanical response due to the different arrangement of
atoms along the armchair and zigzag directions. At room temperature, the 2D Young's modulus and fracture stress of these five sheets are around 63 N/m and 12 N/m, respectively. In addition, the strains at tensile strength are about 9, 11, and 10 percent at 1, 300, and 600 K, respectively. This investigation not only reveals the remarkable stiffness of 2D boron but also relates the mechanical properties of the boron sheets to the loading direction, temperature, and atomic structure.
| 0 | 1 | 0 | 0 | 0 | 0 |
Stronger selection can slow down evolution driven by recombination on a smooth fitness landscape | Stronger selection implies faster evolution---that is, the greater the force,
the faster the change. This apparently self-evident proposition, however, is
derived under the assumption that genetic variation within a population is
primarily supplied by mutation (i.e.\ mutation-driven evolution). Here, we show
that this proposition does not actually hold for recombination-driven
evolution, i.e.\ evolution in which genetic variation is primarily created by
recombination rather than mutation. By numerically investigating population
genetics models of recombination, migration and selection, we demonstrate that
stronger selection can slow down evolution on a perfectly smooth fitness
landscape. Through simple analytical calculation, this apparently
counter-intuitive result is shown to stem from two opposing effects of natural
selection on the rate of evolution. On the one hand, natural selection tends to
increase the rate of evolution by increasing the fixation probability of fitter
genotypes. On the other hand, natural selection tends to decrease the rate of
evolution by decreasing the chance of recombination between immigrants and
resident individuals. As a consequence of these opposing effects, there is a
finite selection pressure maximizing the rate of evolution. Hence, stronger
selection can imply slower evolution if genetic variation is primarily supplied
by recombination.
| 0 | 1 | 0 | 0 | 0 | 0 |
Differences Among Noninformative Stopping Rules Are Often Relevant to Bayesian Decisions | L.J. Savage once hoped to show that "the superficially incompatible systems
of ideas associated on the one hand with [subjective Bayesianism] and on the
other hand with [classical statistics]...lend each other mutual support and
clarification." By 1972, however, he had largely "lost faith in the devices" of
classical statistics. One aspect of those "devices" that he found objectionable
is that differences among "stopping rules" for deciding when to end an experiment can affect decisions made using a classical approach, even though such differences are "noninformative" from a Bayesian perspective. Two experiments that produce
the same data using different stopping rules seem to differ only in the
intentions of the experimenters regarding whether or not they would have
carried on if the data had been different, which seem irrelevant to the
evidential import of the data and thus to facts about what actions the data
warrant.
I argue that classical and Bayesian ideas about stopping rules do in fact
"lend each other" the kind of "mutual support and clarification" that Savage
had originally hoped to find. They do so in a kind of case that is common in
scientific practice, in which those who design an experiment have different
interests from those who will make decisions in light of its results. I show
that, in cases of this kind, Bayesian principles provide qualified support for
the classical statistical practice of "penalizing" "biased" stopping rules.
However, they require this practice in a narrower range of circumstances than
classical principles do, and for different reasons. I argue that classical
arguments for this practice are compelling in precisely the class of cases in
which Bayesian principles also require it, and thus that we should regard
Bayesian principles as clarifying classical statistical ideas about stopping
rules rather than the reverse.
| 0 | 0 | 1 | 1 | 0 | 0 |
CNNs are Globally Optimal Given Multi-Layer Support | Stochastic Gradient Descent (SGD) is the central workhorse for training
modern CNNs. Although it gives impressive empirical performance, it can be slow to converge. In this paper we explore a novel strategy for training a CNN using an
alternation strategy that offers substantial speedups during training. We make
the following contributions: (i) replace the ReLU non-linearity within a CNN
with positive hard-thresholding, (ii) reinterpret this non-linearity as a
binary state vector making the entire CNN linear if the multi-layer support is
known, and (iii) demonstrate that under certain conditions a global optimum of the CNN can be found through local descent. We then employ a novel alternation strategy (between weights and support) for CNN training that leads to substantially faster convergence, enjoys nice theoretical properties, and achieves state-of-the-art results across large-scale datasets (e.g. ImageNet)
as well as other standard benchmarks.
| 1 | 0 | 0 | 0 | 0 | 0 |
The Kontsevich integral for bottom tangles in handlebodies | The Kontsevich integral is a powerful link invariant, taking values in spaces
of Jacobi diagrams. In this paper, we extend the Kontsevich integral to
construct a functor on the category of bottom tangles in handlebodies. This
functor gives a universal finite type invariant of bottom tangles, and refines
a functorial version of the Le-Murakami-Ohtsuki 3-manifold invariant for
Lagrangian cobordisms of surfaces.
| 0 | 0 | 1 | 0 | 0 | 0 |
Commutativity theorems for groups and semigroups | In this note we prove a selection of commutativity theorems for various
classes of semigroups. For instance, if in a separative or completely regular
semigroup $S$ we have $x^p y^p = y^p x^p$ and $x^q y^q = y^q x^q$ for all
$x,y\in S$ where $p$ and $q$ are relatively prime, then $S$ is commutative. In
a separative or inverse semigroup $S$, if there exist three consecutive
integers $i$ such that $(xy)^i = x^i y^i$ for all $x,y\in S$, then $S$ is
commutative. Finally, if $S$ is a separative or inverse semigroup satisfying
$(xy)^3=x^3y^3$ for all $x,y\in S$, and if the cubing map $x\mapsto x^3$ is
injective, then $S$ is commutative.
| 0 | 0 | 1 | 0 | 0 | 0 |
Content-based Approach for Vietnamese Spam SMS Filtering | Short Message Service (SMS) spam is a serious problem in Vietnam because of
the availability of very cheap pre-paid SMS packages. There are some systems to
detect and filter spam messages for English, most of which use machine learning
techniques to analyze the content of messages and classify them. For
Vietnamese, there is some research on spam email filtering but none focused on
SMS. In this work, we propose the first system for filtering Vietnamese spam
SMS. We first propose an appropriate preprocessing method since existing tools
for Vietnamese preprocessing cannot give good accuracy on our dataset. We then
experiment with vector representations and classifiers to find the best model
for this problem. Our system achieves an accuracy of 94% when labelling spam messages, while the misclassification rate of legitimate messages is relatively small, only about 0.4%. This is an encouraging result compared to that for English and can serve as a strong baseline for future development of Vietnamese SMS spam prevention systems.
| 1 | 0 | 0 | 0 | 0 | 0 |
Measuring Software Performance on Linux | Measuring and analyzing the performance of software has reached a high
complexity, caused by more advanced processor designs and the intricate
interaction between user programs, the operating system, and the processor's
microarchitecture. In this report, we summarize our experience of how the performance characteristics of software should be measured when running on a Linux operating system and a modern processor. In particular, (1) we provide a
general overview about hardware and operating system features that may have a
significant impact on timing and how they interact, (2) we identify sources of
errors that need to be controlled in order to obtain unbiased measurement
results, and (3) we propose a measurement setup for Linux to minimize errors.
Although not the focus of this report, we describe the measurement process
using hardware performance counters, which can faithfully reflect the real
bottlenecks on a given processor. Our experiments confirm that our measurement
setup has a large impact on the results. More surprisingly, however, they also
suggest that the setup can be negligible for certain analysis methods.
Furthermore, we found that our setup maintains significantly better performance
under background load conditions, which means it can be used to improve
software in high-performance applications.
| 1 | 0 | 0 | 0 | 0 | 0 |
A general method for calculating lattice Green functions on the branch cut | We present a method for calculating the complex Green function $G_{ij}
(\omega)$ at any real frequency $\omega$ between any two sites $i$ and $j$ on a
lattice. Starting from numbers of walks on square, cubic, honeycomb,
triangular, bcc, fcc, and diamond lattices, we derive Chebyshev expansion
coefficients for $G_{ij} (\omega)$. The convergence of the Chebyshev series can
be accelerated by constructing functions $f(\omega)$ that mimic the van Hove
singularities in $G_{ij} (\omega)$ and subtracting their Chebyshev coefficients
from the original coefficients. We demonstrate this explicitly for the square
lattice and bcc lattice. Our algorithm achieves typical accuracies of 6--9
significant figures using 1000 series terms.
| 0 | 1 | 0 | 0 | 0 | 0 |
Towards A Novel Unified Framework for Developing Formal, Network and Validated Agent-Based Simulation Models of Complex Adaptive Systems | Literature on the modeling and simulation of complex adaptive systems (cas)
has primarily advanced vertically in different scientific domains with
scientists developing a variety of domain-specific approaches and applications.
However, while cas researchers are inherently interested in an
interdisciplinary comparison of models, to the best of our knowledge, there is
currently no single unified framework for facilitating the development,
comparison, communication and validation of models across different scientific
domains. In this thesis, we propose first steps towards such a unified framework using a combination of agent-based and complex network-based modeling approaches, together with guidelines formulated as a set of four levels of usage. These levels allow multidisciplinary researchers to adopt a suitable framework level on the basis of the available data types, their research objectives, and the expected outcomes, thus allowing them to better plan and conduct their respective research case studies.
| 1 | 1 | 0 | 0 | 0 | 0 |
Recover Fine-Grained Spatial Data from Coarse Aggregation | In this paper, we study a new type of spatial sparse recovery problem, that
is to infer the fine-grained spatial distribution of certain density data in a
region only based on the aggregate observations recorded for each of its
subregions. One typical example of this spatial sparse recovery problem is to
infer spatial distribution of cellphone activities based on aggregate mobile
traffic volumes observed at sparsely scattered base stations. We propose a
novel Constrained Spatial Smoothing (CSS) approach, which exploits the local
continuity that exists in many types of spatial data to perform sparse recovery
via finite-element methods, while enforcing the aggregated observation
constraints through an innovative use of the ADMM algorithm. We also improve
the approach to further utilize additional geographical attributes. Extensive
evaluations based on a large dataset of phone call records and a demographic
dataset from the city of Milan show that our approach significantly outperforms
various state-of-the-art approaches, including Spatial Spline Regression (SSR).
| 1 | 0 | 0 | 0 | 0 | 0 |
The Power Allocation Game on Dynamic Networks: Subgame Perfection | In the game theory literature, there appears to be little research on
equilibrium selection for normal-form games with an infinite strategy space and
discontinuous utility functions. Moreover, many existing selection methods are
not applicable to games involving both cooperative and noncooperative scenarios
(e.g., "games on signed graphs"). With the purpose of equilibrium selection,
the power allocation game developed in \cite{allocation}, which is a static,
resource allocation game on signed graphs, will be reformulated into an
extensive form. Results about the subgame perfect Nash equilibria in the
extensive-form game will be given. This appears to be the first time that
subgame perfection based on time-varying graphs is used for equilibrium
selection in network games. This idea of subgame perfection proposed in the
paper may be extrapolated to other network games, which will be illustrated
with a simple example of congestion games.
| 1 | 0 | 0 | 0 | 0 | 0 |
MIMO Graph Filters for Convolutional Neural Networks | Superior performance and ease of implementation have fostered the adoption of
Convolutional Neural Networks (CNNs) for a wide array of inference and
reconstruction tasks. CNNs implement three basic blocks: convolution, pooling
and pointwise nonlinearity. Since the first two operations are well-defined
only on regular-structured data such as audio or images, application of CNNs to
contemporary datasets where the information is defined in irregular domains is
challenging. This paper investigates CNN architectures that operate on signals
whose support can be modeled using a graph. Architectures that replace the
regular convolution with a so-called linear shift-invariant graph filter have
been recently proposed. This paper goes one step further and, under the
framework of multiple-input multiple-output (MIMO) graph filters, imposes
additional structure on the adopted graph filters, to obtain three new (more
parsimonious) architectures. The proposed architectures result in a lower
number of model parameters, reducing the computational complexity, facilitating
the training, and mitigating the risk of overfitting. Simulations show that the
proposed simpler architectures achieve similar performance as more complex
models.
| 0 | 0 | 0 | 1 | 0 | 0 |
An Edge Driven Wavelet Frame Model for Image Restoration | Wavelet frame systems are known to be effective in capturing singularities
from noisy and degraded images. In this paper, we introduce a new edge driven
wavelet frame model for image restoration by approximating images as piecewise
smooth functions. With an implicit representation of image singularities sets,
the proposed model inflicts different strength of regularization on smooth and
singular image regions and edges. The proposed edge driven model is robust to
both image approximation and singularity estimation. The implicit formulation
also enables an asymptotic analysis of the proposed models and a rigorous
connection between the discrete model and a general continuous variational
model. Finally, numerical results on image inpainting and deblurring show that the proposed model compares favorably with several popular image restoration models.
| 1 | 0 | 1 | 0 | 0 | 0 |
An exact algorithm exhibiting RS-RSB/easy-hard correspondence for the maximum independent set problem | A recently proposed exact algorithm for the maximum independent set problem
is analyzed. The typical running time is improved exponentially in some
parameter regions compared to simple binary search. The algorithm also
overcomes the core transition point, where the conventional leaf removal
algorithm fails, and works up to the replica symmetry breaking (RSB) transition
point. This suggests that a leaf removal core itself is not enough for typical
hardness in the random maximum independent set problem, providing further
evidence for RSB being the obstacle for algorithms in general.
| 1 | 1 | 0 | 0 | 0 | 0 |
Flexible Support for Fast Parallel Commutative Updates | Privatizing data is a useful strategy for increasing parallelism in a shared
memory multithreaded program. Independent cores can compute independently on
duplicates of shared data, combining their results at the end of their
computations. Conventional approaches to privatization, however, rely on
explicit static or dynamic memory allocation for duplicated state, increasing
memory footprint and contention for cache resources, especially in shared
caches. In this work, we describe CCache, a system for on-demand privatization
of data manipulated by commutative operations. CCache garners the benefits of
privatization, without the increase in memory footprint or cache occupancy.
Each core in CCache dynamically privatizes commutatively manipulated data,
operating on a copy. Periodically or at the end of its computation, the core
merges its value with the value resident in memory, and when all cores have
merged, the in-memory copy contains the up-to-date value. We describe a
low-complexity architectural implementation of CCache that extends a
conventional multicore to support on-demand privatization without using
additional memory for private copies. We evaluate CCache on several high-value
applications, including random access key-value store, clustering, breadth
first search, and graph ranking, showing speedups of up to 3.2X.
| 1 | 0 | 0 | 0 | 0 | 0 |
Deep learning based supervised semantic segmentation of Electron Cryo-Subtomograms | Cellular Electron Cryo-Tomography (CECT) is a powerful imaging technique for
the 3D visualization of cellular structure and organization at submolecular
resolution. It enables analyzing the native structures of macromolecular
complexes and their spatial organization inside single cells. However, due to
the high degree of structural complexity and practical imaging limitations,
systematic macromolecular structural recovery inside CECT images remains
challenging. Particularly, the recovery of a macromolecule is likely to be
biased by its neighbor structures due to the high molecular crowding. To reduce
the bias, here we introduce a novel 3D convolutional neural network inspired by
Fully Convolutional Network and Encoder-Decoder Architecture for the supervised
segmentation of macromolecules of interest in subtomograms. The tests of our
models on realistically simulated CECT data demonstrate that our new approach
has significantly improved segmentation performance compared to our baseline
approach. Also, we demonstrate that the proposed model has generalization
ability to segment new structures that do not exist in training data.
| 0 | 0 | 0 | 1 | 1 | 0 |
CNN-MERP: An FPGA-Based Memory-Efficient Reconfigurable Processor for Forward and Backward Propagation of Convolutional Neural Networks | Large-scale deep convolutional neural networks (CNNs) are widely used in
machine learning applications. While CNNs involve huge complexity, VLSI (ASIC
and FPGA) chips that deliver high-density integration of computational
resources are regarded as a promising platform for CNN implementation. With massive parallelism of computational units, however, the external memory bandwidth, which is constrained by the pin count of the VLSI chip, becomes the system bottleneck. Moreover, VLSI solutions are usually regarded as lacking the flexibility to be reconfigured for the various parameters of CNNs. This
paper presents CNN-MERP to address these issues. CNN-MERP incorporates an
efficient memory hierarchy that significantly reduces the bandwidth
requirements from multiple optimizations including on/off-chip data allocation,
data flow optimization and data reuse. The proposed 2-level reconfigurability
is utilized to enable fast and efficient reconfiguration, which is based on the
control logic and the multiboot feature of FPGA. As a result, an external
memory bandwidth requirement of 1.94 MB/GFlop is achieved, which is 55% lower than prior art. Under limited DRAM bandwidth, a system throughput of 1244 GFlop/s is achieved on the Virtex UltraScale platform, which is 5.48 times
higher than the state-of-the-art FPGA implementations.
| 1 | 0 | 0 | 0 | 0 | 0 |
Enstrophy Cascade in Decaying Two-Dimensional Quantum Turbulence | We report evidence for an enstrophy cascade in large-scale point-vortex
simulations of decaying two-dimensional quantum turbulence. Devising a method to generate quantum vortex configurations with kinetic energy narrowly localized near a single length scale, we find the dynamics to be well characterised by a superfluid Reynolds number, $\mathrm{Re_s}$, that depends only on the number of vortices and the initial kinetic energy scale.
Under free evolution the vortices exhibit features of a classical enstrophy
cascade, including a $k^{-3}$ power-law kinetic energy spectrum, and steady
enstrophy flux associated with inertial transport to small scales. Clear
signatures of the cascade emerge for $N\gtrsim 500$ vortices. Simulating up to
very large Reynolds numbers ($N = 32{,}768$ vortices), additional features of
the classical theory are observed: the Kraichnan-Batchelor constant is found to
converge to $C' \approx 1.6$, and the width of the $k^{-3}$ range scales as
$\mathrm{Re_s}^{1/2}$. The results support a universal phenomenology
underpinning classical and quantum fluid turbulence.
| 0 | 1 | 0 | 0 | 0 | 0 |
Contextual Multi-armed Bandits under Feature Uncertainty | We study contextual multi-armed bandit problems under linear realizability on
rewards and uncertainty (or noise) on features. For the case of identical noise
on features across actions, we propose an algorithm, coined {\em NLinRel},
having $O\left(T^{\frac{7}{8}} \left(\log{(dT)}+K\sqrt{d}\right)\right)$ regret
bound for $T$ rounds, $K$ actions, and $d$-dimensional feature vectors. Next,
for the case of non-identical noise, we observe that it is impossible for popular linear hypotheses, including {\em NLinRel}, to achieve such sub-linear regret. Instead, under the assumption of Gaussian feature vectors, we prove that a greedy
algorithm has $O\left(T^{\frac23}\sqrt{\log d}\right)$ regret bound with
respect to the optimal linear hypothesis. Utilizing our theoretical
understanding on the Gaussian case, we also design a practical variant of {\em
NLinRel}, coined {\em Universal-NLinRel}, for arbitrary feature distributions.
It first runs {\em NLinRel} to find the `true' coefficient vector using feature uncertainties and then adjusts it to minimize its regret using the
statistical feature information. We justify the performance of {\em
Universal-NLinRel} on both synthetic and real-world datasets.
| 1 | 0 | 0 | 1 | 0 | 0 |
Asymmetric Variational Autoencoders | Variational inference for latent variable models is prevalent in various
machine learning problems, typically solved by maximizing the Evidence Lower
Bound (ELBO) of the true data likelihood with respect to a variational
distribution. However, freely enriching the family of variational distribution
is challenging since the ELBO requires variational likelihood evaluations of
the latent variables. In this paper, we propose a novel framework to enrich the
variational family by incorporating auxiliary variables to the variational
family. The resulting inference network doesn't require density evaluations for
the auxiliary variables and thus complex implicit densities over the auxiliary
variables can be constructed by neural networks. It can be shown that the
actual variational posterior of the proposed approach is essentially modeling a
rich probabilistic mixture of simple variational posteriors indexed by auxiliary variables, so a flexible inference model can be built. Empirical evaluations on several density estimation tasks demonstrate the effectiveness of the
proposed method.
| 1 | 0 | 0 | 1 | 0 | 0 |
Dynamics of homogeneous shear turbulence: A key role of the nonlinear transverse cascade in the bypass concept | To understand the self-sustenance of subcritical turbulence in spectrally
stable shear flows, we performed direct numerical simulations of homogeneous
shear turbulence for different aspect ratios of the flow domain and analyzed
the dynamical processes in Fourier space. There are no exponentially growing
modes in such flows and the turbulence is energetically supported only by the
linear growth of perturbation harmonics due to the shear flow non-normality.
This non-normality-induced, or nonmodal growth is anisotropic in spectral
space, which, in turn, leads to anisotropy of nonlinear processes in this
space. As a result, a transverse (angular) redistribution of harmonics in
Fourier space appears to be the main nonlinear process in these flows, rather
than direct or inverse cascades. We refer to this type of nonlinear
redistribution as the nonlinear transverse cascade. It is demonstrated that the
turbulence is sustained by a subtle interplay between the linear nonmodal
growth and the nonlinear transverse cascade that exemplifies a well-known
bypass scenario of subcritical turbulence. These two basic processes mainly
operate at large length scales, comparable to the domain size. Therefore, this
central, small wave number area of Fourier space is crucial in the
self-sustenance; we defined its size and labeled it as the vital area of
turbulence. Outside the vital area, the nonmodal growth and the transverse
cascade are of secondary importance. Although the cascades and the
self-sustaining process of turbulence are qualitatively the same at different
aspect ratios, the number of harmonics actively participating in this process
varies, but always remains quite large. This implies that the self-sustenance
of subcritical turbulence cannot be described by low-order models.
| 0 | 1 | 0 | 0 | 0 | 0 |
A taxonomy of learning dynamics in 2 x 2 games | Learning would be a convincing method to achieve coordination on an
equilibrium. But does learning converge, and to what? We answer this question
in generic 2-player, 2-strategy games, using Experience-Weighted Attraction
(EWA), which encompasses many extensively studied learning algorithms. We
exhaustively characterize the parameter space of EWA learning, for any payoff
matrix, and we identify the generic properties that imply convergent or
non-convergent behaviour in 2 x 2 games.
Irrational choice and lack of incentives imply convergence to a mixed
strategy in the centre of the strategy simplex, possibly far from the Nash
Equilibrium (NE). In the opposite limit, in which the players quickly modify
their strategies, the behaviour depends on the payoff matrix: (i) a strong
discrepancy between the pure strategies describes dominance-solvable games,
which show convergence to a unique fixed point close to the NE; (ii) a
preference towards profiles of strategies along the main diagonal describes
coordination games, with multiple stable fixed points corresponding to the NE;
(iii) a cycle of best responses defines discoordination games, which commonly
yield limit cycles or low-dimensional chaos.
While it is well known that mixed strategy equilibria may be unstable, our
approach is novel from several perspectives: we fully analyse EWA and provide
explicit thresholds that define the onset of instability; we find an emerging
taxonomy of the learning dynamics, without focusing on specific classes of
games ex-ante; we show that chaos can occur even in the simplest games; we make
a precise theoretical prediction that can be tested against data on
experimental learning of discoordination games.
| 0 | 1 | 0 | 0 | 0 | 0 |
Dynamic attitude planning for trajectory tracking in underactuated VTOL UAVs | This paper addresses the trajectory tracking control problem for
underactuated VTOL UAVs. Depending on the actuation mechanism, the
most common UAV platforms can achieve only a partial decoupling of attitude and
position tasks. Since position tracking is of utmost importance for
applications involving aerial vehicles, we propose a control scheme in which
position tracking is the primary objective. To this end, this work introduces
the concept of attitude planner, a dynamical system through which the desired
attitude reference is processed to guarantee the satisfaction of the primary
objective: the attitude tracking task is considered as a secondary objective
which can be realized as long as the desired trajectory satisfies specific
trackability conditions. Two numerical simulations are performed by applying
the proposed control law to a hexacopter with and without tilted propellers,
in a setting that accounts for unmodeled dynamics and external disturbances not included in
the control design model.
| 1 | 0 | 0 | 0 | 0 | 0 |
The JCMT Transient Survey: Data Reduction and Calibration Methods | Though there has been a significant amount of work investigating the early
stages of low-mass star formation in recent years, the evolution of the mass
assembly rate onto the central protostar remains largely unconstrained.
Examining in depth the variation in this rate is critical to understanding the
physics of star formation. Instabilities in the outer and inner circumstellar
disk can lead to episodic outbursts. Observing these brightness variations at
infrared or submillimetre wavelengths sets constraints on the current accretion
models. The JCMT Transient Survey is a three-year project dedicated to studying
the continuum variability of deeply embedded protostars in eight nearby
star-forming regions at a one month cadence. We use the SCUBA-2 instrument to
simultaneously observe these regions at wavelengths of 450 $\mu$m and 850
$\mu$m. In this paper, we present the data reduction techniques, image
alignment procedures, and relative flux calibration methods for 850 $\mu$m
data. We compare the properties and locations of bright, compact emission
sources fitted with Gaussians over time. Doing so, we achieve a spatial
alignment of better than 1" between the repeated observations and an
uncertainty of 2-3\% in the relative peak brightness of significant, localised
emission. This level of imaging performance is unprecedented in
ground-based, single dish submillimetre observations. Finally, we identify a
few sources that show possible and confirmed brightness variations. These
sources will be closely monitored and presented in further detail in additional
studies throughout the duration of the survey.
| 0 | 1 | 0 | 0 | 0 | 0 |
Synchronization Strings: Explicit Constructions, Local Decoding, and Applications | This paper gives new results for synchronization strings, a powerful
combinatorial object that allows one to deal efficiently with insertions and
deletions in various communication settings:
$\bullet$ We give a deterministic, linear time synchronization string
construction, improving over an $O(n^5)$ time randomized construction.
Independently of this work, a deterministic $O(n\log^2\log n)$ time
construction was just put on arXiv by Cheng, Li, and Wu. We also give a
deterministic linear time construction of an infinite synchronization string,
which was not known to be computable before. Both constructions are highly
explicit, i.e., the $i^{th}$ symbol can be computed in $O(\log i)$ time.
$\bullet$ This paper also introduces a generalized notion we call
long-distance synchronization strings that allow for local and very fast
decoding. In particular, only $O(\log^3 n)$ time and access to logarithmically
many symbols is required to decode any index.
We give several applications for these results:
$\bullet$ For any $\delta<1$ and $\epsilon>0$ we provide an insdel correcting
code with rate $1-\delta-\epsilon$ which can correct any $O(\delta)$ fraction
of insdel errors in $O(n\log^3n)$ time. This near linear computational
efficiency is surprising given that we do not even know how to compute the
(edit) distance between the decoding input and output in sub-quadratic time. We
show that such codes can not only efficiently recover from $\delta$ fraction of
insdel errors but, similar to [Schulman, Zuckerman; TransInf'99], also from any
$O(\delta/\log n)$ fraction of block transpositions and replications.
$\bullet$ We show that high explicitness and local decoding allow for
infinite channel simulations with exponentially smaller memory and decoding
time requirements. These simulations can be used to give the first near linear
time interactive coding scheme for insdel errors.
| 1 | 0 | 0 | 0 | 0 | 0 |
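The synchronization property described in the row above can be checked directly for tiny strings. The sketch below assumes the standard definition (every pair of adjacent substrings S[i:j], S[j:k] has edit distance greater than (1-eps)(k-i)); it is a brute-force checker, nothing like the linear-time constructions the abstract announces.

```python
def edit_distance(a, b):
    """Standard dynamic-programming edit (Levenshtein) distance, one rolling row."""
    m, n = len(a), len(b)
    d = list(range(n + 1))
    for i in range(1, m + 1):
        prev, d[0] = d[0], i
        for j in range(1, n + 1):
            # min over deletion, insertion, and (mis)match
            prev, d[j] = d[j], min(d[j] + 1, d[j - 1] + 1,
                                   prev + (a[i - 1] != b[j - 1]))
    return d[n]

def is_eps_synchronized(s, eps):
    """Brute-force check of the eps-synchronization property: for all i<j<k,
    ED(s[i:j], s[j:k]) > (1 - eps)*(k - i). Cubic in triples, so only for
    tiny strings."""
    n = len(s)
    return all(edit_distance(s[i:j], s[j:k]) > (1 - eps) * (k - i)
               for i in range(n) for j in range(i + 1, n)
               for k in range(j + 1, n + 1))
```

A string of all-distinct symbols passes for eps above 1/2, while a periodic string like "ababab" fails because adjacent repeats have edit distance zero.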
Independent Component Analysis via Energy-based and Kernel-based Mutual Dependence Measures | We apply both distance-based (Jin and Matteson, 2017) and kernel-based
(Pfister et al., 2016) mutual dependence measures to independent component
analysis (ICA), and generalize dCovICA (Matteson and Tsay, 2017) to MDMICA,
minimizing empirical dependence measures as an objective function in both
deflation and parallel manners. To solve this minimization problem, we introduce
Latin hypercube sampling (LHS) (McKay et al., 2000), and a global optimization
method, Bayesian optimization (BO) (Mockus, 1994) to improve the initialization
of the Newton-type local optimization method. The performance of MDMICA is
evaluated in various simulation studies and an image data example. When the ICA
model is correct, MDMICA achieves competitive results compared to existing
approaches. When the ICA model is misspecified, the estimated independent
components are less mutually dependent than the observed components using
MDMICA, while they are prone to be even more mutually dependent than the
observed components using other approaches.
| 0 | 0 | 0 | 1 | 0 | 0 |
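The Latin hypercube sampling step cited in the row above (for initializing the local optimizer) can be sketched in a few lines; this is the generic stratified construction, not the paper's full MDMICA pipeline.

```python
import numpy as np

def latin_hypercube(n, d, rng=None):
    """Draw n points in [0,1]^d by Latin hypercube sampling: each axis is
    split into n equal strata and every stratum is hit exactly once."""
    rng = np.random.default_rng(rng)
    # Row i starts inside stratum i on every axis, jittered uniformly.
    samples = (rng.random((n, d)) + np.arange(n)[:, None]) / n
    # Independently shuffle each column so the axes are decoupled.
    for j in range(d):
        samples[:, j] = samples[rng.permutation(n), j]
    return samples
```

Each column of the result visits all n strata exactly once, which is the property that makes LHS a better space-filling initializer than i.i.d. uniform draws.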
The local geometry of testing in ellipses: Tight control via localized Kolmogorov widths | We study the local geometry of testing a mean vector within a
high-dimensional ellipse against a compound alternative. Given samples of a
Gaussian random vector, the goal is to distinguish whether the mean is equal to
a known vector within an ellipse, or equal to some other unknown vector in the
ellipse. Such ellipse testing problems lie at the heart of several
applications, including non-parametric goodness-of-fit testing, signal
detection in cognitive radio, and regression function testing in reproducing
kernel Hilbert spaces. While past work on such problems has focused on the
difficulty in a global sense, we study difficulty in a way that is localized to
each vector within the ellipse. Our main result is to give sharp upper and
lower bounds on the localized minimax testing radius in terms of an explicit
formula involving the Kolmogorov width of the ellipse intersected with a
Euclidean ball. When applied to particular examples, our general theorems yield
interesting rates that were not known before: as a particular case, for testing
in Sobolev ellipses of smoothness $\alpha$, we demonstrate rates that vary from
$(\sigma^2)^{\frac{4 \alpha}{4 \alpha + 1}}$, corresponding to the classical
global rate, to the faster rate $(\sigma^2)^{\frac{8
\alpha}{8 \alpha + 1}}$, achievable for vectors at favorable locations within
the ellipse. We also show that the optimal test for this problem is achieved by
a linear projection test that is based on an explicit lower-dimensional
projection of the observation vector.
| 0 | 0 | 1 | 1 | 0 | 0 |
A Unified Optimization View on Generalized Matching Pursuit and Frank-Wolfe | Two of the most fundamental prototypes of greedy optimization are the
matching pursuit and Frank-Wolfe algorithms. In this paper, we take a unified
view on both classes of methods, leading to the first explicit convergence
rates of matching pursuit methods in an optimization sense, for general sets of
atoms. We derive sublinear ($1/t$) convergence for both classes on general
smooth objectives, and linear convergence on strongly convex objectives, as
well as a clear correspondence of algorithm variants. Our presented algorithms
and rates are affine invariant, and do not need any incoherence or sparsity
assumptions.
| 1 | 0 | 0 | 1 | 0 | 0 |
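The greedy scheme analysed in the row above can be illustrated with the classical matching pursuit iteration over a finite dictionary; this is a minimal sketch of the textbook algorithm, not the affine-invariant variants or general atom sets the paper treats.

```python
import numpy as np

def matching_pursuit(A, y, steps=100):
    """Classical matching pursuit over a finite atom set (columns of A):
    at each step, pick the atom most correlated with the residual and
    take an exact line-search step along it."""
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        r = y - A @ x                    # current residual
        i = np.argmax(np.abs(A.T @ r))   # best-matching atom
        a = A[:, i]
        x[i] += (a @ r) / (a @ a)        # exact line search along atom i
    return x
```

When y lies in the span of the atoms, the residual shrinks geometrically, consistent with the linear rates the abstract reports for strongly convex objectives.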
Subsampled Rényi Differential Privacy and Analytical Moments Accountant | We study the problem of subsampling in differential privacy (DP), a question
that is the centerpiece behind many successful differentially private machine
learning algorithms. Specifically, we provide a tight upper bound on the
Rényi Differential Privacy (RDP) (Mironov, 2017) parameters for algorithms
that: (1) subsample the dataset, and then (2) apply a randomized mechanism M
to the subsample, in terms of the RDP parameters of M and the subsampling
probability parameter. Our results generalize the moments accounting technique,
developed by Abadi et al. (2016) for the Gaussian mechanism, to any subsampled
RDP mechanism.
| 0 | 0 | 0 | 1 | 0 | 0 |
Concurrency and Probability: Removing Confusion, Compositionally | Assigning a satisfactory truly concurrent semantics to Petri nets with
confusion and distributed decisions is a long standing problem, especially if
one wants to fully replace nondeterminism with probability distributions and no
stochastic structure is desired/allowed. Here we propose a general solution
based on a recursive, static decomposition of (finite, occurrence) nets in loci
of decision, called structural branching cells (s-cells). Each s-cell exposes a
set of alternatives, called transactions, that can be equipped with a general
probabilistic distribution. The solution is formalised as a transformation from
a given Petri net to another net whose transitions are the transactions of the
s-cells and whose places are the places of the original net, with some
auxiliary structure for bookkeeping. The resulting net is confusion-free,
namely if a transition is enabled, then all its conflicting alternatives are
also enabled. Thus sets of conflicting alternatives can be equipped with
probability distributions, while nonintersecting alternatives are purely
concurrent and do not introduce any nondeterminism: they are Church-Rosser and
their probability distributions are independent. The validity of the
construction is witnessed by a tight correspondence result with the recent
approach by Abbes and Benveniste (AB) based on recursively stopped
configurations in event structures. Some advantages of our approach over AB's
are that: i) s-cells are defined statically and locally in a compositional way,
whereas AB's branching cells are defined dynamically and globally; ii) their
recursively stopped configurations correspond to possible executions, but the
existing concurrency is not made explicit. Instead, our resulting nets are
equipped with an original concurrency structure exhibiting a so-called complete
concurrency property.
| 1 | 0 | 0 | 0 | 0 | 0 |
Vision-Based Multi-Task Manipulation for Inexpensive Robots Using End-To-End Learning from Demonstration | We propose a technique for multi-task learning from demonstration that trains
the controller of a low-cost robotic arm to accomplish several complex picking
and placing tasks, as well as non-prehensile manipulation. The controller is a
recurrent neural network using raw images as input and generating robot arm
trajectories, with the parameters shared across the tasks. The controller also
combines VAE-GAN-based reconstruction with autoregressive multimodal action
prediction. Our results demonstrate that it is possible to learn complex
manipulation tasks, such as picking up a towel, wiping an object, and
depositing the towel to its previous position, entirely from raw images with
direct behavior cloning. We show that weight sharing and reconstruction-based
regularization substantially improve generalization and robustness, and
training on multiple tasks simultaneously increases the success rate on all
tasks.
| 1 | 0 | 0 | 0 | 0 | 0 |
Dynamic Word Embeddings | We present a probabilistic language model for time-stamped text data which
tracks the semantic evolution of individual words over time. The model
represents words and contexts by latent trajectories in an embedding space. At
each moment in time, the embedding vectors are inferred from a probabilistic
version of word2vec [Mikolov et al., 2013]. These embedding vectors are
connected in time through a latent diffusion process. We describe two scalable
variational inference algorithms--skip-gram smoothing and skip-gram
filtering--that allow us to train the model jointly over all times; thus
learning on all data while simultaneously allowing word and context vectors to
drift. Experimental results on three different corpora demonstrate that our
dynamic model infers word embedding trajectories that are more interpretable
and lead to higher predictive likelihoods than competing methods that are based
on static models trained separately on time slices.
| 0 | 0 | 0 | 1 | 0 | 0 |
Multiple scattering effect on angular distribution and polarization of radiation by relativistic electrons in a thin crystal | The multiple scattering of ultra relativistic electrons in an amorphous
matter leads to the suppression of the soft part of radiation spectrum (the
Landau-Pomeranchuk-Migdal effect), and can also essentially change the angular
distribution of the emitted photons. A similar effect must take place in a
crystal for the coherent radiation of relativistic electron. The results of the
theoretical investigation of angular distributions and polarization of
radiation by a relativistic electron passing through a thin (in comparison with
a coherence length) crystal at a small angle to the crystal axis are presented.
The electron trajectories in crystal were simulated using the binary collision
model which takes into account both coherent and incoherent effects at
scattering. The angular distribution of radiation and polarization were
calculated as a sum of radiation from each electron. It is shown that there are
nontrivial angular distributions of the emitted photons and their polarization
that are connected to the superposition of the coherent scattering of electrons
by atomic rows ("doughnut scattering" effect) and the suppression of radiation
(similar to the Landau-Pomeranchuk-Migdal effect in an amorphous matter). It is
also shown that circular polarization of radiation in the considered case is
identically zero.
| 0 | 1 | 0 | 0 | 0 | 0 |
Branched coverings of $CP^2$ and other basic 4-manifolds | We give necessary and sufficient conditions for a 4-manifold to be a branched
covering of $CP^2$, $S^2\times S^2$, $S^2 \mathbin{\tilde\times} S^2$ and $S^3
\times S^1$, which are expressed in terms of the Betti numbers and the
intersection form of the 4-manifold.
| 0 | 0 | 1 | 0 | 0 | 0 |
Instantaneous effects of photons on electrons in semiconductors | The
photoelectric effect established by Einstein is well known: electrons on lower
energy levels can jump up to higher levels by absorbing photons, or jump down
from higher levels to lower levels and give out photons. However, how photons
act on electrons, and further on atoms, has remained unknown up to now. Here we
show results indicating that photons collide with electrons in semiconductors,
transmitting energy and passing their momenta to the electrons, which makes the
electrons jump up from lower energy levels to higher levels. We found that (i)
photons have a rest mass of 7.287exp(-38) kg and 2.886exp(-35) kg in vacuum and
in silicon, respectively; (ii) excited by photons with energy of 1.12 eV,
electrons in silicon may jump up from the top of the valence band to the bottom
of the conduction band with an initial speed of 2.543exp(3) m/s, taking a time
of 4.977exp(-17) s; (iii) acted on by photons with energy of 4.6 eV, the atoms
that lose electrons may be catapulted out of the semiconductor by the extruded
neighbor atoms, taking a time of 2.224exp(-15) s. These results provide a
reasonable explanation of rapid thermal annealing, laser ablation and laser
cutting.
| 0 | 1 | 0 | 0 | 0 | 0 |
Mitigating the Impact of Speech Recognition Errors on Chatbot using Sequence-to-Sequence Model | We apply sequence-to-sequence model to mitigate the impact of speech
recognition errors on open domain end-to-end dialog generation. We cast the
task as a domain adaptation problem where ASR transcriptions and original text
are in two different domains. In this paper, our proposed model includes two
individual encoders for each domain data and make their hidden states similar
to ensure the decoder predict the same dialog text. The method shows that the
sequence-to-sequence model can learn the ASR transcriptions and original text
pair having the same meaning and eliminate the speech recognition errors.
Experimental results on Cornell movie dialog dataset demonstrate that the
domain adaption system help the spoken dialog system generate more similar
responses with the original text answers.
| 1 | 0 | 0 | 0 | 0 | 0 |
Simultaneous 183 GHz H2O Maser and SiO Observations Towards Evolved Stars Using APEX SEPIA Band 5 | We investigate the use of 183 GHz H2O masers for characterization of the
physical conditions and mass loss process in the circumstellar envelopes of
evolved stars. We used APEX SEPIA Band 5 to observe the 183 GHz H2O line
towards 2 Red Supergiant and 3 Asymptotic Giant Branch stars. Simultaneously,
we observed lines in 28SiO v0, 1, 2 and 3, and for 29SiO v0 and 1. We detected
the 183 GHz H2O line towards all the stars with peak flux densities greater
than 100 Jy, including a new detection from VY CMa. Towards all 5 targets, the
water line had indications of being due to maser emission and had higher peak
flux densities than for the SiO lines. The SiO lines appear to originate from
both thermal and maser processes. Comparison with simulations and models
indicate that 183 GHz maser emission is likely to extend to greater radii in
the circumstellar envelopes than SiO maser emission and to similar or greater
radii than water masers at 22, 321 and 325 GHz. We speculate that a prominent
blue-shifted feature in the W Hya 183 GHz spectrum is amplifying the stellar
continuum, and is located at a similar distance from the star as mainline OH
maser emission. From a comparison of the individual polarizations, we find that
the SiO maser linear polarization fraction of several features exceeds the
maximum fraction allowed under standard maser assumptions and requires strong
anisotropic pumping of the maser transition and strongly saturated maser
emission. The low polarization fraction of the H2O maser however, fits with the
expectation for a non-saturated maser. 183 GHz H2O masers can provide strong
probes of the mass loss process of evolved stars. Higher angular resolution
observations of this line using ALMA Band 5 will enable detailed investigation
of the emission location in circumstellar envelopes and can also provide
information on magnetic field strength and structure.
| 0 | 1 | 0 | 0 | 0 | 0 |
What does the free energy principle tell us about the brain? | The free energy principle has been proposed as a unifying theory of brain
function. It is closely related, and in some cases subsumes, earlier unifying
ideas such as Bayesian inference, predictive coding, and active learning. This
article clarifies these connections, teasing apart distinctive and shared
predictions.
| 0 | 0 | 0 | 0 | 1 | 0 |
Learning with Changing Features | In this paper we study the setting where features are added or change
interpretation over time, which has applications in multiple domains such as
retail, manufacturing, and finance. In particular, we propose an approach to
provably determine the time instant from which the new/changed features start
becoming relevant with respect to an output variable in an agnostic
(supervised) learning setting. We also suggest an efficient version of our
approach which has the same asymptotic performance. Moreover, our theory also
applies when we have more than one such change point. Independent post analysis
of a change point identified by our method for a large retailer revealed that
it corresponded in time with certain unflattering news stories about a brand
that resulted in the change in customer behavior. We also applied our method to
data from an advanced manufacturing plant identifying the time instant from
which downstream features became relevant. To the best of our knowledge this is
the first work that formally studies change point detection in a distribution
independent agnostic setting, where the change point is based on the changing
relationship between input and output.
| 1 | 0 | 0 | 1 | 0 | 0 |
Estimation Considerations in Contextual Bandits | Contextual bandit algorithms are sensitive to the estimation method of the
outcome model as well as the exploration method used, particularly in the
presence of rich heterogeneity or complex outcome models, which can lead to
difficult estimation problems along the path of learning. We study a
consideration for the exploration vs. exploitation framework that does not
arise in multi-armed bandits but is crucial in contextual bandits: the way
exploration and exploitation is conducted in the present affects the bias and
variance in the potential outcome model estimation in subsequent stages of
learning. We develop parametric and non-parametric contextual bandits that
integrate balancing methods from the causal inference literature in their
estimation to make it less prone to problems of estimation bias. We provide the
first regret bound analyses for contextual bandits with balancing in the domain
of linear contextual bandits that match the state of the art regret bounds. We
demonstrate the strong practical advantage of balanced contextual bandits on a
large number of supervised learning datasets and on a synthetic example that
simulates model mis-specification and prejudice in the initial training data.
Additionally, we develop contextual bandits with simpler assignment policies by
leveraging sparse model estimation methods from the econometrics literature and
demonstrate empirically that in the early stages they can improve the rate of
learning and decrease regret.
| 1 | 0 | 0 | 1 | 0 | 0 |
Using solar and load predictions in battery scheduling at the residential level | Smart solar inverters can be used to store, monitor and manage a home's solar
energy. We describe a smart solar inverter system with battery which can either
operate in an automatic mode or receive commands over a network to charge and
discharge at a given rate. In order to make battery storage financially viable
and advantageous to the consumers, effective battery scheduling algorithms can
be employed. Particularly, when time-of-use tariffs are in effect in the region
of the inverter, it is possible in some cases to schedule the battery to save
money for the individual customer, compared to the "automatic" mode. Hence,
this paper presents and evaluates the performance of a novel battery scheduling
algorithm for residential consumers of solar energy. The proposed battery
scheduling algorithm optimizes the cost of electricity over the next 24 hours for
residential consumers. The cost minimization is realized by controlling the
charging/discharging of battery storage system based on the predictions for
load and solar power generation values. The scheduling problem is formulated as
a linear programming problem. We performed computer simulations over 83
inverters using several months of hourly load and PV data. The simulation
results indicate that key factors affecting the viability of optimization are
the tariffs and the PV to Load ratio at each inverter. Depending on the tariff,
savings of between 1% and 10% can be expected over the automatic approach. The
prediction approach used in this paper is also shown to outperform basic
"persistence" forecasting approaches. We have also examined approaches for
improving the prediction accuracy and optimization effectiveness.
| 1 | 0 | 0 | 0 | 0 | 0 |
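The linear-programming formulation described in the row above can be sketched with `scipy.optimize.linprog`. This is a hedged toy version: the decision variables (grid import, battery charge, discharge), the capacity `cap` and power `rate` limits are assumptions, and round-trip losses and export tariffs from the real system are omitted.

```python
import numpy as np
from scipy.optimize import linprog

def schedule_battery(load, pv, price, cap=10.0, rate=3.0):
    """Cost-minimising battery schedule over a T-hour horizon.
    Variables: [g_0..g_{T-1}, c_0..c_{T-1}, d_0..d_{T-1}]
    (grid import, battery charge, battery discharge)."""
    T = len(load)
    cost = np.concatenate([price, np.zeros(T), np.zeros(T)])  # pay only for grid import
    # Hourly energy balance: g_t - c_t + d_t = load_t - pv_t
    A_eq = np.zeros((T, 3 * T))
    for t in range(T):
        A_eq[t, t] = 1.0           # grid import supplies energy
        A_eq[t, T + t] = -1.0      # charging consumes energy
        A_eq[t, 2 * T + t] = 1.0   # discharging supplies energy
    b_eq = np.asarray(load, float) - np.asarray(pv, float)
    # State of charge stays within [0, cap]: 0 <= sum_{k<=t}(c_k - d_k) <= cap
    A_ub = np.zeros((2 * T, 3 * T))
    for t in range(T):
        A_ub[t, T:T + t + 1] = 1.0          #  soc_t <= cap
        A_ub[t, 2 * T:2 * T + t + 1] = -1.0
        A_ub[T + t] = -A_ub[t]              # -soc_t <= 0
    b_ub = np.concatenate([np.full(T, cap), np.zeros(T)])
    bounds = [(0, None)] * T + [(0, rate)] * (2 * T)
    return linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
```

With a time-of-use tariff, the optimizer charges during cheap or PV-rich hours and discharges during expensive ones, so the optimal cost can never exceed the no-battery baseline of buying every shortfall at spot price.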
Towards Large-Pose Face Frontalization in the Wild | Despite recent advances in face recognition using deep learning, severe
accuracy drops are observed for large pose variations in unconstrained
environments. Learning pose-invariant features is one solution, but needs
expensively labeled large-scale data and carefully designed feature learning
algorithms. In this work, we focus on frontalizing faces in the wild under
various head poses, including extreme profile views. We propose a novel deep 3D
Morphable Model (3DMM) conditioned Face Frontalization Generative Adversarial
Network (GAN), termed as FF-GAN, to generate neutral head pose face images. Our
framework differs from both traditional GANs and 3DMM based modeling.
Incorporating 3DMM into the GAN structure provides shape and appearance priors
for fast convergence with less training data, while also supporting end-to-end
training. The 3DMM-conditioned GAN employs not only the discriminator and
generator loss but also a new masked symmetry loss to retain visual quality
under occlusions, besides an identity loss to recover high frequency
information. Experiments on face recognition, landmark localization and 3D
reconstruction consistently show the advantage of our frontalization method on
faces in the wild datasets.
| 1 | 0 | 0 | 0 | 0 | 0 |
Managing the Public to Manage Data: Citizen Science and Astronomy | Citizen science projects recruit members of the public as volunteers to
process and produce datasets. These datasets must win the trust of the
scientific community. The task of securing credibility involves, in part,
applying standard scientific procedures to clean these datasets. However,
effective management of volunteer behavior also makes a significant
contribution to enhancing data quality. Through a case study of Galaxy Zoo, a
citizen science project set up to generate datasets based on volunteer
classifications of galaxy morphologies, this paper explores how those involved
in running the project manage volunteers. The paper focuses on how methods for
crediting volunteer contributions motivate volunteers to provide higher quality
contributions and to behave in a way that better corresponds to statistical
assumptions made when combining volunteer contributions into datasets. These
methods have made a significant contribution to the success of the project in
securing trust in these datasets, which have been well used by other
scientists. Implications for practice are then presented for citizen science
projects, providing a list of considerations to guide choices regarding how to
credit volunteer contributions to improve the quality and trustworthiness of
citizen science-produced datasets.
| 1 | 1 | 0 | 0 | 0 | 0 |
Modular representations in type A with a two-row nilpotent central character | We study the category of representations of $\mathfrak{sl}_{m+2n}$ in
positive characteristic, whose p-character is a nilpotent whose Jordan type is
the two-row partition (m+n,n). In a previous paper with Anno, we used
Bezrukavnikov-Mirkovic-Rumynin's theory of positive characteristic localization
and exotic t-structures to give a geometric parametrization of the simples
using annular crossingless matchings. Building on this, here we give
combinatorial dimension formulae for the simple objects, and compute the
Jordan-Holder multiplicities of the simples inside the baby Vermas (in the
special case where n=1, i.e. for a subregular nilpotent, these were known from work of
Jantzen). We use Cautis-Kamnitzer's geometric categorification of the tangle
calculus to study the images of the simple objects under the [BMR] equivalence.
The dimension formulae may be viewed as a positive characteristic analogue of
the combinatorial character formulae for simple objects in parabolic category O
for $\mathfrak{sl}_{m+2n}$, due to Lascoux and Schutzenberger.
| 0 | 0 | 1 | 0 | 0 | 0 |
Knowledge Acquisition: A Complex Networks Approach | Complex networks have been found to provide a good representation of the
structure of knowledge, as understood in terms of discoverable concepts and
their relationships. In this context, the discovery process can be modeled as
agents walking in a knowledge space. Recent studies proposed more realistic
dynamics, including the possibility of agents being influenced by others with
higher visibility or by their own memory. However, rather than dealing with
these two concepts separately, as previously approached, in this study we
propose a multi-agent random walk model for knowledge acquisition that
incorporates both concepts. More specifically, we employed the true self
avoiding walk alongside a new dynamics based on jumps, in which agents are
attracted by the influence of others. That was achieved by using a Lévy
flight influenced by a field of attraction emanating from the agents. In order
to evaluate our approach, we use a set of network models and two real networks,
one generated from Wikipedia and another from the Web of Science. The results
were analyzed globally and by regions. In the global analysis, we found that
most of the dynamics parameters do not significantly affect the discovery
dynamics. The local analysis revealed a substantial difference of performance
depending on the network regions where the dynamics are occurring. In
particular, the dynamics at the core of networks tend to be more effective. The
choice of the dynamics parameters also had no significant impact on the
acquisition performance for the considered knowledge networks, even at the
local scale.
| 1 | 1 | 0 | 0 | 0 | 0 |
Frequent flaring in the TRAPPIST-1 system - unsuited for life? | We analyze short cadence K2 light curve of the TRAPPIST-1 system. Fourier
analysis of the data suggests $P_\mathrm{rot}=3.295\pm0.003$ days. The light
curve shows several flares, of which we analyzed 42 events; these have
integrated flare energies of $1.26\times10^{30}-1.24\times10^{33}$ ergs.
Approximately 12% of the flares were complex, multi-peaked eruptions. The
flaring and the possible rotational modulation shows no obvious correlation.
The flaring activity of TRAPPIST-1 probably continuously alters the atmospheres
of the orbiting exoplanets, making these less favorable for hosting life.
| 0 | 1 | 0 | 0 | 0 | 0 |
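The rotation period quoted in the row above comes from Fourier analysis of the light curve. A minimal sketch of recovering a period from an evenly sampled series is below; real K2 photometry needs detrending and flare masking, neither of which is handled here.

```python
import numpy as np

def dominant_period(time, flux):
    """Return the dominant period of an evenly sampled light curve from the
    peak of its FFT power spectrum (assumes uniform time sampling)."""
    flux = np.asarray(flux) - np.mean(flux)
    freqs = np.fft.rfftfreq(len(time), d=time[1] - time[0])
    power = np.abs(np.fft.rfft(flux)) ** 2
    k = 1 + np.argmax(power[1:])   # skip the zero-frequency (mean) bin
    return 1.0 / freqs[k]
```

The recovered period is quantized to the frequency grid, so its precision is set by the baseline of the observations, which is why a long K2 campaign supports the quoted three-decimal uncertainty.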
Playing Pairs with Pepper | As robots become increasingly prevalent in almost all areas of society, the
factors affecting humans' trust in those robots become increasingly important.
This paper is intended to investigate the factor of robot attributes, looking
specifically at the relationship between anthropomorphism and human development
of trust. To achieve this, an interaction game, Matching the Pairs, was
designed and implemented on two robots of varying levels of anthropomorphism,
Pepper and Husky. Participants completed both pre- and post-test questionnaires
that were compared and analyzed predominantly with the use of quantitative
methods, such as paired sample t-tests. Post-test analyses suggested a positive
relationship between trust and anthropomorphism with $80\%$ of participants
confirming that the robots' adoption of facial features assisted in
establishing trust. The results also indicated a positive relationship between
interaction and trust with $90\%$ of participants confirming this for both
robots post-test.
| 1 | 0 | 0 | 0 | 0 | 0 |
Wild theories with o-minimal open core | Let $T$ be a consistent o-minimal theory extending the theory of densely
ordered groups and let $T'$ be a consistent theory. Then there is a complete
theory $T^*$ extending $T$ such that $T$ is an open core of $T^*$, but every
model of $T^*$ interprets a model of $T'$. If $T'$ is NIP, $T^*$ can be chosen
to be NIP as well. From this we deduce the existence of an NIP expansion of the
real field that has no distal expansion.
| 0 | 0 | 1 | 0 | 0 | 0 |
Objective Bayesian inference with proper scoring rules | Standard Bayesian analyses can be difficult to perform when the full
likelihood, and consequently the full posterior distribution, is too complex
and difficult to specify or if robustness with respect to data or to model
misspecifications is required. In these situations, we suggest resorting to a
posterior distribution for the parameter of interest based on proper scoring
rules. Scoring rules are loss functions designed to measure the quality of a
probability distribution for a random variable, given its observed value.
Important examples are the Tsallis score and the Hyvärinen score, which allow
us to deal with model misspecifications or with complex models. Also the full
and the composite likelihoods are both special instances of scoring rules.
The aim of this paper is twofold. Firstly, we discuss the use of scoring
rules in the Bayes formula in order to compute a posterior distribution, named
SR-posterior distribution, and we derive its asymptotic normality. Secondly, we
propose a procedure for building default priors for the unknown parameter of
interest that can be used to update the information provided by the scoring
rule in the SR-posterior distribution. In particular, a reference prior is
obtained by maximizing the average $\alpha$-divergence from the SR-posterior
distribution. For $0 \leq |\alpha|<1$, the result is a Jeffreys-type prior that
is proportional to the square root of the determinant of the Godambe
information matrix associated to the scoring rule. Some examples are discussed.
| 0 | 0 | 1 | 1 | 0 | 0 |
Large Area X-ray Proportional Counter (LAXPC) Instrument on AstroSat | Large Area X-ray Proportional Counter (LAXPC) is one of the major AstroSat
payloads. LAXPC instrument will provide high time resolution X-ray observations
in 3 to 80 keV energy band with moderate energy resolution. A cluster of three
co-aligned identical LAXPC detectors is used in AstroSat to provide large
collection area of more than 6000 cm^2. The large detection volume (15 cm
depth), filled with xenon gas at about 2 atmospheres pressure, results in a
detection efficiency greater than 50% above 30 keV. With its broad energy
range and fine time resolution (10 microseconds), the LAXPC instrument is well
suited for timing and spectral studies of a wide variety of known and transient
X-ray sources in the sky. We have done extensive calibration of all LAXPC
detectors using radioactive sources as well as GEANT4 simulation of LAXPC
detectors. We describe in brief some of the results obtained during the payload
verification phase, along with LAXPC capabilities.
| 0 | 1 | 0 | 0 | 0 | 0 |
A Counterexample to the Vector Generalization of Costa's EPI, and Partial Resolution | We give a counterexample to the vector generalization of Costa's entropy
power inequality (EPI) due to Liu, Liu, Poor and Shamai. In particular, the
claimed inequality can fail if the matrix-valued parameter in the convex
combination does not commute with the covariance of the additive Gaussian
noise. Conversely, the inequality holds if these two matrices commute.
| 1 | 0 | 0 | 0 | 0 | 0 |
Models for Predicting Community-Specific Interest in News Articles | In this work, we ask two questions: 1. Can we predict the type of community
interested in a news article using only features from the article content? and
2. How well do these models generalize over time? To answer these questions, we
compute well-studied content-based features on over 60K news articles from 4
communities on reddit.com. We train and test models over three different time
periods between 2015 and 2017 to demonstrate which features degrade in
performance the most due to concept drift. Our models can classify news
articles into communities with high accuracy, ranging from 0.81 ROC AUC to 1.0
ROC AUC. However, while we can predict the community-specific popularity of
news articles with high accuracy, practitioners should approach these models
carefully. Predictions are both community-pair dependent and feature group
dependent. Moreover, these feature groups generalize over time differently,
with some only degrading slightly over time, but others degrading greatly.
Therefore, we recommend that community-interest predictions are done in a
hierarchical structure, where multiple binary classifiers can be used to
separate community pairs, rather than a traditional multi-class model. Second,
these models should be retrained over time based on accuracy goals and the
availability of training data.
| 0 | 0 | 0 | 1 | 0 | 0 |
On subfiniteness of graded linear series | Hilbert's 14th problem studies the finite generation property of the
intersection of an integral algebra of finite type with a subfield of the field
of fractions of the algebra. It has a negative answer due to the counterexample
of Nagata. We show that a subfinite version of Hilbert's 14th problem has an
affirmative answer. We then establish a graded analogue of this result, which
permits us to show that the subfiniteness of graded linear series does not depend
on the function field in which we consider it. Finally, we apply the
subfiniteness result to the study of geometric and arithmetic graded linear
series.
| 0 | 0 | 1 | 0 | 0 | 0 |
Natasha 2: Faster Non-Convex Optimization Than SGD | We design a stochastic algorithm to train any smooth neural network to
$\varepsilon$-approximate local minima, using $O(\varepsilon^{-3.25})$
backpropagations. The best previously known result was essentially $O(\varepsilon^{-4})$, by SGD.
More broadly, it finds $\varepsilon$-approximate local minima of any smooth
nonconvex function in rate $O(\varepsilon^{-3.25})$, with only oracle access to
stochastic gradients.
| 1 | 0 | 0 | 1 | 0 | 0 |
Evidence for a Dayside Thermal Inversion and High Metallicity for the Hot Jupiter WASP-18b | We find evidence for a strong thermal inversion in the dayside atmosphere of
the highly irradiated hot Jupiter WASP-18b (T$_{eq}=2411K$, $M=10.3M_{J}$)
based on emission spectroscopy from Hubble Space Telescope secondary eclipse
observations and Spitzer eclipse photometry. We demonstrate a lack of water
vapor in either absorption or emission at 1.4$\mu$m. However, we infer emission
at 4.5$\mu$m and absorption at 1.6$\mu$m that we attribute to CO, as well as a
non-detection of all other relevant species (e.g., TiO, VO). The most probable
atmospheric retrieval solution indicates a C/O ratio of 1 and a high
metallicity (C/H=$283^{+395}_{-138}\times$ solar). The derived composition and
T/P profile suggest that WASP-18b is the first example of both a planet with a
non-oxide driven thermal inversion and a planet with an atmospheric metallicity
inconsistent with that predicted for Jupiter-mass planets at $>2\sigma$. Future
observations are necessary to confirm the unusual planetary properties implied
by these results.
| 0 | 1 | 0 | 0 | 0 | 0 |
$α$-$β$ and $β$-$γ$ phase boundaries of solid oxygen observed by adiabatic magnetocaloric effect | The magnetic-field-temperature phase diagram of solid oxygen is investigated
by the adiabatic magnetocaloric effect (MCE) measurement with pulsed magnetic
fields. A relatively large temperature decrease with hysteresis is observed
just below the $\beta$-$\gamma$ and $\alpha$-$\beta$ phase transition
temperatures owing to the field-induced transitions. The magnetic field
dependences of these phase boundaries are obtained as
$T_\mathrm{\beta\gamma}(H)=43.8-1.55\times10^{-3}H^2$ K and
$T_\mathrm{\alpha\beta}(H)=23.9-0.73\times10^{-3}H^2$ K. The magnetic
Clausius-Clapeyron equation quantitatively explains the $H$ dependence of
$T_\mathrm{\beta\gamma}$, but not that of $T_\mathrm{\alpha\beta}$. The MCE
curve at $T_\mathrm{\beta\gamma}$ is of typical first-order, while the curve at
$T_\mathrm{\alpha\beta}$ seems to have both characteristics of first- and
second-order transitions. We discuss the order of the $\alpha$-$\beta$ phase
transition and propose possible reasons for the unusual behavior.
| 0 | 1 | 0 | 0 | 0 | 0 |
Localization Algorithm with Circular Representation in 2D and its Similarity to Mammalian Brains | Extended Kalman filter (EKF) does not guarantee consistent mean and
covariance under linearization, even though it is the main framework for
robotic localization. While Lie group improves the modeling of the state space
in localization, the EKF on Lie group still relies on the arbitrary Gaussian
assumption in the face of nonlinear models. We instead use the von Mises filter
orientation estimation together with the conventional Kalman filter for
position estimation, and thus we are able to characterize the first two moments
of the state estimates. Since the proposed algorithm holds a solid
probabilistic basis, it is fundamentally relieved from the inconsistency
problem. Furthermore, we extend the localization algorithm to fully circular
representation even for position, which is similar to grid patterns found in
mammalian brains and in recurrent neural networks. The applicability of the
proposed algorithms is substantiated not only by strong mathematical foundation
but also by the comparison against other common localization methods.
| 1 | 0 | 0 | 0 | 1 | 0 |
Lusin-type approximation of Sobolev by Lipschitz functions, in Gaussian and $RCD(K,\infty)$ spaces | We establish new approximation results, in the sense of Lusin, of Sobolev
functions by Lipschitz ones, in some classes of non-doubling metric measure
structures. Our proof technique relies upon estimates for heat semigroups and
applies to Gaussian and $RCD(K, \infty)$ spaces. As a consequence, we obtain
quantitative stability for regular Lagrangian flows in Gaussian settings.
| 0 | 0 | 1 | 0 | 0 | 0 |
Distributed Coordination for a Class of Nonlinear Multi-agent Systems with Regulation Constraints | In this paper, a multi-agent coordination problem with steady-state
regulation constraints is investigated for a class of nonlinear systems. Unlike
existing leader-following coordination formulations, the reference signal is
not given by a dynamic autonomous leader but determined as the optimal solution
of a distributed optimization problem. Furthermore, we consider a global
constraint having noisy data observations for the optimization problem, which
implies that the reference signal is not trivially available with existing
optimization algorithms. To handle those challenges, we present a
passivity-based analysis and design approach by using only local objective
function, local data observation and exchanged information from their
neighbors. The proposed distributed algorithms are shown to achieve the optimal
steady-state regulation by rejecting the unknown observation disturbances for
passive nonlinear agents, which are pervasive in various practical problems.
Applications and simulation examples are then given to verify the effectiveness
of our design.
| 1 | 0 | 1 | 0 | 0 | 0 |
Intermodulation distortion of actuated MEMS capacitive switches | For the first time, intermodulation distortion of micro-electromechanical
capacitive switches in the actuated state was analyzed both theoretically and
experimentally. The distortion, although higher than that of switches in the
suspended state, was found to decrease with increasing bias voltage but to
depend weakly on modulation frequencies between 55 kHz and 1.1 MHz. This
dependence could be explained by the orders-of-magnitude increase of the spring
constant when the switches were actuated. Additionally, the analysis suggested
that increasing the spring constant and decreasing the contact roughness could
improve the linearity of actuated switches. These results are critical to
micro-electromechanical capacitive switches used in tuners, filters, phase
shifters, etc., where the linearity of both suspended and actuated states is
critical.
| 0 | 1 | 0 | 0 | 0 | 0 |
Parasitic Bipolar Leakage in III-V FETs: Impact of Substrate Architecture | InGaAs-based Gate-all-Around (GAA) FETs with moderate to high In content are
shown experimentally and theoretically to be unsuitable for low-leakage
advanced CMOS nodes. The primary cause for this is the large leakage penalty
induced by the Parasitic Bipolar Effect (PBE), which is seen to be particularly
difficult to remedy in GAA architectures. Experimental evidence of PBE in
In70Ga30As GAA FETs is demonstrated, along with a simulation-based analysis of
the PBE behavior. The impact of PBE is investigated by simulation for
alternative device architectures, such as bulk FinFETs and
FinFETs-on-insulator. PBE is found to be non-negligible in all standard InGaAs
FET designs. Practical PBE metrics are introduced and the design of a substrate
architecture for PBE suppression is elucidated. Finally, it is concluded that
the GAA architecture is not suitable for low-leakage InGaAs FETs; a bulk FinFET
is better suited for the role.
| 0 | 1 | 0 | 0 | 0 | 0 |
Properties of Ultra Gamma Function | In this paper we study an integral of the type
\[_{\delta,a}\Gamma_{\rho,b}(x)
=\Gamma(\delta,a;\rho,b)(x)=\int_{0}^{\infty}t^{x-1}e^{-\frac{t^{\delta}}{a}-\frac{t^{-\rho}}{b}}dt.\]
Different authors have called this integral by different names, such as the
ultra gamma function, generalized gamma function, Krätzel integral, inverse
Gaussian integral, reaction-rate probability integral, and Bessel integral. We
prove several identities and recurrence relations for this integral, which we
call the Four-Parameter Gamma Function. We also derive relations between the
Four-Parameter Gamma Function, the p-k Gamma Function, and the classical Gamma
Function. Under certain conditions, the Four-Parameter Gamma Function can be
evaluated in terms of hypergeometric functions.
| 0 | 0 | 1 | 0 | 0 | 0 |
Further remarks on liftings of crossed modules | In this paper we define the notion of pullback lifting of a lifting crossed
module over a crossed module morphism and interpret this notion in the category
of group-groupoid actions as pullback action. Moreover, we give a criterion for
the lifting of homotopic crossed module morphisms to be homotopic, which will
be called homotopy lifting property for crossed module morphisms. Finally, we
investigate some properties of derivations of lifting crossed modules according
to base crossed module derivations.
| 0 | 0 | 1 | 0 | 0 | 0 |
Submodular Maximization through the Lens of Linear Programming | The simplex algorithm for linear programming is based on the fact that any
local optimum with respect to the polyhedral neighborhood is also a global
optimum. We show that a similar result carries over to submodular maximization.
In particular, every local optimum of a constrained monotone submodular
maximization problem yields a $1/2$-approximation, and we also present an
appropriate extension to the non-monotone setting. However, reaching a local
optimum quickly is a non-trivial task. We therefore describe a fast and very
general local search procedure that applies to a wide range of constraint
families, and unifies as well as extends previous methods. In our framework, we
match known approximation guarantees while disentangling and simplifying
previous approaches. Moreover, despite its generality, we are able to show that
our local search procedure is slightly faster than previous specialized
methods. Furthermore, we resolve an open question on the relation between
linear optimization and submodular maximization; namely, whether a linear
optimization oracle may be enough to obtain strong approximation algorithms for
submodular maximization. We show that this is not the case by providing an
example of a constraint family on a ground set of size $n$ for which, if only
given a linear optimization oracle, any algorithm for submodular maximization
with a polynomial number of calls to the linear optimization oracle will have
an approximation ratio of only $O ( \frac{1}{\sqrt{n}} \cdot \frac{\log
n}{\log\log n} )$.
| 1 | 0 | 0 | 0 | 0 | 0 |
Channel Estimation for Diffusive MIMO Molecular Communications | In diffusion-based communication, as for molecular systems, the achievable
data rate is very low due to the slow nature of diffusion and the existence of
severe inter-symbol interference (ISI). Multiple-input multiple-output (MIMO)
technique can be used to improve the data rate. Knowledge of channel impulse
response (CIR) is essential for equalization and detection in MIMO systems.
This paper presents a training-based CIR estimation for diffusive MIMO (D-MIMO)
channels. Maximum likelihood and least-squares estimators are derived, and the
training sequences are designed to minimize the corresponding Cramér-Rao
bound. Sub-optimal estimators are compared to the Cramér-Rao bound to validate
their performance.
| 1 | 0 | 0 | 1 | 0 | 0 |
Multi-stage splitting integrators for sampling with modified Hamiltonian Monte Carlo methods | Modified Hamiltonian Monte Carlo (MHMC) methods combine the ideas behind two
popular sampling approaches: Hamiltonian Monte Carlo (HMC) and importance
sampling. As in the HMC case, the bulk of the computational cost of MHMC
algorithms lies in the numerical integration of a Hamiltonian system of
differential equations. We suggest novel integrators designed to enhance
accuracy and sampling performance of MHMC methods. The novel integrators belong
to families of splitting algorithms and are therefore easily implemented. We
identify optimal integrators within the families by minimizing the energy error
or the average energy error. We derive and discuss in detail the modified
Hamiltonians of the new integrators, as the evaluation of those Hamiltonians is
key to the efficiency of the overall algorithms. Numerical experiments show
that the use of the new integrators may improve very significantly the sampling
performance of MHMC methods, in both statistical and molecular dynamics
problems.
| 1 | 0 | 0 | 0 | 0 | 0 |
Proposal for a High Precision Tensor Processing Unit | This whitepaper proposes the design and adoption of a new generation of
Tensor Processing Unit which has the performance of Google's TPU, yet performs
operations on wide precision data. The new generation TPU is made possible by
implementing arithmetic circuits which compute using a new general purpose,
fractional arithmetic based on the residue number system.
| 1 | 0 | 0 | 0 | 0 | 0 |
DICOD: Distributed Convolutional Sparse Coding | In this paper, we introduce DICOD, a convolutional sparse coding algorithm
which builds shift invariant representations for long signals. This algorithm
is designed to run in a distributed setting, with local message passing, making
it communication efficient. It is based on coordinate descent and uses locally
greedy updates which accelerate the resolution compared to greedy coordinate
selection. We prove the convergence of this algorithm and highlight its
computational speed-up which is super-linear in the number of cores used. We
also provide empirical evidence for the acceleration properties of our
algorithm compared to state-of-the-art methods.
| 1 | 0 | 0 | 1 | 0 | 0 |
On uniqueness results for Dirichlet problems of elliptic systems without DeGiorgi-Nash-Moser regularity | We study uniqueness of Dirichlet problems of second order divergence-form
elliptic systems with transversally independent coefficients on the upper
half-space in absence of regularity of solutions. To this end, we develop a
substitute for the fundamental solution used to invert elliptic operators on
the whole space by means of a representation via abstract single layer
potentials. We also show that such layer potentials are uniquely determined.
| 0 | 0 | 1 | 0 | 0 | 0 |
(LaTiO$_3$)$_n$/(LaVO$_3$)$_n$ as a model system for unconventional charge transfer and polar metallicity | At interfaces between oxide materials, lattice and electronic reconstructions
always play important roles in exotic phenomena. In this study, the density
functional theory and maximally localized Wannier functions are employed to
investigate the (LaTiO$_3$)$_n$/(LaVO$_3$)$_n$ magnetic superlattices. The
electron transfer from Ti$^{3+}$ to V$^{3+}$ is predicted, which violates the
intuitive band alignment based on the electronic structures of LaTiO$_3$ and
LaVO$_3$. Such unconventional charge transfer quenches the magnetism of
LaTiO$_3$ layer mostly and leads to metal-insulator transition in the $n=1$
superlattice when the stacking orientation is altered. In addition, the
compatibility among the polar structure, ferrimagnetism, and metallicity is
predicted in the $n=2$ superlattice.
| 0 | 1 | 0 | 0 | 0 | 0 |
Modeling sorption of emerging contaminants in biofilms | A mathematical model for emerging contaminants sorption in multispecies
biofilms, based on a continuum approach and mass conservation principles is
presented. Diffusion of contaminants within the biofilm is described using a
diffusion-reaction equation. Binding site formation and occupation are modeled
by two systems of hyperbolic partial differential equations, which are mutually
connected through the two growth rate terms. The model is completed with a
system of hyperbolic equations governing the microbial species growth within
the biofilm; a system of parabolic equations for substrates diffusion and
reaction and a nonlinear ordinary differential equation describing the free
boundary evolution. Two real special cases are modelled. The first one
describes the dynamics of a free sorbent component diffusing and reacting in a
multispecies biofilm. In the second illustrative case, the fate of two
different contaminants has been modelled.
| 0 | 1 | 0 | 0 | 0 | 0 |
Stabilizing Training of Generative Adversarial Networks through Regularization | Deep generative models based on Generative Adversarial Networks (GANs) have
demonstrated impressive sample quality but in order to work they require a
careful choice of architecture, parameter initialization, and selection of
hyper-parameters. This fragility is in part due to a dimensional mismatch or
non-overlapping support between the model distribution and the data
distribution, causing their density ratio and the associated f-divergence to be
undefined. We overcome this fundamental limitation and propose a new
regularization approach with low computational cost that yields a stable GAN
training procedure. We demonstrate the effectiveness of this regularizer across
several architectures trained on common benchmark image generation tasks. Our
regularization turns GAN models into reliable building blocks for deep
learning.
| 1 | 0 | 0 | 1 | 0 | 0 |
Spurious Vanishing Problem in Approximate Vanishing Ideal | Approximate vanishing ideal, which is a new concept from computer algebra, is
a set of polynomials that almost takes a zero value for a set of given data
points. The introduction of approximation to exact vanishing ideal has played a
critical role in capturing the nonlinear structures of noisy data by computing
the approximate vanishing polynomials. However, the approximation introduces a
theoretical issue, the spurious vanishing problem: any polynomial can be turned
into an approximate vanishing polynomial by coefficient scaling. In the present
paper, we propose a general method that
enables many basis construction methods to overcome this problem. Furthermore,
a coefficient truncation method is proposed that balances the theoretical
soundness and computational cost. The experiments show that the proposed method
overcomes the spurious vanishing problem and significantly increases the
accuracy of classification.
| 1 | 0 | 0 | 1 | 0 | 0 |
Opportunities for Two-color Experiments at the SASE3 undulator line of the European XFEL | X-ray Free Electron Lasers (XFELs) have been proven to generate short and
powerful radiation pulses allowing for a wide class of novel experiments. If an
XFEL facility supports the generation of two X-ray pulses with different
wavelengths and controllable delay, the range of possible experiments is
broadened even further to include X-ray-pump/X-ray-probe applications. In this
work we discuss the possibility of applying a simple and cost-effective method
for producing two-color pulses at the SASE3 soft X-ray beamline of the European
XFEL. The technique is based on the installation of a magnetic chicane in the
baseline undulator and can be accomplished in several steps. We discuss the
scientific interest of this upgrade for the Small Quantum Systems (SQS)
instrument, in connection with the high-repetition rate of the European XFEL,
and we provide start-to-end simulations up to the radiation focus on the
sample, proving the feasibility of our concept.
| 0 | 1 | 0 | 0 | 0 | 0 |
Braiding errors in interacting Majorana quantum wires | Avenues of Majorana bound states (MBSs) have become one of the primary
directions towards a possible realization of topological quantum computation.
For a Y-junction of Kitaev quantum wires, we numerically investigate the
braiding of MBSs while considering the full quasi-particle background. The two
central sources of braiding errors are found to be the fidelity loss due to the
incomplete adiabaticity of the braiding operation as well as the hybridization
of the MBS. The explicit extraction of the braiding phase in the low-energy
Majorana sector from the full many-particle Hilbert space allows us to analyze
the breakdown of the independent-particle picture of Majorana braiding.
Furthermore, we find nearest-neighbor interactions to significantly affect the
braiding performance for better or worse, depending on the sign and
magnitude of the coupling.
| 0 | 1 | 0 | 0 | 0 | 0 |
Machine Learning for Structured Clinical Data | Research is a tertiary priority in the EHR, where the priorities are patient
care and billing. Because of this, the data is not standardized or formatted in
a manner easily adapted to machine learning approaches. Data may be missing for
a large variety of reasons ranging from individual input styles to differences
in clinical decision making, for example, which lab tests to issue. Few
patients are annotated at research quality, limiting sample size and
presenting a moving gold standard. Patient progression over time is key to
understanding many diseases but many machine learning algorithms require a
snapshot, at a single time point, to create a usable vector form. Furthermore,
algorithms that produce black box results do not provide the interpretability
required for clinical adoption. This chapter discusses these challenges and
others in applying machine learning techniques to the structured EHR (i.e.
Patient Demographics, Family History, Medication Information, Vital Signs,
Laboratory Tests, Genetic Testing). It does not cover feature extraction from
additional sources such as imaging data or free text patient notes but the
approaches discussed can include features extracted from these sources.
| 1 | 0 | 0 | 0 | 0 | 0 |
Hidden order and symmetry protected topological states in quantum link ladders | We show that whereas spin-1/2 one-dimensional U(1) quantum-link models (QLMs)
are topologically trivial, when implemented in ladder-like lattices these
models may present an intriguing ground-state phase diagram, which includes a
symmetry protected topological (SPT) phase that may be readily revealed by
analyzing long-range string spin correlations along the ladder legs. We propose
a simple scheme for the realization of spin-1/2 U(1) QLMs based on
single-component fermions loaded in an optical lattice with s- and p-bands,
showing that the SPT phase may be experimentally realized by adiabatic
preparation.
| 0 | 1 | 0 | 0 | 0 | 0 |
The effect of prudence on the optimal allocation in possibilistic and mixed models | In this paper two portfolio choice models are studied: a purely possibilistic
model, in which the return of a risky asset is a fuzzy number, and a mixed
model in which a probabilistic background risk is added. For the two models an
approximate formula of the optimal allocation is computed, with respect to the
possibilistic moments associated with fuzzy numbers and the indicators of the
investor risk preferences (risk aversion, prudence).
| 0 | 0 | 0 | 0 | 0 | 1 |
On universal operators and universal pairs | We study some basic properties of the class of universal operators on Hilbert
space, and provide new examples of universal operators and universal pairs.
| 0 | 0 | 1 | 0 | 0 | 0 |
Interleaving Lattice for the APS Linac | To realize and test advanced accelerator concepts and hardware, a beamline is
being reconfigured in the Linac Extension Area (LEA) of APS linac. A
photo-cathode RF gun installed at the beginning of the APS linac will provide a
low emittance electron beam into the LEA beamline. The thermionic RF gun beam
for the APS storage ring, and the photo-cathode RF gun beam for LEA beamline
will be accelerated through the linac in an interleaved fashion. In this paper,
the design studies for interleaving lattice realization in the APS linac are
described, along with initial experimental results.
| 0 | 1 | 0 | 0 | 0 | 0 |
\textit{Ab Initio} Study of the Magnetic Behavior of Metal Hydrides: A Comparison with the Slater-Pauling Curve | We investigated the magnetic behavior of metal hydrides FeH$_{x}$, CoH$_{x}$
and NiH$_{x}$ for several concentrations of hydrogen ($x$) by using Density
Functional Theory calculations. Several structural phases of the metallic host:
bcc ($\alpha$), fcc ($\gamma$), hcp ($\varepsilon$), dhcp ($\varepsilon'$),
tetragonal structure for FeH$_{x}$ and $\varepsilon$-$\gamma$ phases for
CoH$_{x}$, were studied. We found that for CoH$_{x}$ and NiH$_{x}$ the magnetic
moment ($m$) decreases regardless of the concentration $x$. However, for FeH$_{x}$
systems, $m$ increases or decreases depending on the variation in $x$. In order
to find a general trend for these changes of $m$ in magnetic metal hydrides, we
compare our results with the Slater-Pauling curve for ferromagnetic metallic
binary alloys. It is found that the $m$ of metal hydrides made of Fe, Co and Ni
fits the shape of the Slater-Pauling curve as a function of $x$. Our results
indicate that there are two main effects that determine the $m$ value due to
hydrogenation: an increase of volume causes $m$ to increase, and the addition
of an extra electron to the metal always causes it to decrease. We discuss
these behaviors in detail.
| 0 | 1 | 0 | 0 | 0 | 0 |
Secret Sharing for Cloud Data Security | Cloud computing helps reduce costs, increase business agility and deploy
solutions with a high return on investment for many types of applications.
However, data security is of premium importance to many users and often
restrains their adoption of cloud technologies. Various approaches, i.e., data
encryption, anonymization, replication and verification, help enforce different
facets of data security. Secret sharing is a particularly interesting
cryptographic technique. Its most advanced variants indeed simultaneously
enforce data privacy, availability and integrity, while allowing computation on
encrypted data. The aim of this paper is thus to wholly survey secret sharing
schemes with respect to data security, data access and costs in the
pay-as-you-go paradigm.
| 1 | 0 | 0 | 0 | 0 | 0 |
Design and Analysis of a Secure Three Factor User Authentication Scheme Using Biometric and Smart Card | Passwords alone can no longer provide enough security in the area of remote
user authentication. Considering this security drawback, researchers are trying
to find solution with multifactor remote user authentication system. Recently,
three factor remote user authentication using biometric and smart card has
drawn a considerable attention of the researchers. However, most of the current
proposed schemes have security flaws. They are vulnerable to attacks like user
impersonation attack, server masquerading attack, password guessing attack,
insider attack, denial of service attack, forgery attack, etc. Also, most of
them are unable to provide mutual authentication, session key agreement and
password, or smart card recovery system. Considering these drawbacks, we
propose a secure three factor user authentication scheme using biometric and
smart card. Through security analysis, we show that our proposed scheme can
overcome drawbacks of existing systems and ensure high security in remote user
authentication.
| 1 | 0 | 0 | 0 | 0 | 0 |
Real-World Modeling of a Pathfinding Robot Using Robot Operating System (ROS) | This paper presents a practical approach towards implementing pathfinding
algorithms on real-world and low-cost non- commercial hardware platforms. While
using robotics simulation platforms as a test-bed for our algorithms we easily
overlook real-world exogenous problems that are developed by external factors.
Such problems involve robot wheel slips, asynchronous motors, abnormal sensory
data or unstable power sources. These real-world dynamics can make even simple
algorithms like a Wavefront planner or A-star search difficult to execute reliably.
This paper addresses designing techniques that tend to be robust as well as
reusable for any hardware platforms; covering problems like controlling
asynchronous drives, odometry offset issues and handling abnormal sensory
feedback. The algorithm implementation medium and hardware design tools have
been kept general in order to present our work as a serving platform for future
researchers and robotics enthusiast working in the field of path planning
robotics.
| 1 | 0 | 0 | 0 | 0 | 0 |
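The Wavefront planner named in the abstract above is essentially a breadth-first cost expansion from the goal over an occupancy grid, followed by gradient descent on the cost map. The following minimal sketch illustrates the idea; the grid encoding, function names, and 4-connectivity are our own illustrative assumptions, not details from the paper.

```python
from collections import deque

def wavefront(grid, goal):
    """Breadth-first wavefront expansion from the goal cell.
    grid: 2D list, 0 = free, 1 = obstacle. Returns a cost map where
    each reachable free cell holds its step distance to the goal."""
    rows, cols = len(grid), len(grid[0])
    cost = [[None] * cols for _ in range(rows)]
    cost[goal[0]][goal[1]] = 0
    queue = deque([goal])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and cost[nr][nc] is None):
                cost[nr][nc] = cost[r][c] + 1
                queue.append((nr, nc))
    return cost

def extract_path(cost, start):
    """Follow strictly decreasing cost values from start to the goal.
    Assumes start is reachable (its cost entry is not None)."""
    path = [start]
    r, c = start
    while cost[r][c] != 0:
        r, c = min(
            ((r + dr, c + dc)
             for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
             if 0 <= r + dr < len(cost) and 0 <= c + dc < len(cost[0])
             and cost[r + dr][c + dc] is not None),
            key=lambda p: cost[p[0]][p[1]])
        path.append((r, c))
    return path
```

On hardware, each path cell would then be handed to the drive controller, which is where the odometry-offset and asynchronous-motor issues the paper discusses come into play.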
Understanding Convolution for Semantic Segmentation | Recent advances in deep learning, especially deep convolutional neural
networks (CNNs), have led to significant improvement over previous semantic
segmentation systems. Here we show how to improve pixel-wise semantic
segmentation by manipulating convolution-related operations that are of both
theoretical and practical value. First, we design dense upsampling convolution
(DUC) to generate pixel-level prediction, which is able to capture and decode
more detailed information that is generally missing in bilinear upsampling.
Second, we propose a hybrid dilated convolution (HDC) framework in the encoding
phase. This framework 1) effectively enlarges the receptive fields (RF) of the
network to aggregate global information; 2) alleviates what we call the
"gridding issue" caused by the standard dilated convolution operation. We
evaluate our approaches thoroughly on the Cityscapes dataset, and achieve a
state-of-the-art result of 80.1% mIOU in the test set at the time of submission. We
also have achieved state-of-the-art overall on the KITTI road estimation
benchmark and the PASCAL VOC2012 segmentation task. Our source code can be
found at this https URL .
| 1 | 0 | 0 | 0 | 0 | 0 |
Mechanical Failure in Amorphous Solids: Scale Free Spinodal Criticality | The mechanical failure of amorphous media is a ubiquitous phenomenon from
material engineering to geology. It has been noticed for a long time that the
phenomenon is "scale-free", indicating some type of criticality. In spite of
attempts to invoke "Self-Organized Criticality", the physical origin of this
criticality, and also its universal nature, being quite insensitive to the
nature of microscopic interactions, remained elusive. Recently we proposed that
the precise nature of this critical behavior is manifested by a spinodal point
of a thermodynamic phase transition. Moreover, at the spinodal point there
exists a divergent correlation length which is associated with the
system-spanning instabilities (known also as shear bands) which are typical to
the mechanical yield. Demonstrating this requires the introduction of an "order
parameter" that is suitable for distinguishing between disordered amorphous
systems, and an associated correlation function, suitable for picking up the
growing correlation length. The theory, the order parameter, and the
correlation functions used are universal in nature and can be applied to any
amorphous solid that undergoes mechanical yield. Critical exponents for the
correlation length divergence and the system size dependence are estimated. The
phenomenon is seen at its sharpest in athermal systems, as is explained below;
in this paper we extend the discussion also to thermal systems, showing that at
sufficiently high temperatures the spinodal phenomenon is destroyed by thermal
fluctuations.
| 0 | 1 | 0 | 0 | 0 | 0 |
Moment conditions in strong laws of large numbers for multiple sums and random measures | The validity of the strong law of large numbers for multiple sums $S_n$ of
independent identically distributed random variables $Z_k$, $k\leq n$, with
$r$-dimensional indices is equivalent to the integrability of
$|Z|(\log^+|Z|)^{r-1}$, where $Z$ is the typical summand. We consider the
strong law of large numbers for more general normalisations, without assuming
that the summands $Z_k$ are identically distributed, and prove a multiple sum
generalisation of the Brunk--Prohorov strong law of large numbers. In the case
of identical finite moments of order $2q$ with integer $q\geq 1$, we show that
the strong law of large numbers holds with the normalisation $\|n_1\cdots
n_r\|^{1/2}(\log n_1\cdots\log n_r)^{1/(2q)+\varepsilon}$ for any
$\varepsilon>0$. The obtained results are also formulated in the setting of
ergodic theorems for random measures, in particular those generated by marked
point processes.
| 0 | 0 | 1 | 0 | 0 | 0 |
Connecting Clump Sizes in Turbulent Disk Galaxies to Instability Theory | In this letter we study the mean sizes of Halpha clumps in turbulent disk
galaxies relative to kinematics, gas fractions, and Toomre Q. We use 100 pc
resolution HST images, IFU kinematics, and gas fractions of a sample of rare,
nearby turbulent disks with properties closely matched to z~1.5-2 main-sequence
galaxies (the DYNAMO sample). We find linear correlations of normalized mean
clump sizes with both the gas fraction and the velocity dispersion-to-rotation
velocity ratio of the host galaxy. We show that these correlations are
consistent with predictions derived from a model of instabilities in a
self-gravitating disk (the so-called "violent disk instability model"). We also
observe, using a two-fluid model for Q, a correlation between the size of
clumps and self-gravity driven unstable regions. These results are most
consistent with the hypothesis that massive star forming clumps in turbulent
disks are the result of instabilities in self-gravitating gas-rich disks, and
therefore provide a direct connection between resolved clump sizes and this in
situ mechanism.
| 0 | 1 | 0 | 0 | 0 | 0 |
Anomalous slowing down of individual human activity due to successive decision-making processes | Motivated by a host of empirical evidences revealing the bursty character of
human dynamics, we develop a model of human activity based on successive
switching between a hesitation state and a decision-realization state, with
residency times in the hesitation state distributed according to a heavy-tailed
Pareto distribution. This model is particularly reminiscent of an individual
strolling through a randomly distributed human crowd. Using a stochastic model
based on the concept of anomalous and non-Markovian Lévy walk, we show
exactly that successive decision-making processes drastically slow down the
progression of an individual faced with randomly distributed obstacles.
Specifically, we prove exactly that the average displacement exhibits a
sublinear scaling with time that finds its origins in: (i) the intrinsically
non-Markovian character of human activity, and (ii) the power law distribution
of hesitation times.
| 0 | 1 | 0 | 0 | 0 | 0 |
The spectrum, radiation conditions and the Fredholm property for the Dirichlet Laplacian in a perforated plane with semi-infinite inclusions | We consider the spectral Dirichlet problem for the Laplace operator in the
plane $\Omega^{\circ}$ with double-periodic perforation but also in the domain
$\Omega^{\bullet}$ with a semi-infinite foreign inclusion so that the
Floquet-Bloch technique and the Gelfand transform do not apply directly. We
describe waves which are localized near the inclusion and propagate along it.
We give a formulation of the problem with radiation conditions that provides a
Fredholm operator of index zero. The main conclusion concerns the spectra
$\sigma^{\circ}$ and $\sigma^{\bullet}$ of the problems in $\Omega^{\circ}$ and
$\Omega^{\bullet},$ namely we present a concrete geometry which supports the
relation $\sigma^{\circ}\varsubsetneqq\sigma^{\bullet}$ due to a new non-empty
spectral band caused by the semi-infinite inclusion called an open waveguide in
the double-periodic medium.
| 0 | 0 | 1 | 0 | 0 | 0 |
On the Semantics and Complexity of Probabilistic Logic Programs | We examine the meaning and the complexity of probabilistic logic programs
that consist of a set of rules and a set of independent probabilistic facts
(that is, programs based on Sato's distribution semantics). We focus on two
semantics, respectively based on stable and on well-founded models. We show
that the semantics based on stable models (referred to as the "credal
semantics") produces sets of probability models that dominate infinitely
monotone Choquet capacities; we describe several useful consequences of this
result. We then examine the complexity of inference with probabilistic logic
programs. We distinguish between the complexity of inference when a
probabilistic program and a query are given (the inferential complexity), and
the complexity of inference when the probabilistic program is fixed and the
query is given (the query complexity, akin to data complexity as used in
database theory). We obtain results on the inferential and query complexity for
acyclic, stratified, and cyclic propositional and relational programs;
complexity reaches various levels of the counting hierarchy and even
exponential levels.
| 1 | 0 | 0 | 0 | 0 | 0 |
Fitting Probabilistic Index Models on Large Datasets | Recently, Thas et al. (2012) introduced a new statistical model for the
probabilistic index. This index is defined as $P(Y \leq Y^*|X, X^*)$, where Y and
Y* are independent random response variables associated with covariates X and
X* [...] Crucially to estimate the parameters of the model, a set of
pseudo-observations is constructed. For a sample size n, a total of $n(n-1)/2$
pairwise comparisons between observations is considered. Consequently for large
sample sizes, it becomes computationally infeasible or even impossible to fit
the model as the set of pseudo-observations increases nearly quadratically. In
this dissertation, we provide two solutions to fit a probabilistic index model.
The first algorithm consists of splitting the entire data set into unique
partitions. On each of these, we fit the model and then aggregate the
estimates. A second algorithm is a subsampling scheme in which we select $K <<
n$ observations without replacement and after B iterations aggregate the
estimates. In Monte Carlo simulations, we show how the partitioning algorithm
outperforms the latter [...] We illustrate the partitioning algorithm and the
interpretation of the probabilistic index model on a real data set (Przybylski
and Weinstein, 2017) of n = 116,630 where we compare it against the ordinary
least squares method. By modelling the probabilistic index, we give an
intuitive and meaningful quantification of the effect of the time adolescents
spend using digital devices such as smartphones on self-reported mental
well-being. We show how moderate usage is associated with an increased
probability of reporting a higher mental well-being compared to random
adolescents who do not use a smartphone. On the other hand, adolescents who
excessively use their smartphone are associated with a higher probability of
reporting a lower mental well-being than randomly chosen peers who do not use a
smartphone.[...]
| 0 | 0 | 0 | 1 | 0 | 0 |
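The partitioning idea described in the abstract above — split the data, fit on each partition, then aggregate the estimates to avoid the near-quadratic $n(n-1)/2$ pseudo-observation blow-up — can be illustrated on the simplest covariate-free probabilistic index $P(Y \leq Y^*)$ comparing two samples. This is only a sketch under our own assumptions (interleaved slicing as the partition scheme, data assumed to arrive in random order); the actual method fits a full regression model on each partition.

```python
def prob_index(y0, y1):
    """Empirical probabilistic index P(Y0 <= Y1): the fraction of all
    cross-sample pairs in which the first sample's value is <= the
    second's. Cost is quadratic in the sample sizes."""
    total = sum(a <= b for a in y0 for b in y1)
    return total / (len(y0) * len(y1))

def partitioned_prob_index(y0, y1, n_parts):
    """Partition both samples into n_parts disjoint interleaved slices,
    estimate the index on each partition, and average the estimates.
    This cuts the pairwise cost by roughly a factor of n_parts."""
    estimates = []
    for k in range(n_parts):
        p0 = y0[k::n_parts]  # every n_parts-th element, offset k
        p1 = y1[k::n_parts]
        estimates.append(prob_index(p0, p1))
    return sum(estimates) / n_parts
```

The subsampling alternative mentioned in the abstract would instead draw $K \ll n$ observations without replacement on each of $B$ iterations and aggregate those estimates.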
BICEP2 / Keck Array IX: New Bounds on Anisotropies of CMB Polarization Rotation and Implications for Axion-Like Particles and Primordial Magnetic Fields | We present the strongest constraints to date on anisotropies of CMB
polarization rotation derived from $150$ GHz data taken by the BICEP2 & Keck
Array CMB experiments up to and including the 2014 observing season (BK14). The
definition of polarization angle in BK14 maps has gone through self-calibration
in which the overall angle is adjusted to minimize the observed $TB$ and $EB$
power spectra. After this procedure, the $QU$ maps lose sensitivity to a
uniform polarization rotation but are still sensitive to anisotropies of
polarization rotation. This analysis places constraints on the anisotropies of
polarization rotation, which could be generated by CMB photons interacting with
axion-like pseudoscalar fields or Faraday rotation induced by primordial
magnetic fields. The sensitivity of BK14 maps ($\sim 3\mu$K-arcmin) makes it
possible to reconstruct anisotropies of polarization rotation angle and measure
their angular power spectrum much more precisely than previous attempts. Our
data are found to be consistent with no polarization rotation anisotropies,
improving the upper bound on the amplitude of the rotation angle spectrum by
roughly an order of magnitude compared to the previous best constraints. Our
results lead to an order of magnitude better constraint on the coupling
constant of the Chern-Simons electromagnetic term $f_a \geq 1.7\times
10^2\times (H_I/2\pi)$ ($2\sigma$) than the constraint derived from uniform
rotation, where $H_I$ is the inflationary Hubble scale. The upper bound on the
amplitude of the primordial magnetic fields is 30nG ($2\sigma$) from the
polarization rotation anisotropies.
| 0 | 1 | 0 | 0 | 0 | 0 |
Unsupervised Object Discovery and Segmentation of RGBD-images | In this paper we introduce a system for unsupervised object discovery and
segmentation of RGBD-images. The system models the sensor noise directly from
data, allowing accurate segmentation without sensor-specific hand-tuning of
measurement noise models, by making use of the recently introduced Statistical
Inlier Estimation (SIE) method. Through a fully probabilistic formulation, the
system is able to apply probabilistic inference, enabling reliable segmentation
in previously challenging scenarios. In addition, we introduce new methods for
filtering out false positives, significantly improving the signal to noise
ratio. We show that the system significantly outperforms the state of the art
on a challenging real-world dataset.
| 1 | 0 | 0 | 0 | 0 | 0 |
Enabling large-scale viscoelastic calculations via neural network acceleration | One of the most significant challenges involved in efforts to understand the
effects of repeated earthquake cycle activity are the computational costs of
large-scale viscoelastic earthquake cycle models. Computationally intensive
viscoelastic codes must be evaluated at thousands of times and locations, and as a
result, studies tend to adopt a few fixed rheological structures and model
geometries, and examine the predicted time-dependent deformation over short
(<10 yr) time periods at a given depth after a large earthquake. Training a
deep neural network to learn a computationally efficient representation of
viscoelastic solutions, at any time, location, and for a large range of
rheological structures, allows these calculations to be done quickly and
reliably, with high spatial and temporal resolution. We demonstrate that this
machine learning approach accelerates viscoelastic calculations by more than
50,000%. This magnitude of acceleration will enable the modeling of
geometrically complex faults over thousands of earthquake cycles across wider
ranges of model parameters and at larger spatial and temporal scales than have
been previously possible.
| 0 | 1 | 0 | 0 | 0 | 0 |
Speaker identification from the sound of the human breath | This paper examines the speaker identification potential of breath sounds in
continuous speech. Speech is largely produced during exhalation. In order to
replenish air in the lungs, speakers must periodically inhale. When inhalation
occurs in the midst of continuous speech, it is generally through the mouth.
Intra-speech breathing behavior has been the subject of much study, including
the patterns, cadence, and variations in energy levels. However, an often
ignored characteristic is the {\em sound} produced during the inhalation phase
of this cycle. Intra-speech inhalation is rapid and energetic, performed with
open mouth and glottis, effectively exposing the entire vocal tract to enable
maximum intake of air. This results in vocal tract resonances evoked by
turbulence that are characteristic of the speaker's speech-producing apparatus.
Consequently, the sounds of inhalation are expected to carry information about
the speaker's identity. Moreover, unlike other spoken sounds which are subject
to active control, inhalation sounds are generally more natural and less
affected by voluntary influences. The goal of this paper is to demonstrate that
breath sounds are indeed bio-signatures that can be used to identify speakers.
We show that these sounds by themselves can yield remarkably accurate speaker
recognition with appropriate feature representations and classification
frameworks.
| 1 | 0 | 0 | 1 | 0 | 0 |