title (string, 7–239 chars) | abstract (string, 7–2.76k chars) | cs (int64, 0–1) | phy (int64, 0–1) | math (int64, 0–1) | stat (int64, 0–1) | quantitative biology (int64, 0–1) | quantitative finance (int64, 0–1) |
---|---|---|---|---|---|---|---|
Robust Adversarial Reinforcement Learning | Deep neural networks coupled with fast simulation and improved computation
have led to recent successes in the field of reinforcement learning (RL).
However, most current RL-based approaches fail to generalize since: (a) the gap
between simulation and real world is so large that policy-learning approaches
fail to transfer; (b) even if policy learning is done in the real world, data
scarcity leads to failed generalization from training to test scenarios (e.g.,
due to different friction or object masses). Inspired by H-infinity control
methods, we note that both modeling errors and differences in training and test
scenarios can be viewed as extra forces/disturbances in the system. This paper
proposes the idea of robust adversarial reinforcement learning (RARL), where we
train an agent to operate in the presence of a destabilizing adversary that
applies disturbance forces to the system. The jointly trained adversary is
reinforced -- that is, it learns an optimal destabilization policy. We
formulate the policy learning as a zero-sum, minimax objective function.
Extensive experiments in multiple environments (InvertedPendulum, HalfCheetah,
Swimmer, Hopper and Walker2d) conclusively demonstrate that our method (a)
improves training stability; (b) is robust to differences in training/test
conditions; and (c) outperforms the baseline even in the absence of the
adversary.
| 1 | 0 | 0 | 0 | 0 | 0 |
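The zero-sum, alternating-update idea behind the abstract above can be sketched on a toy problem (a hedged illustration, not the authors' implementation: RARL trains neural-network policies with RL in physics simulators, whereas here both players are scalars with a hypothetical quadratic/bilinear payoff):

```python
# Toy sketch of the zero-sum minimax idea behind adversarial training.
# The protagonist maximizes its reward; the adversary applies a
# disturbance y that degrades it, regularized so its response stays bounded.

def protagonist_reward(x, y):
    return -(x - 1.0) ** 2 - x * y

def train(steps=2000, lr=0.01):
    x, y = 0.0, 0.0
    for _ in range(steps):
        # Protagonist: gradient ascent on its own reward.
        x += lr * (-2.0 * (x - 1.0) - y)
        # Adversary: gradient ascent on the negated reward minus 0.05*y^2.
        y += lr * (x - 0.1 * y)
    return x, y
```

At the equilibrium of this toy game (x = 1/6, y = 5/3), the protagonist settles on a policy that remains sensible under the worst regularized disturbance, which is the intuition behind training against a reinforced adversary.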
Dynamic behaviour of Multilamellar Vesicles under Poiseuille flow | Surfactant solutions exhibit multilamellar surfactant vesicles (MLVs) under
flow conditions and in concentration ranges which are found in a large number
of industrial applications. MLVs are typically formed from a lamellar phase and
play an important role in determining the rheological properties of surfactant
solutions. Despite the wide literature on the collective dynamics of flowing
MLVs, investigations on the flow behavior of single MLVs are scarce. In this
work, we investigate a concentrated aqueous solution of linear alkylbenzene
sulfonic acid (HLAS), characterized by MLVs dispersed in an isotropic micellar
phase. Rheological tests show that the HLAS solution is a shear-thinning fluid
with a power law index dependent on the shear rate. Pressure-driven shear flow
of the HLAS solution in glass capillaries is investigated by high-speed video
microscopy and image analysis. The velocity profiles thus obtained provide
evidence of a power-law fluid behaviour of the HLAS solution and images show a
flow-focusing effect of the lamellar phase in the central core of the
capillary. The flow behavior of individual MLVs shows analogies with that of
unilamellar vesicles and emulsion droplets. Deformed MLVs exhibit typical
shapes of unilamellar vesicles, such as parachute and bullet-like. Furthermore,
MLV velocity follows the classical Hetsroni theory for droplets provided that
the power law shear dependent viscosity of the HLAS solution is taken into
account. The results of this work are relevant for the processing of
surfactant-based systems in which the final properties depend on flow-induced
morphology, such as cosmetic formulations and food products.
| 0 | 1 | 0 | 0 | 0 | 0 |
Approximate fixed points and B-amenable groups | A topological group $G$ is B-amenable if and only if every continuous affine
action of $G$ on a bounded convex subset of a locally convex space has an
approximate fixed point. Similar results hold more generally for slightly
uniformly continuous semigroup actions.
| 0 | 0 | 1 | 0 | 0 | 0 |
Radiation hardness of small-pitch 3D pixel sensors up to HL-LHC fluences | A new generation of 3D silicon pixel detectors with a small pixel size of
50$\times$50 and 25$\times$100 $\mu$m$^{2}$ is being developed for the HL-LHC
tracker upgrades. The radiation hardness of such detectors was studied in beam
tests after irradiation to HL-LHC fluences up to $1.4\times10^{16}$
n$_{\mathrm{eq}}$/cm$^2$. At this fluence, an operation voltage of only 100 V
is needed to achieve 97% hit efficiency, with a power dissipation of 13
mW/cm$^2$ at -25$^{\circ}$C, considerably lower than for previous 3D sensor
generations and planar sensors.
| 0 | 1 | 0 | 0 | 0 | 0 |
Simulating Cosmic Microwave Background anisotropy measurements for Microwave Kinetic Inductance Devices | Microwave Kinetic Inductance Devices (MKIDs) are poised to allow for
massively and natively multiplexed photon detector arrays and are a natural
choice for the next-generation CMB-Stage 4 experiment, which will require
$10^5$ detectors. In this proceeding we discuss what the noise performance of
present-generation MKIDs implies for CMB measurements. We consider MKID noise
spectra and simulate a telescope scan strategy which projects the detector
noise onto the CMB sky. We then analyze the simulated CMB + MKID noise to
understand how low-frequency noise in particular affects the various features
of the CMB, and thus set up a framework connecting MKID characteristics and
scan strategies to the type of CMB signals we may probe with such detectors.
| 0 | 1 | 0 | 0 | 0 | 0 |
Optimized State Space Grids for Abstractions | The practical impact of abstraction-based controller synthesis methods is
currently limited by the immense computational effort for obtaining
abstractions. In this note we focus on a recently proposed method to compute
abstractions whose state space is a cover of the state space of the plant by
congruent hyper-intervals. The problem of how to choose the size of the
hyper-intervals so as to obtain computable and useful abstractions is unsolved.
This note provides a twofold contribution towards a solution. Firstly, we
present a functional to predict the computational effort for the abstraction to
be computed. Secondly, we propose a method for choosing the aspect ratio of the
hyper-intervals when their volume is fixed. More precisely, we propose to
choose the aspect ratio so as to minimize a predicted number of transitions of
the abstraction to be computed, in order to reduce the computational effort. To
this end, we derive a functional that predicts the number of transitions as a
function of the aspect ratio. The functional is to be minimized subject to
suitable constraints. We characterize the unique solvability of the respective
optimization problem and prove that it transforms, under appropriate
assumptions, into an equivalent convex problem with strictly convex objective.
The latter problem can then be globally solved using standard numerical
methods. We demonstrate our approach on an example.
| 1 | 0 | 0 | 0 | 0 | 0 |
Dictionary Learning and Sparse Coding-based Denoising for High-Resolution Task Functional Connectivity MRI Analysis | We propose a novel denoising framework for task functional Magnetic Resonance
Imaging (tfMRI) data to delineate the high-resolution spatial pattern of the
brain functional connectivity via dictionary learning and sparse coding (DLSC).
In order to address the limitations of the unsupervised DLSC-based fMRI
studies, we utilize the prior knowledge of task paradigm in the learning step
to train a data-driven dictionary and to model the sparse representation. We
apply the proposed DLSC-based method to Human Connectome Project (HCP) motor
tfMRI dataset. Studies on the functional connectivity of cerebrocerebellar
circuits in somatomotor networks show that the DLSC-based denoising framework
significantly improves the prominent connectivity patterns, compared with both
the temporal non-local means (tNLM)-based denoising method and the case
without denoising; the improved patterns are consistent and
neuroscientifically meaningful within the motor area. These promising results
show that the proposed method can provide an important foundation for
high-resolution functional connectivity analysis and a better approach to
fMRI preprocessing.
| 1 | 0 | 0 | 1 | 0 | 0 |
Optimal control of a Vlasov-Poisson plasma by an external magnetic field - The basics for variational calculus | We consider the three dimensional Vlasov-Poisson system that is equipped with
an external magnetic field to describe a plasma. The aim of various concrete
applications is to control a plasma in a desired fashion. This can be modeled
by an optimal control problem. For that reason the basics for calculus of
variations will be introduced in this paper. We have to find a suitable class
of fields that are admissible for this procedure as they provide unique global
solutions of the Vlasov-Poisson system. Then we can define a field-state
operator that maps any admissible field onto its corresponding distribution
function. We will show that this field-state operator is Lipschitz continuous
and (weakly) compact. Lastly, we consider a model problem with a tracking-type
cost functional and show that this optimal control problem has at
least one globally optimal solution.
| 0 | 0 | 1 | 0 | 0 | 0 |
Lensless Photography with only an image sensor | Photography usually requires optics in conjunction with a recording device
(an image sensor). Eliminating the optics could lead to new form factors for
cameras. Here, we report a simple demonstration of imaging using a bare CMOS
sensor coupled with computation. The technique relies on the space-variant
point-spread functions resulting from the interaction of a point source in the
field of view with the image sensor. These space-variant point-spread functions
are combined with a reconstruction algorithm in order to image simple objects
displayed on a discrete LED array as well as on an LCD screen. We extended the
approach to video imaging at the native frame rate of the sensor. Finally, we
performed experiments to analyze the parametric impact of the object distance.
Improving the sensor designs and reconstruction algorithms can lead to useful
cameras without optics.
| 1 | 1 | 0 | 0 | 0 | 0 |
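The reconstruction step described in the abstract above can be sketched as a linear inverse problem (a minimal illustration under the assumption that the calibrated space-variant point-spread functions are stacked as columns of a matrix; the paper's actual algorithm and any regularization are not specified here):

```python
import numpy as np

def reconstruct(measurement, psf_matrix, rcond=None):
    # Columns of psf_matrix hold the calibrated, space-variant
    # point-spread functions (the sensor's response to each point source);
    # the measurement is modeled as their weighted sum, so the scene
    # weights are recovered by linear least squares.
    scene, *_ = np.linalg.lstsq(psf_matrix, measurement, rcond=rcond)
    return scene
```

With a well-conditioned PSF matrix this recovers the scene exactly; in practice noise and ill-conditioning would call for regularized solvers.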
Truth-Telling Mechanism for Secure Two-Way Relay Communications with Energy-Harvesting Revenue | This paper introduces the novel idea of paying utility to the winning agents
in terms of some physical entity in cooperative communications. Our setting is
a secret two-way communication channel where two transmitters exchange
information in the presence of an eavesdropper. The relays are selected from a
set of interested parties such that the secrecy sum rate is maximized. In
return, the selected relay nodes' energy harvesting requirements will be
fulfilled up to a certain threshold through their own payoff so that they have
the natural incentive to be selected and involved in the communication.
However, relays may exaggerate their private information in order to improve
their chance of being selected. Our objective is to develop a mechanism for
relay selection that compels the relays to reveal the truth, since otherwise
they may be
penalized. We also propose a joint cooperative relay beamforming and transmit
power optimization scheme based on an alternating optimization approach. Note
that the problem is highly non-convex since the objective function appears as a
product of three correlated Rayleigh quotients. While a common practice in the
existing literature is to optimize the relay beamforming vector for given
transmit power via rank relaxation, we propose a second-order cone programming
(SOCP)-based approach in this paper which incurs a significantly lower
computational cost. The performance of the incentive control mechanism and the
optimization algorithm has been evaluated through numerical simulations.
| 1 | 0 | 0 | 0 | 0 | 0 |
Prediction of Individual Outcomes for Asthma Sufferers | We consider the problem of individual-specific medication level
recommendation (initiation, removal, increase, or decrease) for asthma
sufferers. Asthma is one of the most common chronic diseases in both adults and
children, affecting 8% of the US population and costing $37-63 billion/year in
the US. Asthma is a complex disease, whose symptoms may wax and wane, making it
difficult for clinicians to predict outcomes and prognosis. Improved ability to
predict prognosis can inform decision making and may promote conversations
between clinician and provider around optimizing medication therapy. Data from
the US Medical Expenditure Panel Survey (MEPS) years 2000-2010 were used to fit
a longitudinal model for a multivariate response of adverse events (Emergency
Department or In-patient visits, excessive rescue inhaler use, and oral steroid
use). To reduce bias in the estimation of medication effects, medication level
was treated as a latent process which was restricted to be consistent with
prescription refill data. This approach is demonstrated to be effective in the
MEPS cohort via predictions on a validation hold out set and a synthetic data
simulation study. This framework can be easily generalized to medication
decisions for other conditions as well.
| 0 | 0 | 0 | 1 | 0 | 0 |
Bayesian Probabilistic Numerical Methods | The emergent field of probabilistic numerics has thus far lacked clear
statistical principles. This paper establishes Bayesian probabilistic numerical
methods as those which can be cast as solutions to certain inverse problems
within the Bayesian framework. This allows us to establish general conditions
under which Bayesian probabilistic numerical methods are well-defined,
encompassing both non-linear and non-Gaussian models. For general computation,
a numerical approximation scheme is proposed and its asymptotic convergence
established. The theoretical development is then extended to pipelines of
computation, wherein probabilistic numerical methods are composed to solve more
challenging numerical tasks. The contribution highlights an important research
frontier at the interface of numerical analysis and uncertainty quantification,
with a challenging industrial application presented.
| 1 | 0 | 1 | 1 | 0 | 0 |
The evolution of the temperature field during cavity collapse in liquid nitromethane. Part II: Reactive case | We study the effect of cavity collapse in non-ideal explosives as a means of
controlling their sensitivity. The main aim is to understand the origin of
localised temperature peaks (hot spots) that play a leading order role at early
ignition stages. Thus, we perform 2D and 3D numerical simulations of
shock-induced single gas-cavity collapse in nitromethane. Ignition is the result of a
complex interplay between fluid dynamics and exothermic chemical reaction. In
part I of this work we focused on the hydrodynamic effects in the collapse
process by switching off the reaction terms in the mathematical model. Here, we
reinstate the reactive terms and study the collapse of the cavity in the
presence of chemical reactions. We use a multi-phase formulation which
overcomes current challenges of cavity collapse modelling in reactive media to
obtain oscillation-free temperature fields across material interfaces to allow
the use of a temperature-based reaction rate law. The mathematical and physical
models are validated against experimental and analytic data. We identify which
of the previously-determined (in part I of this work) high-temperature regions
lead to ignition and comment on their reactive strength and reaction growth
rate. We quantify the sensitisation of nitromethane by the collapse of the
cavity by comparing ignition times of neat and single-cavity material; the
ignition occurs in less than half the ignition time of the neat material. We
compare 2D and 3D simulations to examine the change in topology, temperature
and reactive strength of the hot spots introduced by the third dimension. It
is apparent that belated ignition times can be avoided by the use of 3D
simulations. The effect of the chemical reactions on the topology and strength
of the hot spots in the timescales considered is studied by comparing inert
and reactive simulations and by examining maximum temperature fields and their
growth rates.
| 0 | 1 | 0 | 0 | 0 | 0 |
Quantile Regression for Qualifying Match of GEFCom2017 Probabilistic Load Forecasting | We present a simple quantile regression-based forecasting method that was
applied in a probabilistic load forecasting framework of the Global Energy
Forecasting Competition 2017 (GEFCom2017). The hourly load data is log
transformed and split into a long-term trend component and a remainder term.
The key forecasting element is the quantile regression approach for the
remainder term, which takes into account weekly and annual seasonalities as
well as their interactions. Temperature information is only used to stabilize the
forecast of the long-term trend component. Public holidays information is
ignored. Still, the forecasting method placed second in the open data track
and fourth in the definite data track, which is remarkable given the
simplicity of the model. The method also outperforms the
Vanilla benchmark consistently.
| 0 | 0 | 0 | 1 | 0 | 0 |
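As a generic illustration of the quantile-regression building block used above (not the competitors' code; here we fit only a constant by subgradient descent on the pinball loss, whose minimizer is the empirical quantile):

```python
def fit_quantile(ys, q, steps=20000, lr=0.1):
    # Subgradient descent on the pinball (quantile) loss for a constant theta:
    #   loss = sum over y of  q * (y - theta)        if y > theta
    #                         (1 - q) * (theta - y)  otherwise.
    # Its minimizer is the empirical q-quantile of ys.
    theta = 0.0
    for _ in range(steps):
        # Average subgradient: -q for points above theta, (1-q) for the rest.
        g = sum(-q if y > theta else (1.0 - q) for y in ys) / len(ys)
        theta -= lr * g
    return theta
```

The asymmetric weighting is what lets quantile regression produce a full probabilistic (per-quantile) forecast rather than a single point estimate.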
Strong Local Nondeterminism of Spherical Fractional Brownian Motion | Let $B = \left\{ B\left( x\right),\, x\in \mathbb{S}^{2}\right\} $ be the
fractional Brownian motion indexed by the unit sphere $\mathbb{S}^{2}$ with
index $0<H\leq \frac{1}{2}$, introduced by Istas \cite{IstasECP05}. We
establish optimal estimates for its angular power spectrum $\{d_\ell, \ell = 0,
1, 2, \ldots\}$, and then exploit its high-frequency behavior to establish the
strong local nondeterminism of $B$.
| 0 | 0 | 1 | 1 | 0 | 0 |
Sparse Named Entity Classification using Factorization Machines | Named entity classification is the task of classifying text-based elements
into various categories, including places, names, dates, times, and monetary
values. A bottleneck in named entity classification, however, is data
sparseness: new named entities continually emerge, making it
rather difficult to maintain a dictionary for named entity classification.
Thus, in this paper, we address the problem of named entity classification
using matrix factorization to overcome the problem of feature sparsity.
Experimental results show that our proposed model, with fewer features and a
smaller size, achieves competitive accuracy to state-of-the-art models.
| 1 | 0 | 0 | 0 | 0 | 0 |
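For context on the factorization-machine model named in the title above, a minimal sketch of the standard second-order FM score (a textbook formulation, not the paper's classifier):

```python
import numpy as np

def fm_score(x, w0, w, V):
    # Second-order factorization machine: pairwise interactions
    # <v_i, v_j> * x_i * x_j are parametrized by factor vectors (rows of
    # V, shape (n, k)) and computed in O(n*k) via the identity
    #   sum_{i<j} <v_i, v_j> x_i x_j
    #   = 0.5 * sum_f [ (sum_i V[i,f] x_i)^2 - sum_i V[i,f]^2 x_i^2 ].
    s = V.T @ x
    s2 = (V ** 2).T @ (x ** 2)
    return w0 + w @ x + 0.5 * float(np.sum(s ** 2 - s2))
```

The factorized interaction weights are what let such models score feature pairs never observed together, which is the usual remedy for sparseness.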
Exploration of Large Networks with Covariates via Fast and Universal Latent Space Model Fitting | Latent space models are effective tools for statistical modeling and
exploration of network data. These models can effectively model real world
network characteristics such as degree heterogeneity, transitivity, homophily,
etc. Due to their close connection to generalized linear models, it is also
natural to incorporate covariate information in them. The current paper
presents two universal fitting algorithms for networks with edge covariates:
one based on nuclear norm penalization and the other based on projected
gradient descent. Both algorithms are motivated by maximizing likelihood for a
special class of inner-product models while working simultaneously for a wide
range of different latent space models, such as distance models, which allow
latent vectors to affect edge formation in flexible ways. These fitting
methods, especially the one based on projected gradient descent, are fast and
scalable to large networks. We obtain their rates of convergence for both
inner-product models and beyond. The effectiveness of the modeling approach and
fitting algorithms is demonstrated on five real world network datasets for
different statistical tasks, including community detection with and without
edge covariates, and network assisted learning.
| 1 | 0 | 0 | 1 | 0 | 0 |
Distant Supervision for Topic Classification of Tweets in Curated Streams | We tackle the challenge of topic classification of tweets in the context of
analyzing a large collection of curated streams by news outlets and other
organizations to deliver relevant content to users. Our approach is novel in
applying distant supervision based on semi-automatically identifying curated
streams that are topically focused (for example, on politics, entertainment, or
sports). These streams provide a source of labeled data to train topic
classifiers that can then be applied to categorize tweets from more
topically-diffuse streams. Experiments on both noisy labels and human
ground-truth judgments demonstrate that our approach yields good topic
classifiers essentially "for free", and that topic classifiers trained in this
manner are able to dynamically adjust for topic drift as news on Twitter
evolves.
| 1 | 0 | 0 | 0 | 0 | 0 |
Relationship Maintenance in Software Language Repositories | The context of this research is testing and building software systems and,
specifically, software language repositories (SLRs), i.e., repositories with
components for language processing (interpreters, translators, analyzers,
transformers, pretty printers, etc.). SLRs are typically set up for developing
and using metaprogramming systems, language workbenches, language definition
frameworks, executable semantic frameworks, and modeling frameworks. This work
is an inquiry into testing and building SLRs in a manner that the repository is
seen as a collection of language-typed artifacts being related by the
applications of language-typed functions or relations which serve language
processing. The notion of language is used in a broad sense to include text-,
tree-, graph-based languages as well as representations based on interchange
formats and also proprietary formats for serialization. The overall approach
underlying this research is one of language design driven by a complex case
study, i.e., a specific SLR with a significant number of processed languages
and language processors as well as a noteworthy heterogeneity in terms of
representation types and implementation languages. The knowledge gained by our
research is best understood as a declarative language design for regression
testing and build management; we introduce a corresponding language, Ueber,
with an executable semantics which maintains relationships between language-typed
artifacts in an SLR. The grounding of the reported research is based on the
comprehensive, formal, executable (logic programming-based) definition of the
Ueber language and its systematic application to the management of the SLR YAS
which consists of hundreds of language definition and processing components
(such as interpreters and transformations) for more than thirty languages (not
counting different representation types) with Prolog, Haskell, Java, and Python
being used as implementation languages. The importance of this work follows
from the significant costs implied by regression testing and build management
and also from the complexity of SLRs which calls for means to help with
understanding.
| 1 | 0 | 0 | 0 | 0 | 0 |
The Emptiness Problem for Valence Automata over Graph Monoids | This work studies which storage mechanisms in automata permit decidability of
the emptiness problem. The question is formalized using valence automata, an
abstract model of automata in which the storage mechanism is given by a monoid.
For each of a variety of storage mechanisms, one can choose a (typically
infinite) monoid $M$ such that valence automata over $M$ are equivalent to
(one-way) automata with this type of storage. In fact, many important storage
mechanisms can be realized by monoids defined by finite graphs, called graph
monoids. Examples include pushdown stacks, partially blind counters (which
behave like Petri net places), blind counters (which may attain negative
values), and combinations thereof.
Hence, we study for which graph monoids the emptiness problem for valence
automata is decidable. A particular model realized by graph monoids is that of
Petri nets with a pushdown stack. For these, decidability is a long-standing
open question and we do not answer it here.
However, if one excludes subgraphs corresponding to this model, a
characterization can be achieved. Moreover, we provide a description of those
storage mechanisms for which decidability remains open. This leads to a model
that naturally generalizes both pushdown Petri nets and the priority
multicounter machines introduced by Reinhardt.
The cases that are proven decidable constitute a natural and apparently new
extension of Petri nets with decidable reachability. It is finally shown that
this model can be combined with another such extension by Atig and Ganty: We
present a further decidability result that subsumes both of these Petri net
extensions.
| 1 | 0 | 1 | 0 | 0 | 0 |
On the Optimization Landscape of Tensor Decompositions | Non-convex optimization with local search heuristics has been widely used in
machine learning, achieving many state-of-the-art results. It becomes increasingly
important to understand why they can work for these NP-hard problems on typical
data. The landscape of many objective functions in learning has been
conjectured to have the geometric property that "all local optima are
(approximately) global optima", and thus they can be solved efficiently by
local search algorithms. However, establishing such property can be very
difficult.
In this paper, we analyze the optimization landscape of the random
over-complete tensor decomposition problem, which has many applications in
unsupervised learning, especially in learning latent variable models. In
practice, it can be efficiently solved by gradient ascent on a non-convex
objective. We show that for any small constant $\epsilon > 0$, among the set of
points with function values $(1+\epsilon)$-factor larger than the expectation
of the function, all the local maxima are approximate global maxima.
Previously, the best-known result only characterizes the geometry in small
neighborhoods around the true components. Our result implies that even with an
initialization that is barely better than the random guess, the gradient ascent
algorithm is guaranteed to solve this problem.
Our main technique uses the Kac-Rice formula and random matrix theory. To the
best of our knowledge, this is the first time the Kac-Rice formula has been
successfully applied to counting the number of local minima of a highly
structured random polynomial with dependent coefficients.
| 1 | 0 | 1 | 1 | 0 | 0 |
Estimate of Joule Heating in a Flat Dechirper | We have performed Joule power loss calculations for a flat dechirper. We have
considered the configurations of the beam on-axis between the two plates---for
chirp control---and for the beam especially close to one plate---for use as a
fast kicker. Our calculations use a surface impedance approach, one that is
valid when corrugation parameters are small compared to aperture (the
perturbative parameter regime). In our model we ignore effects of field
reflections at the sides of the dechirper plates, and thus expect the results
to underestimate the Joule losses. The analytical results were also tested by
numerical, time-domain simulations. We find that most of the wake power lost by
the beam is radiated out to the sides of the plates. For the case of the beam
passing by a single plate, we derive an analytical expression for the
broad-band impedance, and---in Appendix B---numerically confirm recently
developed, analytical formulas for the short-range wakes. While our theory can
be applied to the LCLS-II dechirper with large gaps, for the nominal apertures
we are not in the perturbative regime and the reflection contribution to Joule
losses is not negligible. With input from computer simulations, we estimate the
Joule power loss (assuming bunch charge of 300 pC, repetition rate of 100 kHz)
is 21 W/m for the case of two plates, and 24 W/m for the case of a single
plate.
| 0 | 1 | 0 | 0 | 0 | 0 |
Accurate Kernel Learning for Linear Gaussian Markov Processes using a Scalable Likelihood Computation | We report an exact likelihood computation for Linear Gaussian Markov
processes that is more scalable than existing algorithms for complex models and
sparsely sampled signals. Better scaling is achieved through elimination of
repeated computations in the Kalman likelihood, and by using the diagonalized
form of the state transition equation. Using this efficient computation, we
study the accuracy of kernel learning using maximum likelihood and the
posterior mean in a simulation experiment. The posterior mean with a reference
prior is more accurate for complex models and sparse sampling. Because of its
lower computation load, the maximum likelihood estimator is an attractive
option for more densely sampled signals and lower-order models. We confirm the
estimators' behavior on experimental data through their application to
speleothem data.
| 0 | 0 | 0 | 1 | 0 | 0 |
Near-Optimal Discrete Optimization for Experimental Design: A Regret Minimization Approach | The experimental design problem concerns the selection of k points from a
potentially large design pool of p-dimensional vectors, so as to maximize the
statistical efficiency regressed on the selected k design points. Statistical
efficiency is measured by optimality criteria, including A(verage),
D(eterminant), T(race), E(igen), V(ariance) and G-optimality. Except for the
T-optimality, exact optimization is NP-hard.
We propose a polynomial-time regret minimization framework to achieve a
$(1+\varepsilon)$ approximation with only $O(p/\varepsilon^2)$ design points,
for all the optimality criteria above.
In contrast, to the best of our knowledge, before our work, no
polynomial-time algorithm achieves $(1+\varepsilon)$ approximations for
D/E/G-optimality, and the best poly-time algorithm achieving
$(1+\varepsilon)$-approximation for A/V-optimality requires $k =
\Omega(p^2/\varepsilon)$ design points.
| 1 | 0 | 0 | 1 | 0 | 0 |
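For context on what the optimality criteria above measure, here is a hedged sketch that evaluates D-optimality inside a simple greedy heuristic (a classical baseline, not the paper's regret-minimization algorithm):

```python
import numpy as np

def greedy_d_optimal(pool, k, ridge=1e-6):
    # Greedy heuristic for the (NP-hard) D-optimality criterion: at each
    # step, add the candidate point that maximizes
    #   log det(X_S^T X_S + ridge * I),
    # i.e., the determinant of the information matrix of the selected
    # design. A small ridge keeps early iterations nonsingular.
    n, p = pool.shape
    selected = []
    M = ridge * np.eye(p)
    for _ in range(k):
        best_i, best_val = None, -np.inf
        for i in range(n):
            if i in selected:
                continue
            _, logdet = np.linalg.slogdet(M + np.outer(pool[i], pool[i]))
            if logdet > best_val:
                best_i, best_val = i, logdet
        selected.append(best_i)
        M += np.outer(pool[best_i], pool[best_i])
    return selected
```

Greedy selection carries no $(1+\varepsilon)$ guarantee for D-optimality, which is precisely the gap the paper's regret-minimization framework addresses.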
Positive semi-definite embedding for dimensionality reduction and out-of-sample extensions | In machine learning or statistics, it is often desirable to reduce the
dimensionality of high dimensional data. We propose to obtain the low
dimensional embedding coordinates as the eigenvectors of a positive
semi-definite kernel matrix. This kernel matrix is the solution of a
semi-definite program promoting a low rank solution and defined with the help
of a diffusion kernel. We also discuss an infinite-dimensional analogue of
the same semi-definite program. From a practical perspective, a
main feature of our approach is the existence of a non-linear out-of-sample
extension formula of the embedding coordinates that we call a projected
Nyström approximation. This extension formula yields an extension of the
kernel matrix to a data-dependent Mercer kernel function. Although the
semi-definite program may be solved directly, we propose another strategy based
on a rank constrained formulation solved thanks to a projected power method
algorithm followed by a singular value decomposition. This strategy allows for
a reduced computational time.
| 1 | 0 | 0 | 1 | 0 | 0 |
Bounded Projective Functions and Hyperbolic Metrics with Isolated Singularities | We establish a correspondence on a Riemann surface between hyperbolic metrics
with isolated singularities and bounded projective functions whose Schwarzian
derivatives have at most double poles and whose monodromies lie in ${\rm
PSU}(1,\,1)$. As an application, we construct explicitly a new class of
hyperbolic metrics with countably many singularities on the unit disc.
| 0 | 0 | 1 | 0 | 0 | 0 |
A State-Space Approach to Dynamic Nonnegative Matrix Factorization | Nonnegative matrix factorization (NMF) has been actively investigated and
used in a wide range of problems in the past decade. A significant amount of
attention has been given to develop NMF algorithms that are suitable to model
time series with strong temporal dependencies. In this paper, we propose a
novel state-space approach to perform dynamic NMF (D-NMF). In the proposed
probabilistic framework, the NMF coefficients act as the state variables and
their dynamics are modeled using a multi-lag nonnegative vector autoregressive
(N-VAR) model within the process equation. We use expectation maximization and
propose a maximum-likelihood estimation framework to estimate the basis matrix
and the N-VAR model parameters. Interestingly, the N-VAR model parameters are
obtained by simply applying NMF. Moreover, we derive a maximum a posteriori
estimate of the state variables (i.e., the NMF coefficients) that is based on a
prediction step and an update step, similarly to the Kalman filter. We
illustrate the benefits of the proposed approach using different numerical
simulations where D-NMF significantly outperforms its static counterpart.
Experimental results for three different applications show that the proposed
approach outperforms two state-of-the-art NMF approaches that exploit temporal
dependencies, namely a nonnegative hidden Markov model and a frame stacking
approach, while it requires less memory and computational power.
| 1 | 0 | 0 | 1 | 0 | 0 |
Active Inductive Logic Programming for Code Search | Modern search techniques either cannot efficiently incorporate human feedback
to refine search results or express structural or semantic properties of
desired code. The key insight of our interactive code search technique ALICE is
that user feedback could be actively incorporated to allow users to easily
express and refine search queries. We design a query language to model the
structure and semantics of code as logic facts. Given a code example with user
annotations, ALICE automatically extracts a logic query from features that are
tagged as important. Users can refine the search query by labeling one or more
examples as desired (positive) or irrelevant (negative). ALICE then infers a
new logic query that separates the positives from negative examples via active
inductive logic programming. Our comprehensive and systematic simulation
experiment shows that ALICE removes a large number of false positives quickly
by actively incorporating user feedback. Its search algorithm is also robust to
noise and user labeling mistakes. Our choice of leveraging both positive and
negative examples and the nested containment structure of selected code is
effective in refining search queries. Compared with an existing technique,
Critics, ALICE does not require a user to manually construct a search pattern
and yet achieves comparable precision and recall with fewer search iterations
on average. A case study with users shows that ALICE is easy to use and helps
express complex code patterns.
| 1 | 0 | 0 | 0 | 0 | 0 |
Dense families of modular curves, prime numbers and uniform symmetric tensor rank of multiplication in certain finite fields | We obtain new uniform bounds for the symmetric tensor rank of multiplication
in finite extensions of any finite field Fp or Fp2 where p denotes a prime
number greater than or equal to 5. To this end, we use the symmetric
Chudnovsky-type generalized algorithm applied on sufficiently dense families of
modular curves defined over Fp2 attaining the Drinfeld-Vladuts bound and on the
descent of these families to the definition field Fp. These families are
obtained thanks to prime number density theorems of type Hoheisel, in
particular a result due to Dudek (2016).
| 0 | 0 | 1 | 0 | 0 | 0 |
Continued fraction algorithms and Lagrange's theorem in ${\mathbb Q}_p$ | We present several continued fraction algorithms, each of which gives an
eventually periodic expansion for every quadratic element of ${\mathbb Q}_p$
over ${\mathbb Q}$ and gives a finite expansion for every rational number. We
also give, for each of our algorithms, the complete characterization of
elements having purely periodic expansions.
| 0 | 0 | 1 | 0 | 0 | 0 |
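The finite-expansion property for rationals mirrors the classical Euclidean continued fraction algorithm over the reals. A minimal sketch of that classical algorithm (ordinary real continued fractions, not the $p$-adic algorithms of the paper) is:

```python
from fractions import Fraction

def continued_fraction(q: Fraction) -> list[int]:
    """Classical continued fraction expansion of a rational number.

    Rational inputs always terminate, since the underlying Euclidean
    algorithm terminates -- the analogue of the finite-expansion
    property for rationals discussed in the abstract.
    """
    terms = []
    while True:
        a = q.numerator // q.denominator  # floor of q
        terms.append(a)
        frac = q - a
        if frac == 0:
            return terms
        q = 1 / frac  # invert the fractional part and repeat

print(continued_fraction(Fraction(355, 113)))  # [3, 7, 16]
```

Here `continued_fraction` is an illustrative helper, not code from the paper.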
Scaling and bias codes for modeling speaker-adaptive DNN-based speech synthesis systems | Most neural-network based speaker-adaptive acoustic models for speech
synthesis can be categorized into either layer-based or input-code approaches.
Although both approaches have their own pros and cons, most existing works on
speaker adaptation focus on improving one or the other. In this paper, after we
first systematically review the common principles of neural-network based
speaker-adaptive models, we show that these approaches can be represented in a
unified framework and can be generalized further. More specifically, we
introduce the use of scaling and bias codes as generalized means for
speaker-adaptive transformation. By utilizing these codes, we can create a more
efficient factorized speaker-adaptive model and capture advantages of both
approaches while reducing their disadvantages. The experiments show that the
proposed method can improve the performance of speaker adaptation compared with
speaker adaptation based on the conventional input code.
| 1 | 0 | 0 | 1 | 0 | 0 |
Bayesian Network Regularized Regression for Modeling Urban Crime Occurrences | This paper considers the problem of statistical inference and prediction for
processes defined on networks. We assume that the network is known and measures
similarity, and our goal is to learn about an attribute associated with its
vertices. Classical regression methods are not immediately applicable to this
setting, as we would like our model to incorporate information from both
network structure and pertinent covariates. Our proposed model consists of a
generalized linear model with vertex indexed predictors and a basis expansion
of their coefficients, allowing the coefficients to vary over the network. We
employ a regularization procedure, cast as a prior distribution on the
regression coefficients under a Bayesian setup, so that the predicted responses
vary smoothly according to the topology of the network. We motivate the need
for this model by examining occurrences of residential burglary in Boston,
Massachusetts. Noting that crime rates are not spatially homogeneous, and that
the rates appear to vary sharply across regions in the city, we construct a
hierarchical model that addresses these issues and gives insight into spatial
patterns of crime occurrences. Furthermore, we examine efficient
expectation-maximization fitting algorithms and provide
computationally-friendly methods for eliciting hyper-prior parameters.
| 0 | 0 | 0 | 1 | 0 | 0 |
Similarity forces and recurrent components in human face-to-face interaction networks | We show that the social dynamics responsible for the formation of connected
components that appear recurrently in face-to-face interaction networks, find a
natural explanation in the assumption that the agents of the temporal network
reside in a hidden similarity space. Distances between the agents in this space
act as similarity forces directing their motion towards other agents in the
physical space and determining the duration of their interactions. By contrast,
if such forces are ignored in the motion of the agents, recurrent components do
not form, although other main properties of such networks can still be
reproduced.
| 1 | 0 | 0 | 0 | 0 | 0 |
CELLO-3D: Estimating the Covariance of ICP in the Real World | The fusion of Iterative Closest Point (ICP) reg- istrations in existing state
estimation frameworks relies on an accurate estimation of their uncertainty. In
this paper, we study the estimation of this uncertainty in the form of a
covariance. First, we scrutinize the limitations of existing closed-form
covariance estimation algorithms over 3D datasets. Then, we set out to estimate
the covariance of ICP registrations through a data-driven approach, with over 5
100 000 registrations on 1020 pairs from real 3D point clouds. We assess our
solution upon a wide spectrum of environments, ranging from structured to
unstructured and indoor to outdoor. The capacity of our algorithm to predict
covariances is accurately assessed, as well as the usefulness of these
estimations for uncertainty estimation over trajectories. The proposed method
estimates covariances better than existing closed-form solutions, and makes
predictions that are consistent with observed trajectories.
| 1 | 0 | 0 | 0 | 0 | 0 |
Projection Theorems Using Effective Dimension | In this paper we use the theory of computing to study fractal dimensions of
projections in Euclidean spaces. A fundamental result in fractal geometry is
Marstrand's projection theorem, which shows that for every analytic set E, for
almost every line L, the Hausdorff dimension of the orthogonal projection of E
onto L is maximal. We use Kolmogorov complexity to give two new results on the
Hausdorff and packing dimensions of orthogonal projections onto lines. The
first shows that the conclusion of Marstrand's theorem holds whenever the
Hausdorff and packing dimensions agree on the set E, even if E is not analytic.
Our second result gives a lower bound on the packing dimension of projections
of arbitrary sets. Finally, we give a new proof of Marstrand's theorem using
the theory of computing.
| 1 | 0 | 1 | 0 | 0 | 0 |
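For reference, the planar case of Marstrand's projection theorem invoked in the abstract can be stated as follows (a standard textbook formulation; the notation $\mathrm{proj}_{\theta}$ is mine):

```latex
% Marstrand's projection theorem (planar case).
% proj_\theta denotes orthogonal projection onto the line through
% the origin at angle \theta.
\[
  E \subseteq \mathbb{R}^2 \text{ analytic} \;\Longrightarrow\;
  \dim_H\!\bigl(\mathrm{proj}_{\theta} E\bigr)
    = \min\{1,\, \dim_H E\}
  \quad \text{for almost every } \theta \in [0, \pi).
\]
```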
Exotic pairing symmetry of interacting Dirac fermions on a $π$ flux lattice | The pairing symmetry of interacting Dirac fermions on the $\pi$-flux lattice
is studied with the determinant quantum Monte Carlo and numerical linked
cluster expansion methods. The extended $s^*$- (i.e. extended $s$-) and d-wave
pairing symmetries, which are distinct in the conventional square lattice, are
degenerate under the Landau gauge. We demonstrate that the dominant pairing
channel at strong interactions is an exotic $ds^*$-wave phase consisting of
alternating stripes of $s^*$- and d-wave phases. A complementary mean-field
analysis shows that while the $s^*$- and d-wave symmetries individually have
nodes in the energy spectrum, the $ds^*$ channel is fully gapped. The results
represent a new realization of pairing in Dirac systems, connected to the
problem of chiral d-wave pairing on the honeycomb lattice, which might be more
readily accessed by cold-atom experiments.
| 0 | 1 | 0 | 0 | 0 | 0 |
Learning Causal Structures Using Regression Invariance | We study causal inference in a multi-environment setting, in which the
functional relations for producing the variables from their direct causes
remain the same across environments, while the distribution of exogenous noises
may vary. We introduce the idea of using the invariance of the functional
relations of the variables to their causes across a set of environments. We
define a notion of completeness for a causal inference algorithm in this
setting and prove the existence of such an algorithm by proposing a baseline
algorithm. Additionally, we present an alternate algorithm that has
significantly improved computational and sample complexity compared to the
baseline algorithm. The experimental results show that the proposed algorithm
outperforms the other existing algorithms.
| 1 | 0 | 0 | 1 | 0 | 0 |
Linear Exponential Comonads without Symmetry | The notion of linear exponential comonads on symmetric monoidal categories
has been used for modelling the exponential modality of linear logic. In this
paper we introduce linear exponential comonads on general (possibly
non-symmetric) monoidal categories, and show some basic results on them.
| 1 | 0 | 1 | 0 | 0 | 0 |
Multi-speaker Recognition in Cocktail Party Problem | This paper proposes an original statistical decision theory to accomplish a
multi-speaker recognition task in the cocktail party problem. This theory relies
on an assumption that the varied frequencies of speakers obey a Gaussian
distribution and the relationship of their voiceprints can be represented by
Euclidean distance vectors. This paper uses Mel-Frequency Cepstral Coefficients
to extract the feature of a voice in judging whether a speaker is included in a
multi-speaker environment and distinguish who the speaker should be. Finally, a
thirteen-dimensional constellation drawing is established by mapping from the
Manhattan distances between speakers, in order to account comprehensively for
the major influencing factors.
| 1 | 0 | 0 | 0 | 0 | 0 |
A DIRT-T Approach to Unsupervised Domain Adaptation | Domain adaptation refers to the problem of leveraging labeled data in a
source domain to learn an accurate model in a target domain where labels are
scarce or unavailable. A recent approach for finding a common representation of
the two domains is via domain adversarial training (Ganin & Lempitsky, 2015),
which attempts to induce a feature extractor that matches the source and target
feature distributions in some feature space. However, domain adversarial
training faces two critical limitations: 1) if the feature extraction function
has high-capacity, then feature distribution matching is a weak constraint, 2)
in non-conservative domain adaptation (where no single classifier can perform
well in both the source and target domains), training the model to do well on
the source domain hurts performance on the target domain. In this paper, we
address these issues through the lens of the cluster assumption, i.e., decision
boundaries should not cross high-density data regions. We propose two novel and
related models: 1) the Virtual Adversarial Domain Adaptation (VADA) model,
which combines domain adversarial training with a penalty term that punishes
violations of the cluster assumption; 2) the Decision-boundary Iterative
Refinement Training with a Teacher (DIRT-T) model, which takes the VADA model
as initialization and employs natural gradient steps to further minimize the
cluster assumption violation. Extensive empirical results demonstrate that the
combination of these two models significantly improve the state-of-the-art
performance on the digit, traffic sign, and Wi-Fi recognition domain adaptation
benchmarks.
| 0 | 0 | 0 | 1 | 0 | 0 |
Robust valley polarization of helium ion modified atomically thin MoS$_{2}$ | Atomically thin semiconductors have dimensions that are commensurate with
critical feature sizes of future optoelectronic devices defined using
electron/ion beam lithography. Robustness of their emergent optical and
valleytronic properties is essential for typical exposure doses used during
fabrication. Here, we explore how focused helium ion bombardment affects the
intrinsic vibrational, luminescence and valleytronic properties of atomically
thin MoS$_{2}$. By probing the disorder dependent vibrational response we
deduce the interdefect distance by applying a phonon confinement model. We show
that the increasing interdefect distance correlates with disorder-related
luminescence arising 180 meV below the neutral exciton emission. We perform
ab-initio density functional theory calculations for a variety of defect-related
morphologies, which yield first indications of the origin of the observed
additional luminescence. Remarkably, no significant reduction of free exciton
valley polarization is observed until the interdefect distance approaches a few
nanometers, namely the size of the free exciton Bohr radius. Our findings pave
the way for direct writing of sub-10 nm nanoscale valleytronic devices and
circuits using focused helium ions.
| 0 | 1 | 0 | 0 | 0 | 0 |
Sustained sensorimotor control as intermittent decisions about prediction errors: Computational framework and application to ground vehicle steering | A conceptual and computational framework is proposed for modelling of human
sensorimotor control, and is exemplified for the sensorimotor task of steering
a car. The framework emphasises control intermittency, and extends existing
models by suggesting that the nervous system implements intermittent control
using a combination of (1) motor primitives, (2) prediction of sensory outcomes
of motor actions, and (3) evidence accumulation of prediction errors. It is
shown that approximate but useful sensory predictions in the intermittent
control context can be constructed without detailed forward models, as a
superposition of simple prediction primitives, resembling neurobiologically
observed corollary discharges. The proposed mathematical framework allows
straightforward extension to intermittent behaviour from existing
one-dimensional continuous models in the linear control and ecological
psychology traditions. Empirical observations from a driving simulator provide
support for some of the framework assumptions: It is shown that human steering
control, in routine lane-keeping and in a demanding near-limit task, is better
described as a sequence of discrete stepwise steering adjustments, than as
continuous control. Furthermore, the amplitudes of individual steering
adjustments are well predicted by a compound visual cue signalling steering
error, and even better so if also adjusting for predictions of how the same cue
is affected by previous control. Finally, evidence accumulation is shown to
explain observed covariability between inter-adjustment durations and
adjustment amplitudes, seemingly better so than the type of threshold
mechanisms that are typically assumed in existing models of intermittent
control.
| 1 | 0 | 0 | 0 | 0 | 0 |
COLOSSUS: A python toolkit for cosmology, large-scale structure, and dark matter halos | This paper introduces Colossus, a public, open-source python package for
calculations related to cosmology, the large-scale structure (LSS) of matter in
the universe, and the properties of dark matter halos. The code is designed to
be fast and easy to use, with a coherent, well-documented user interface. The
cosmology module implements Friedmann-Lemaître-Robertson-Walker cosmologies
including curvature, relativistic species, and different dark energy equations
of state, and provides fast computations of the linear matter power spectrum,
variance, and correlation function. The LSS module is concerned with the
properties of peaks in Gaussian random fields and halos in a statistical sense,
including their peak height, peak curvature, halo bias, and mass function. The
halo module deals with spherical overdensity radii and masses, density
profiles, concentration, and the splashback radius. To facilitate the rapid
exploration of these quantities, Colossus implements more than 40 different
fitting functions from the literature. I discuss the core routines in detail,
with particular emphasis on their accuracy. Colossus is available at
bitbucket.org/bdiemer/colossus.
| 0 | 1 | 0 | 0 | 0 | 0 |
Generalized Similarity U: A Non-parametric Test of Association Based on Similarity | Second generation sequencing technologies are being increasingly used for
genetic association studies, where the main research interest is to identify
sets of genetic variants that contribute to various phenotypes. The phenotype
can be univariate disease status, multivariate responses and even
high-dimensional outcomes. Considering the genotype and phenotype as two
complex objects, this also poses a general statistical problem of testing
association between complex objects. We here propose a similarity-based test,
generalized similarity U (GSU), that can test the association between complex
objects. We first studied the theoretical properties of the test in a general
setting and then focused on the application of the test to sequencing
association studies. Based on theoretical analysis, we proposed to use
Laplacian kernel based similarity for GSU to boost power and enhance
robustness. Through simulation, we found that GSU did have advantages over
existing methods in terms of power and robustness. We further performed a whole
genome sequencing (WGS) scan for Alzheimer's Disease Neuroimaging Initiative
(ADNI) data, identifying three genes, APOE, APOC1 and TOMM40, associated with
imaging phenotype. We developed a C++ package for analysis of whole genome
sequencing data using GSU. The source codes can be downloaded at
this https URL.
| 0 | 0 | 0 | 1 | 1 | 0 |
Hydra: An Accelerator for Real-Time Edge-Aware Permeability Filtering in 65nm CMOS | Many modern video processing pipelines rely on edge-aware (EA) filtering
methods. However, recent high-quality methods are challenging to run in
real-time on embedded hardware due to their computational load. To this end, we
propose an area-efficient and real-time capable hardware implementation of a
high quality EA method. In particular, we focus on the recently proposed
permeability filter (PF) that delivers promising quality and performance in the
domains of HDR tone mapping, disparity and optical flow estimation. We present
an efficient hardware accelerator that implements a tiled variant of the PF
with low on-chip memory requirements and a significantly reduced external
memory bandwidth (6.4x w.r.t. the non-tiled PF). The design has been taped out
in 65 nm CMOS technology, is able to filter 720p grayscale video at 24.8 Hz and
achieves a high compute density of 6.7 GFLOPS/mm2 (12x higher than embedded
GPUs when scaled to the same technology node). The low area and bandwidth
requirements make the accelerator highly suitable for integration into SoCs
where silicon area budget is constrained and external memory is typically a
heavily contended resource.
| 1 | 0 | 0 | 0 | 0 | 0 |
Non-Convex Weighted Lp Nuclear Norm based ADMM Framework for Image Restoration | Since the matrix formed by nonlocal similar patches in a natural image is of
low rank, the nuclear norm minimization (NNM) has been widely used in various
image processing studies. Nonetheless, nuclear norm based convex surrogate of
the rank function usually over-shrinks the rank components and treats different
components equally, and thus may produce a result far from the optimum. To
alleviate the above-mentioned limitations of the nuclear norm, in this paper we
propose a new method for image restoration via the non-convex weighted Lp
nuclear norm minimization (NCW-NNM), which is able to more accurately enforce
the image structural sparsity and self-similarity simultaneously. To make the
proposed model tractable and robust, the alternating direction method of
multipliers (ADMM) is adopted to solve the associated non-convex minimization
problem. Experimental results on various types of image restoration problems,
including image deblurring, image inpainting and image compressive sensing (CS)
recovery, demonstrate that the proposed method outperforms many current
state-of-the-art methods in both the objective and the perceptual qualities.
| 1 | 0 | 0 | 0 | 0 | 0 |
A Contextual Bandit Approach for Stream-Based Active Learning | Contextual bandit algorithms -- a class of multi-armed bandit algorithms that
exploit the contextual information -- have been shown to be effective in
solving sequential decision making problems under uncertainty. A common
assumption adopted in the literature is that the realized (ground truth) reward
of taking the selected action is observed by the learner at no cost, which,
however, is not realistic in many practical scenarios. When observing the
ground truth reward is costly, a key challenge for the learner is how to
judiciously acquire the ground truth by assessing the benefits and costs in
order to balance learning efficiency and learning cost. From the information
theoretic perspective, a perhaps even more interesting question is how much
efficiency might be lost due to this cost. In this paper, we design a novel
contextual bandit-based learning algorithm and endow it with the active
learning capability. The key feature of our algorithm is that in addition to
sending a query to an annotator for the ground truth, prior information about
the ground truth learned by the learner is sent together, thereby reducing the
query cost. We prove that by carefully choosing the algorithm parameters, the
learning regret of the proposed algorithm achieves the same order as that of
conventional contextual bandit algorithms in cost-free scenarios, implying
that, surprisingly, cost due to acquiring the ground truth does not increase
the learning regret in the long-run. Our analysis shows that prior information
about the ground truth plays a critical role in improving the system
performance in scenarios where active learning is necessary.
| 1 | 0 | 0 | 0 | 0 | 0 |
Localization of ions within one-, two- and three-dimensional Coulomb crystals by a standing wave optical potential | We demonstrate light-induced localization of Coulomb-interacting particles in
multi-dimensional structures. Subwavelength localization of ions within small
multi-dimensional Coulomb crystals by an intracavity optical standing wave
field is evidenced by measuring the difference in scattering inside
symmetrically red- and blue-detuned optical lattices and is observed even for
ions undergoing substantial radial micromotion. These results are promising
steps towards the structural control of ion Coulomb crystals by optical fields
as well as for complex many-body simulations with ion crystals or for the
investigation of heat transfer at the nanoscale, and have potential
applications for ion-based cavity quantum electrodynamics, cavity optomechanics
and ultracold ion chemistry.
| 0 | 1 | 0 | 0 | 0 | 0 |
Joint Modeling of Event Sequence and Time Series with Attentional Twin Recurrent Neural Networks | A variety of real-world processes (over networks) produce sequences of data
whose complex temporal dynamics need to be studied. More specifically, the event
timestamps can carry important information about the underlying network
dynamics, which otherwise are not available from the time-series evenly sampled
from continuous signals. Moreover, in most complex processes, event sequences
and evenly-sampled times series data can interact with each other, which
renders joint modeling of those two sources of data necessary. To tackle the
above problems, in this paper, we utilize the rich framework of (temporal)
point processes to model event data and timely update its intensity function by
the synergic twin Recurrent Neural Networks (RNNs). In the proposed
architecture, the intensity function is synergistically modulated by one RNN
with asynchronous events as input and another RNN with time series as input.
Furthermore, to enhance the interpretability of the model, the attention
mechanism for the neural point process is introduced. The whole model with
event type and timestamp prediction output layers can be trained end-to-end and
allows a black-box treatment for modeling the intensity. We substantiate the
superiority of our model in synthetic data and three real-world benchmark
datasets.
| 1 | 0 | 0 | 0 | 0 | 0 |
Low Auto-correlation Binary Sequences explored using Warning Propagation | The search of binary sequences with low auto-correlations (LABS) is a
discrete combinatorial optimization problem contained in the NP-hard
computational complexity class. We study this problem using Warning Propagation
(WP), a message-passing algorithm, and compare the performance of the
algorithm in the original problem and in two different disordered versions. We
show that in all the cases Warning Propagation converges to low energy minima
of the solution space. Our results highlight the importance of the local
structure of the interaction graph of the variables for the convergence time of
the algorithm and for the quality of the solutions obtained by WP. While in
general the algorithm does not provide the optimal solutions in large systems,
it does provide, in polynomial time, solutions that are energetically similar
to the optimal ones. Moreover, we designed hybrid models that interpolate
between the standard LABS problem and the disordered versions of it, and
exploit them to improve the convergence time of WP and the quality of the
solutions.
| 0 | 1 | 0 | 0 | 0 | 0 |
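As a concrete illustration of the objective being minimized: the LABS energy of a length-$n$ sequence $s \in \{\pm 1\}^n$ is $E(s) = \sum_{k=1}^{n-1} C_k(s)^2$, where $C_k(s) = \sum_i s_i s_{i+k}$ are the aperiodic autocorrelations. A brute-force sketch (this is the problem definition only, not the Warning Propagation algorithm of the paper) is:

```python
from itertools import product

def labs_energy(s):
    """Sum of squared aperiodic autocorrelations of a +/-1 sequence."""
    n = len(s)
    return sum(
        sum(s[i] * s[i + k] for i in range(n - k)) ** 2
        for k in range(1, n)
    )

# Exhaustive search over all 2^n sequences -- feasible only for tiny n,
# which is exactly why heuristics such as Warning Propagation are needed
# for this NP-hard problem.
def brute_force_optimum(n):
    return min(labs_energy(s) for s in product((-1, 1), repeat=n))

print(labs_energy((1, 1, 1, -1, 1)))  # Barker-5 sequence: energy 2
print(brute_force_optimum(5))         # 2 -- Barker-5 is optimal
```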
Estimation of the marginal expected shortfall under asymptotic independence | We study the asymptotic behavior of the marginal expected shortfall when the
two random variables are asymptotically independent but positively associated,
which is modeled by the so-called tail dependence coefficient. We construct an
estimator of the marginal expected shortfall which is shown to be
asymptotically normal. The finite sample performance of the estimator is
investigated in a small simulation study. The method is also applied to
estimate the expected amount of rainfall at a weather station given that there
is a once every 100 years rainfall at another weather station nearby.
| 0 | 0 | 1 | 1 | 0 | 0 |
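A minimal empirical version of the estimator idea (average of X over the observations where Y is largest; a simplification that ignores the extrapolation beyond the sample needed in the asymptotically independent case the paper treats) is:

```python
import math

def empirical_mes(x, y, p):
    """Naive empirical marginal expected shortfall:
    average of X over the ceil(n*p) observations with the largest Y.
    """
    n = len(x)
    k = max(1, math.ceil(n * p))  # number of tail observations
    # indices of the k largest Y values
    order = sorted(range(n), key=lambda i: y[i], reverse=True)[:k]
    return sum(x[i] for i in order) / k

# Toy data: X tends to be large when Y is large.
x = [1.0, 2.0, 10.0, 3.0, 12.0]
y = [0.1, 0.2, 5.0, 0.3, 6.0]
print(empirical_mes(x, y, 0.4))  # averages X over the two largest Y: 11.0
```

`empirical_mes` is an illustrative helper name, not notation from the paper.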
Downgrade Attack on TrustZone | Security-critical tasks require proper isolation from untrusted software.
Chip manufacturers design and include trusted execution environments (TEEs) in
their processors to secure these tasks. The integrity and security of the
software in the trusted environment depend on the verification process of the
system.
We find a form of attack that can be performed on the current implementations
of the widely deployed ARM TrustZone technology. The attack exploits the fact
that the trustlet (TA) or TrustZone OS loading verification procedure may use
the same verification key and may lack proper rollback prevention across
versions. If an exploit works on an out-of-date version, but the vulnerability
is patched on the latest version, an attacker can still use the same exploit to
compromise the latest system by downgrading the software to an older and
exploitable version.
We did experiments on popular devices on the market including those from
Google, Samsung and Huawei, and found that all of them have the risk of being
attacked. Also, we show a real-world example to exploit Qualcomm's QSEE.
In addition, in order to find out which device images share the same
verification key, pattern matching schemes for different vendors are analyzed
and summarized.
| 1 | 0 | 0 | 0 | 0 | 0 |
Robust Inference under the Beta Regression Model with Application to Health Care Studies | Data on rates, percentages or proportions arise frequently in many different
applied disciplines like medical biology, health care, psychology and several
others. In this paper, we develop a robust inference procedure for the beta
regression model which is used to describe such response variables taking
values in $(0, 1)$ through some related explanatory variables. In relation to
the beta regression model, the issue of robustness has been largely ignored in
the literature so far. The existing maximum likelihood based inference has a
serious lack of robustness against outliers in the data and generates drastically
different (erroneous) inferences in the presence of data contamination. Here, we
develop the robust minimum density power divergence estimator and a class of
robust Wald-type tests for the beta regression model along with several
applications. We derive their asymptotic properties and describe their
robustness theoretically through the influence function analyses. Finite sample
performances of the proposed estimators and tests are examined through suitable
simulation studies and real data applications in the context of health care and
psychology. Although we primarily focus on the beta regression models with a
fixed dispersion parameter, some indications are also provided for extension to
the variable dispersion beta regression models with an application.
| 0 | 0 | 0 | 1 | 0 | 0 |
PageRank in Undirected Random Graphs | PageRank has numerous applications in information retrieval, reputation
systems, machine learning, and graph partitioning. In this paper, we study
PageRank in undirected random graphs with an expansion property. The Chung-Lu
random graph is an example of such a graph. We show that in the limit, as the
size of the graph goes to infinity, PageRank can be approximated by a mixture
of the restart distribution and the vertex degree distribution. We also extend
the result to Stochastic Block Model (SBM) graphs, where we show that there is
a correction term that depends on the community partitioning.
| 0 | 0 | 1 | 0 | 0 | 0 |
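The mixture result can be checked numerically on toy graphs: with restart probability $1-\alpha$ and uniform restart distribution, the approximation reads $\pi(v) \approx (1-\alpha)/n + \alpha\, d(v)/2m$, mixing the restart distribution with the degree distribution. A sketch (a numerical illustration, not the paper's proof technique):

```python
def pagerank(adj, alpha=0.85, iters=200):
    """Power iteration for PageRank with uniform restart on an
    undirected graph given as adjacency lists."""
    n = len(adj)
    pr = [1.0 / n] * n
    for _ in range(iters):
        nxt = [(1.0 - alpha) / n] * n
        for u, nbrs in enumerate(adj):
            share = alpha * pr[u] / len(nbrs)
            for v in nbrs:
                nxt[v] += share
        pr = nxt
    return pr

# Star graph: vertex 0 joined to 1, 2, 3 (degrees 3, 1, 1, 1; 2m = 6).
adj = [[1, 2, 3], [0], [0], [0]]
pr = pagerank(adj)
n, two_m = 4, 6
mixture = [(1 - 0.85) / n + 0.85 * len(nbrs) / two_m for nbrs in adj]
# The mixture of restart and degree distributions tracks true PageRank.
print(max(abs(a - b) for a, b in zip(pr, mixture)))  # < 0.05 here
```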
Room temperature line lists for CO$_2$ symmetric isotopologues with \textit{ab initio} computed intensities | Remote sensing experiments require high-accuracy, preferably sub-percent,
line intensities and in response to this need we present computed room
temperature line lists for six symmetric isotopologues of carbon dioxide:
$^{13}$C$^{16}$O$_2$, $^{14}$C$^{16}$O$_2$, $^{12}$C$^{17}$O$_2$,
$^{12}$C$^{18}$O$_2$, $^{13}$C$^{17}$O$_2$ and $^{13}$C$^{18}$O$_2$, covering
the range 0-8000 cm$^{-1}$. Our calculation scheme is based on variational nuclear
motion calculations and on a reliability analysis of the generated line
intensities. Rotation-vibration wavefunctions and energy levels are computed
using the DVR3D software suite and a high quality semi-empirical potential
energy surface (PES), followed by computation of intensities using an
\textit{ab initio} dipole moment surface (DMS). Four line lists are computed for each
isotopologue to quantify sensitivity to minor distortions of the PES/DMS.
Reliable lines are benchmarked against recent state-of-the-art measurements and
against the HITRAN2012 database, supporting the claim that the majority of line
intensities for strong bands are predicted with sub-percent accuracy. Accurate
line positions are generated using an effective Hamiltonian. We recommend the
use of these line lists for future remote sensing studies and their inclusion
in databases.
| 0 | 1 | 0 | 0 | 0 | 0 |
The GIT moduli of semistable pairs consisting of a cubic curve and a line on ${\mathbb P}^{2}$ | We discuss the GIT moduli of semistable pairs consisting of a cubic curve and
a line on the projective plane. We study this moduli space in some detail and
compare it with another moduli space suggested by Alexeev, namely the moduli of
pairs (with no specified semi-abelian action) consisting of a cubic curve with at worst nodal
singularities and a line which does not pass through singular points of the
cubic curve. We also compare Nakamura's compactification of the moduli of level
three elliptic curves with these two moduli spaces.
| 0 | 0 | 1 | 0 | 0 | 0 |
On the Genus of the Moonshine Module | We provide a novel and simple description of Schellekens' seventy-one affine
Kac-Moody structures of self-dual vertex operator algebras of central charge 24
by utilizing cyclic subgroups of the glue codes of the Niemeier lattices with
roots. We also discuss a possible uniform construction procedure of the
self-dual vertex operator algebras of central charge 24 starting from the Leech
lattice. This also allows us to consider the uniqueness question for all
non-trivial affine Kac-Moody structures. We finally discuss our description
from a Lorentzian viewpoint.
| 0 | 0 | 1 | 0 | 0 | 0 |
A Deep Learning-based Reconstruction of Cosmic Ray-induced Air Showers | We describe a method of reconstructing air showers induced by cosmic rays
using deep learning techniques. We simulate an observatory consisting of
ground-based particle detectors with fixed locations on a regular grid. The
detectors' responses to traversing shower particles are signal amplitudes as a
function of time, which provide information on transverse and longitudinal
shower properties. In order to take advantage of convolutional network
techniques specialized in local pattern recognition, we convert all information
to the image-like grid of the detectors. In this way, multiple features, such
as arrival times of the first particles and optimized characterizations of time
traces, are processed by the network. The reconstruction quality of the cosmic
ray arrival direction turns out to be competitive with an analytic
reconstruction algorithm. The reconstructed shower direction, energy and shower
depth show the expected improvement in resolution for higher cosmic ray energy.
| 0 | 1 | 0 | 0 | 0 | 0 |
The unexpected resurgence of Weyl geometry in late 20th-century physics | Weyl's original scale geometry of 1918 ("purely infinitesimal geometry") was
withdrawn by its author from physical theorizing in the early 1920s. It had a
comeback in the last third of the 20th century in different contexts:
scalar-tensor theories of gravity, foundations of gravity, foundations of quantum
mechanics, elementary particle physics, and cosmology. It seems that Weyl
geometry continues to offer an open research potential for the foundations of
physics even after the turn to the new millennium.
| 0 | 1 | 1 | 0 | 0 | 0 |
Discrete Dynamic Causal Modeling and Its Relationship with Directed Information | This paper explores the discrete Dynamic Causal Modeling (DDCM) and its
relationship with Directed Information (DI). We prove the conditional
equivalence between DDCM and DI in characterizing the causal relationship
between two brain regions. The theoretical results are demonstrated using fMRI
data obtained under both resting and stimulus-based states. Our numerical
analysis is consistent with that reported in a previous study.
| 0 | 0 | 0 | 1 | 0 | 0 |
Two-step approach to scheduling quantum circuits | As the effort to scale up existing quantum hardware proceeds, it becomes
necessary to schedule quantum gates in a way that minimizes the number of
operations. There are three constraints that have to be satisfied: the order or
dependency of the quantum gates in the specific algorithm, the fact that any
qubit may be involved in at most one gate at a time, and the restriction that
two-qubit gates are implementable only between connected qubits. The last
aspect implies that the compilation depends not only on the algorithm, but also
on hardware properties like connectivity. Here we suggest a two-step approach
in which logical gates are initially scheduled neglecting connectivity
considerations, while routing operations are added at a later step in a way
that minimizes their overhead. We rephrase the subtasks of gate scheduling in
terms of graph problems like edge-coloring and maximum subgraph isomorphism.
While this approach is general, we specialize to a one-dimensional array of
qubits to propose a routing scheme that is minimal in the number of exchange
operations. As a practical application, we schedule the Quantum Approximate
Optimization Algorithm in a linear geometry and quantify the reduction in the
number of gates and circuit depth that results from increasing the efficacy of
the scheduling strategies.
| 1 | 0 | 0 | 0 | 0 | 0 |
A Study of the Allan Variance for Constant-Mean Non-Stationary Processes | The Allan Variance (AV) is a widely used quantity in areas focusing on error
measurement as well as in the general analysis of variance for autocorrelated
processes in domains such as engineering and, more specifically, metrology. The
form of this quantity is widely used to detect noise patterns and indications
of stability within signals. However, the properties of this quantity are not
known for commonly occurring processes whose covariance structure is
non-stationary and, in these cases, an erroneous interpretation of the AV could
lead to misleading conclusions. This paper generalizes the theoretical form of
the AV to some non-stationary processes, while remaining valid also for weakly
stationary processes. Simulation examples show how this new form helps to
understand such processes, which the AV is able to distinguish from the
stationary cases, hence allowing for a better interpretation of this quantity
in applied settings.
| 0 | 0 | 1 | 1 | 0 | 0 |
Foresight: Recommending Visual Insights | Current tools for exploratory data analysis (EDA) require users to manually
select data attributes, statistical computations and visual encodings. This can
be daunting for large-scale, complex data. We introduce Foresight, a system
that helps the user rapidly discover visual insights from large
high-dimensional datasets. Formally, an "insight" is a strong manifestation of
a statistical property of the data, e.g., high correlation between two
attributes, high skewness or concentration about the mean of a single
attribute, a strong clustering of values, and so on. For each insight type,
Foresight initially presents visualizations of the top k instances in the data,
based on an appropriate ranking metric. The user can then look at "nearby"
insights by issuing "insight queries" containing constraints on insight
strengths and data attributes. Thus the user can directly explore the space of
insights, rather than the space of data dimensions and visual encodings as in
other visual recommender systems. Foresight also provides "global" views of
insight space to help orient the user and ensure a thorough exploration
process. Furthermore, Foresight facilitates interactive exploration of large
datasets through fast, approximate sketching.
| 1 | 0 | 0 | 0 | 0 | 0 |
Fast Stochastic Variance Reduced ADMM for Stochastic Composition Optimization | We consider the stochastic composition optimization problem proposed in
\cite{wang2017stochastic}, which has applications ranging from estimation to
statistical and machine learning problems. We propose the first ADMM-based algorithm
named com-SVR-ADMM, and show that com-SVR-ADMM converges linearly for strongly
convex and Lipschitz smooth objectives, and has a convergence rate of $O( \log
S/S)$, which improves upon the $O(S^{-4/9})$ rate in
\cite{wang2016accelerating} when the objective is convex and Lipschitz smooth.
Moreover, com-SVR-ADMM possesses a rate of $O(1/\sqrt{S})$ when the objective
is convex but without Lipschitz smoothness. We also conduct experiments and
show that it outperforms existing algorithms.
| 1 | 0 | 0 | 1 | 0 | 0 |
Hierarchical Behavioral Repertoires with Unsupervised Descriptors | Enabling artificial agents to automatically learn complex, versatile and
high-performing behaviors is a long-standing challenge. This paper presents a
step in this direction with hierarchical behavioral repertoires that stack
several behavioral repertoires to generate sophisticated behaviors. Each
repertoire of this architecture uses the lower repertoires to create complex
behaviors as sequences of simpler ones, while only the lowest repertoire
directly controls the agent's movements. This paper also introduces a novel
approach to automatically define behavioral descriptors thanks to an
unsupervised neural network that organizes the produced high-level behaviors.
The experiments show that the proposed architecture enables a robot to learn
how to draw digits in an unsupervised manner after having learned to draw lines
and arcs. Compared to traditional behavioral repertoires, the proposed
architecture reduces the dimensionality of the optimization problems by orders
of magnitude and yields behaviors with twice the fitness. More
importantly, it enables the transfer of knowledge between robots: a
hierarchical repertoire evolved for a robotic arm to draw digits can be
transferred to a humanoid robot by simply changing the lowest layer of the
hierarchy. This enables the humanoid to draw digits although it has never been
trained for this task.
| 1 | 0 | 0 | 0 | 0 | 0 |
Indirect Image Registration with Large Diffeomorphic Deformations | The paper adapts the large deformation diffeomorphic metric mapping framework
for image registration to the indirect setting where a template is registered
against a target that is given through indirect noisy observations. The
registration uses diffeomorphisms that transform the template through a (group)
action. These diffeomorphisms are generated by solving a flow equation that is
defined by a velocity field with certain regularity. The theoretical analysis
includes a proof that indirect image registration has solutions (existence)
that are stable and that converge as the data error tends to zero, so it
becomes a well-defined regularization method. The paper concludes with examples
of indirect image registration in 2D tomography with very sparse and/or highly
noisy data.
| 1 | 0 | 1 | 0 | 0 | 0 |
Morphological estimators on Sunyaev--Zel'dovich maps of MUSIC clusters of galaxies | The determination of the morphology of galaxy clusters has important
repercussions for their cosmological and astrophysical studies. In this paper we
address the morphological characterisation of synthetic maps of the
Sunyaev--Zel'dovich (SZ) effect produced for a sample of 258 massive clusters
($M_{vir}>5\times10^{14}h^{-1}$M$_\odot$ at $z=0$), extracted from the MUSIC
hydrodynamical simulations. Specifically, we apply five known morphological
parameters, already used in X-ray, two newly introduced ones, and we combine
them together in a single parameter. We analyse two sets of simulations
obtained with different prescriptions of the gas physics (non radiative and
with cooling, star formation and stellar feedback) at four redshifts between
0.43 and 0.82. For each parameter we test its stability and efficiency to
discriminate the true cluster dynamical state, measured by theoretical
indicators. The combined parameter discriminates more efficiently relaxed and
disturbed clusters. This parameter shows a mild correlation ($\sim 0.3$) with
the hydrostatic mass and a strong correlation ($\sim 0.8$) with the offset
between the SZ centroid and the cluster centre of mass. The latter quantity
proves to be the most accessible and efficient indicator of the dynamical state
for SZ studies.
| 0 | 1 | 0 | 0 | 0 | 0 |
Hierarchy of exchange interactions in the triangular-lattice spin-liquid YbMgGaO$_{4}$ | The spin-1/2 triangular lattice antiferromagnet YbMgGaO$_{4}$ has attracted
recent attention as a quantum spin-liquid candidate with the possible presence
of off-diagonal anisotropic exchange interactions induced by spin-orbit
coupling. Whether a quantum spin-liquid is stabilized or not depends on the
interplay of various exchange interactions with chemical disorder that is
inherent to the layered structure of the compound. We combine time-domain
terahertz spectroscopy and inelastic neutron scattering measurements in the
field polarized state of YbMgGaO$_{4}$ to obtain better microscopic insights on
its exchange interactions. Terahertz spectroscopy in this fashion functions as
high-field electron spin resonance and probes the spin-wave excitations at the
Brillouin zone center, ideally complementing neutron scattering. A global
spin-wave fit to all our spectroscopic data at fields above 4 T, informed by the
analysis of the terahertz spectroscopy linewidths, yields stringent constraints
on $g$-factors and exchange interactions. Our results paint YbMgGaO$_{4}$ as an
easy-plane XXZ antiferromagnet with the combined and necessary presence of
sub-leading next-nearest neighbor and weak anisotropic off-diagonal
nearest-neighbor interactions. Moreover, the obtained $g$-factors are
substantially different from previous reports. This work establishes the
hierarchy of exchange interactions in YbMgGaO$_{4}$ from high-field data alone
and thus strongly constrains possible mechanisms responsible for the observed
spin-liquid phenomenology.
| 0 | 1 | 0 | 0 | 0 | 0 |
Pulsar braking and the P-Pdot diagram | The location of radio pulsars in the period-period derivative (P-Pdot) plane
has been a key diagnostic tool since the early days of pulsar astronomy. Of
particular importance is how pulsars evolve through the P-Pdot diagram with
time. Here we show that the decay of the inclination angle (alpha-dot) between
the magnetic and rotation axes plays a critical role. In particular, alpha-dot
strongly impacts the braking torque, an effect which has been largely
ignored in previous work. We carry out simulations which include a negative
alpha-dot term, and show that it is possible to reproduce the observational
P-Pdot diagram without the need for either pulsars with long birth periods or
magnetic field decay. Our best model indicates a birth rate of 1 radio pulsar
per century and a total Galactic population of ~20000 pulsars beaming towards
Earth.
| 0 | 1 | 0 | 0 | 0 | 0 |
Uniqueness of stable capillary hypersurfaces in a ball | In this paper we prove that any immersed stable capillary hypersurface in a
ball in a space form is totally umbilical. This solves completely a
long-standing open problem. One of the crucial ingredients in the proof is a new
Minkowski type formula. We also prove a Heintze-Karcher-Ros type inequality for
hypersurfaces in a ball, which, together with the new Minkowski formula, yields
a new proof of Alexandrov's Theorem for embedded CMC hypersurfaces in a ball
with free boundary.
| 0 | 0 | 1 | 0 | 0 | 0 |
A Potential Recoiling Supermassive Black Hole CXO J101527.2+625911 | We have carried out a systematic search for recoiling supermassive black
holes (rSMBH) using the Chandra Source and SDSS Cross Matched Catalog. From the
survey, we have detected a potential rSMBH, 'CXO J101527.2+625911' at z=0.3504.
CXO J101527.2+625911 has a spatially offset (1.26$\pm$0.05 kpc) active SMBH
and kinematically offset broad emission lines (175$\pm$25 km s$^{\rm -1}$
relative to systemic velocity). The observed spatial and velocity offsets
suggest this galaxy could be a rSMBH, but we have also considered the
possibility of a dual-SMBH scenario. The column density towards the galaxy
center was found to be Compton thin, but no X-ray source was detected. The
non-detection of an X-ray source in the nucleus suggests that either there is
no obscured, actively accreting SMBH, or an SMBH exists but has a low accretion
rate, i.e. a low-luminosity AGN (LLAGN). The possibility of an LLAGN was investigated and
found to be unlikely based on the H$\alpha$ luminosity, radio power, and
kinematic arguments. This, along with the null detection of an X-ray source in
the nucleus, supports our hypothesis that CXO J101527.2+625911 is a rSMBH. Our
GALFIT analysis shows the host galaxy to be a bulge-dominated elliptical. The
weak morphological disturbance and small spatial and velocity offsets suggest
that CXO J101527.2+625911 could be in the final stage of the merging process and
about to turn into a normal elliptical galaxy.
| 0 | 1 | 0 | 0 | 0 | 0 |
WLAN Performance Analysis at the Ibrahim Group of Industries, Faisalabad, Pakistan | Nowadays, many organizations are migrating their LAN infrastructure to
wireless LAN frameworks. The reason for this is straightforward: multinational
organizations want their clients to be impressed by their office surroundings,
and they also want to create wire-free environments in their workplaces. Much
IT equipment has moved to wireless, for instance all-in-one PCs, laptop
workstations, and wireless IP telephones. Moreover, WLAN technology is steadily
moving towards greater efficiency. In this research work, the wireless LAN
deployment running at the Ibrahim Group of industries, Faisalabad, has been
investigated in terms of its hardware, wireless signal quality, data
transmission, automatic channel switching, and WLAN security. This work
required a physical testbed, WLAN network analyzer (TamoSoft throughput)
software, hardware details, and security testing software. The investigation
presented in this work serves two key purposes. The first is to validate that
this kind of network interconnection can be analyzed using experimental models
of the two network segments (wired and wireless). The second is to determine
the security issues in the WLAN.
| 1 | 0 | 0 | 0 | 0 | 0 |
Archetypes for Representing Data about the Brazilian Public Hospital Information System and Outpatient High Complexity Procedures System | The Brazilian Ministry of Health has selected the openEHR model as a standard
for electronic health record systems. This paper presents a set of archetypes
to represent the main data from the Brazilian Public Hospital Information
System and the High Complexity Procedures Module of the Brazilian public
Outpatient Health Information System. The archetypes from the public openEHR
Clinical Knowledge Manager (CKM), were examined in order to select archetypes
that could be used to represent the data of the above mentioned systems. For
several concepts, it was necessary to specialize the CKM archetypes, or design
new ones. A total of 22 archetypes were used: 8 new, 5 specialized and 9 reused
from CKM. This set of archetypes can be used not only for information exchange,
but also for generating a big anonymized dataset for testing openEHR-based
systems.
| 1 | 0 | 0 | 0 | 0 | 0 |
Global linear convergent algorithm to compute the minimum volume enclosing ellipsoid | The minimum volume enclosing ellipsoid (MVEE) problem is an optimization
problem underlying many practical problems. This paper describes some new
properties of this model and proposes a first-order oracle algorithm, the
Adjusted Coordinate Descent (ACD) algorithm, to address the MVEE problem. The
ACD algorithm is globally linearly convergent and has an overwhelming advantage
over the other algorithms in cases where the dimension of the data is large.
Moreover, as a byproduct of the convergence property of the ACD algorithm, we
prove the global linear convergence of the Frank-Wolfe type algorithm
(illustrated by the case of Wolfe-Atwood's algorithm), which supports the
conjecture of Todd. Furthermore, we provide a new interpretation for the means
of choosing the coordinate axis of the Frank-Wolfe type algorithm from the
perspective of the smoothness of the coordinate axis, i.e., the algorithm
chooses the coordinate axis with the worst smoothness at each iteration. This
finding connects the first-order oracle algorithm and the linear optimization
oracle algorithm on the MVEE problem. The numerical tests support our
theoretical results.
| 0 | 0 | 1 | 0 | 0 | 0 |
Asymptotic behavior of memristive circuits and combinatorial optimization | The interest in memristors has risen due to their possible application both
as memory units and as computational devices in combination with CMOS. This is
in part due to their nonlinear dynamics and a strong dependence on the circuit
topology. We provide evidence that purely memristive circuits, too, can be
employed for computational purposes. We show that a Lyapunov function,
polynomial in the internal memory parameters, exists for the case of DC
controlled memristors. Such a Lyapunov function can be asymptotically mapped to
quadratic combinatorial optimization problems. This shows a direct parallel
between memristive circuits and the Hopfield-Little model. In the case of
Erdős-Rényi random circuits, we provide numerical evidence that the
distribution of the matrix elements of the couplings can be roughly
approximated by a Gaussian distribution, and that they scale with the inverse
square root of the number of elements. This provides an approximated but direct
connection to the physics of disordered systems and, in particular, of mean
field spin glasses. Using this and the fact that the interaction is controlled
by a projector operator on the loop space of the circuit, we estimate the
number of stationary points of the Lyapunov function, and provide a scaling
formula as an upper bound in terms of the circuit topology only. In order to
put these ideas into practice, we provide an instance of optimization of the
Nikkei 225 dataset in the Markowitz framework, and show that it is competitive
compared to exponential annealing.
| 1 | 1 | 0 | 0 | 0 | 0 |
Contrasting information theoretic decompositions of modulatory and arithmetic interactions in neural information processing systems | Biological and artificial neural systems are composed of many local
processors, and their capabilities depend upon the transfer function that
relates each local processor's outputs to its inputs. This paper uses a recent
advance in the foundations of information theory to study the properties of
local processors that use contextual input to amplify or attenuate transmission
of information about their driving inputs. This advance enables the information
transmitted by processors with two distinct inputs to be decomposed into those
components unique to each input, that shared between the two inputs, and that
which depends on both though it is in neither, i.e. synergy. The decompositions
that we report here show that contextual modulation has information processing
properties that contrast with those of all four simple arithmetic operators,
that it can take various forms, and that the form used in our previous studies
of artificial neural nets composed of local processors with both driving and
contextual inputs is particularly well-suited to provide the distinctive
capabilities of contextual modulation under a wide range of conditions. We
argue that the decompositions reported here could be compared with those
obtained from empirical neurobiological and psychophysical data under
conditions thought to reflect contextual modulation. That would then shed new
light on the underlying processes involved. Finally, we suggest that such
decompositions could aid the design of context-sensitive machine learning
algorithms.
| 0 | 0 | 0 | 1 | 1 | 0 |
Recommendations for Marketing Campaigns in Telecommunication Business based on the footprint analysis | A major investment made by a telecom operator goes into the infrastructure
and its maintenance, while business revenues are proportional to the size and
quality of the customer base. We present a data-driven analytic strategy based on
combinatorial optimization and analysis of historical data. The data cover
historical mobility of the users in one region of Sweden during a week.
Applying the proposed method to the case study, we have identified the optimal
proportion of geo-demographic segments in the customer base, developed a
functionality to assess the potential of a planned marketing campaign, and
explored the problem of an optimal number and types of the geo-demographic
segments to target through marketing campaigns. With the help of fuzzy logic,
the conclusions of data analysis are automatically translated into
comprehensible recommendations in a natural language.
| 1 | 0 | 0 | 0 | 0 | 0 |
Enhancing The Reliability of Out-of-distribution Image Detection in Neural Networks | We consider the problem of detecting out-of-distribution images in neural
networks. We propose ODIN, a simple and effective method that does not require
any change to a pre-trained neural network. Our method is based on the
observation that using temperature scaling and adding small perturbations to
the input can separate the softmax score distributions between in- and
out-of-distribution images, allowing for more effective detection. We show in a
series of experiments that ODIN is compatible with diverse network
architectures and datasets. It consistently outperforms the baseline approach
by a large margin, establishing a new state-of-the-art performance on this
task. For example, ODIN reduces the false positive rate from the baseline 34.7%
to 4.3% on the DenseNet (applied to CIFAR-10) when the true positive rate is
95%.
| 1 | 0 | 0 | 1 | 0 | 0 |
Two-fermion Bethe-Salpeter Equation in Minkowski Space: the Nakanishi Way | The possibility of solving the Bethe-Salpeter Equation in Minkowski space,
even for fermionic systems, is becoming a reality through the application of
well-known tools: i) the Nakanishi integral representation of the
Bethe-Salpeter amplitude and ii) the light-front projection onto the
null-plane. The theoretical background and some preliminary calculations are
illustrated, in order to show the potentiality and the wide range of
application of the method.
| 0 | 1 | 0 | 0 | 0 | 0 |
Fractional Laplacians on the sphere, the Minakshisundaram zeta function and semigroups | In this paper we show novel underlying connections between fractional powers
of the Laplacian on the unit sphere and functions from analytic number theory
and differential geometry, like the Hurwitz zeta function and the
Minakshisundaram zeta function. Inspired by Minakshisundaram's ideas, we find a
precise pointwise description of $(-\Delta_{\mathbb{S}^{n-1}})^s u(x)$ in terms
of fractional powers of the Dirichlet-to-Neumann map on the sphere. The Poisson
kernel for the unit ball will be essential for this part of the analysis. On
the other hand, by using the heat semigroup on the sphere, additional pointwise
integro-differential formulas are obtained. Finally, we prove a
characterization with a local extension problem and the interior Harnack
inequality.
| 0 | 0 | 1 | 0 | 0 | 0 |
Nef partitions for codimension 2 weighted complete intersections | We prove that a smooth well formed Fano weighted complete intersection of
codimension 2 has a nef partition. We discuss applications of this fact to
Mirror Symmetry. In particular we list all nef partitions for smooth well
formed Fano weighted complete intersections of dimensions 4 and 5 and present
weak Landau--Ginzburg models for them.
| 0 | 0 | 1 | 0 | 0 | 0 |
Integrating sentiment and social structure to determine preference alignments: The Irish Marriage Referendum | We examine the relationship between social structure and sentiment through
the analysis of a large collection of tweets about the Irish Marriage
Referendum of 2015. We obtain the sentiment of every tweet with the hashtags
#marref and #marriageref that was posted in the days leading to the referendum,
and construct networks to aggregate sentiment and use it to study the
interactions among users. Our results show that the sentiment of mention tweets
posted by users is correlated with the sentiment of received mentions, and
there are significantly more connections between users with similar sentiment
scores than among users with opposite scores in the mention and follower
networks. We combine the community structure of the two networks with the
activity level of the users and sentiment scores to find groups of users who
support voting `yes' or `no' in the referendum. There were numerous
conversations between users on opposing sides of the debate in the absence of
follower connections, which suggests that there were efforts by some users to
establish dialogue and debate across ideological divisions. Our analysis shows
that social structure can be integrated successfully with sentiment to analyse
and understand the disposition of social media users. These results have
potential applications in the integration of data and meta-data to study
opinion dynamics, public opinion modelling, and polling.
| 1 | 1 | 0 | 0 | 0 | 0 |
Tailoring symmetric metallic and magnetic edge states of nanoribbon in semiconductive monolayer PtS2 | Fabrication of metallic wires at the atomic scale remains challenging. In the
present work, a nanoribbon with two parallel, symmetric metallic and magnetic
edges was designed from semiconductive monolayer PtS2 by employing
first-principles calculations based on density functional theory. The edge
energy, bonding charge density, band structure and simulated STM images of
possible edge states of PtS2 were systematically studied. It was found that
Pt-terminated edge nanoribbons are the relatively stable metallic and magnetic
edges tailored from a noble transition-metal dichalcogenide, PtS2. The
nanoribbon with two atomic metallic wires may have promising applications as
nano power transmission lines, for which at least two lines are needed.
| 0 | 1 | 0 | 0 | 0 | 0 |
The Ramsey theory of the universal homogeneous triangle-free graph | The universal homogeneous triangle-free graph, constructed by Henson and
denoted $\mathcal{H}_3$, is the triangle-free analogue of the Rado graph. While
the Ramsey theory of the Rado graph has been completely established, beginning
with Erdős-Hajnal-Pósa and culminating in work of Sauer and
Laflamme-Sauer-Vuksanovic, the Ramsey theory of $\mathcal{H}_3$ had only
progressed to bounds for vertex colorings (Komjáth-Rödl) and edge
colorings (Sauer). This was due to a lack of broadscale techniques.
We solve this problem in general: For each finite triangle-free graph $G$,
there is a finite number $T(G)$ such that for any coloring of all copies of $G$
in $\mathcal{H}_3$ into finitely many colors, there is a subgraph of
$\mathcal{H}_3$ which is again universal homogeneous triangle-free in which the
coloring takes no more than $T(G)$ colors. This is the first such result for a
homogeneous structure omitting copies of some non-trivial finite structure. The
proof entails the development of new broadscale techniques, including a flexible
method for constructing trees which code $\mathcal{H}_3$ and the development of
their Ramsey theory.
| 0 | 0 | 1 | 0 | 0 | 0 |
Restriction of representations of metaplectic $GL_{2}(F)$ to tori | Let $F$ be a non-Archimedean local field. We study the restriction of an
irreducible admissible genuine representation of the two-fold metaplectic
cover $\widetilde{GL}_{2}(F)$ of $GL_{2}(F)$ to the inverse image in
$\widetilde{GL}_{2}(F)$ of a maximal torus in $GL_{2}(F)$.
| 0 | 0 | 1 | 0 | 0 | 0 |
Five-dimensional Perfect Simplices | Let $Q_n=[0,1]^n$ be the unit cube in ${\mathbb R}^n$, $n \in {\mathbb N}$.
For a nondegenerate simplex $S\subset{\mathbb R}^n$, consider the value
$\xi(S)=\min \{\sigma>0: Q_n\subset \sigma S\}$. Here $\sigma S$ is a
homothetic image of $S$ with homothety center at the center of gravity of $S$
and coefficient of homothety $\sigma$. Let us introduce the value $\xi_n=\min
\{\xi(S): S\subset Q_n\}$. We call $S$ a perfect simplex if $S\subset Q_n$ and
$Q_n$ is inscribed into the simplex $\xi_n S$. It is known that such simplices
exist for $n=1$ and $n=3$. The exact values of $\xi_n$ are known for $n=2$ and
in the case when there exists an Hadamard matrix of order $n+1$; in the latter
situation $\xi_n=n$. In this paper we show that $\xi_5=5$ and $\xi_9=9$. We
also describe infinite families of simplices $S\subset Q_n$ such that
$\xi(S)=\xi_n$ for $n=5,7,9$. The main result of the paper is the existence of
perfect simplices in ${\mathbb R}^5$.
Keywords: simplex, cube, homothety, axial diameter, Hadamard matrix
| 0 | 0 | 1 | 0 | 0 | 0 |
Quantum Fluctuations in Mesoscopic Systems | Recent experimental results point to the existence of coherent quantum
phenomena in systems made of a large number of particles, despite the fact that
for many-body systems the presence of decoherence is hardly negligible and
emerging classicality is expected. This behaviour hinges on collective
observables, named quantum fluctuations, that retain a quantum character even
in the thermodynamic limit: they provide useful tools for studying properties
of many-body systems at the mesoscopic level, in between the quantum
microscopic scale and the classical macroscopic one. We hereby present the
general theory of quantum fluctuations in mesoscopic systems and study their
dynamics in a quantum open system setting, taking into account the unavoidable
effects of dissipation and noise induced by the external environment. As in the
case of microscopic systems, decoherence is not always the only dominating
effect at the mesoscopic scale: certain types of environments can provide means
for entangling collective fluctuations through a purely noisy mechanism.
| 0 | 1 | 0 | 0 | 0 | 0 |
Big Data Model Simulation on a Graph Database for Surveillance in Wireless Multimedia Sensor Networks | Sensors are present in various forms all around the world such as mobile
phones, surveillance cameras, smart televisions, intelligent refrigerators and
blood pressure monitors. Usually, most of the sensors are a part of some other
system with similar sensors that compose a network. One such network is
composed of millions of sensors connected to the Internet, which is called the
Internet of Things (IoT). With the advances in wireless communication
technologies, multimedia sensors and their networks are expected to be major
components in IoT. Many studies have already been done on wireless multimedia
sensor networks in diverse domains like fire detection, city surveillance,
early warning systems, etc. All those applications position sensor nodes and
collect their data for a long time period with real-time data flow, which is
considered as big data. Big data may be structured or unstructured and needs to
be stored for further processing and analyzing. Analyzing multimedia big data
is a challenging task requiring a high-level modeling to efficiently extract
valuable information/knowledge from data. In this study, we propose a big
database model based on the graph database model for handling data generated by
wireless multimedia sensor networks. We introduce a simulator to generate
synthetic data and store and query big data using graph model as a big
database. For this purpose, we evaluate the well-known graph-based NoSQL
databases, Neo4j and OrientDB, and a relational database, MySQL. We have run a
number of query experiments on our implemented simulator to show which
database systems are efficient and scalable for surveillance in wireless
multimedia sensor networks.
| 1 | 0 | 0 | 0 | 0 | 0 |
EmbedJoin: Efficient Edit Similarity Joins via Embeddings | We study the problem of edit similarity joins, where given a set of strings
and a threshold value $K$, we want to output all pairs of strings whose edit
distances are at most $K$. Edit similarity join is a fundamental problem in
data cleaning/integration, bioinformatics, collaborative filtering and natural
language processing, and has been identified as a primitive operator for
database systems. This problem has been studied extensively in the literature.
However, we have observed that all the existing algorithms fall short on long
strings and large distance thresholds.
In this paper we propose an algorithm named EmbedJoin which scales very well
with string length and distance threshold. Our algorithm is built on the recent
advance of metric embeddings for edit distance, and is very different from all
of the previous approaches. We demonstrate via an extensive set of experiments
that EmbedJoin significantly outperforms the previous best algorithms on long
strings and large distance thresholds.
| 1 | 0 | 0 | 0 | 0 | 0 |
Construction of curve pairs and their applications | In this study, we introduce a new approach to curve pairs by using integral
curves. We consider the direction curve and donor curve to study curve couples
such as involute-evolute curves, Mannheim partner curves and Bertrand partner
curves. We obtain new methods to construct partner curves of a unit speed curve
and give some applications related to helices, slant helices and plane curves.
| 0 | 0 | 1 | 0 | 0 | 0 |
Solution of linear ill-posed problems by model selection and aggregation | We consider a general statistical linear inverse problem, where the solution
is represented via a known (possibly overcomplete) dictionary that allows its
sparse representation. We propose two different approaches. A model selection
estimator selects a single model by minimizing the penalized empirical risk
over all possible models. By contrast with direct problems, the penalty depends
on the model itself rather than on its size only as for complexity penalties. A
Q-aggregate estimator averages over the entire collection of estimators with
properly chosen weights. Under mild conditions on the dictionary, we establish
oracle inequalities both with high probability and in expectation for the two
estimators. Moreover, for the latter estimator these inequalities are sharp.
The proposed procedures are implemented numerically and their performance is
assessed by a simulation study.
| 0 | 0 | 1 | 0 | 0 | 0 |
Simulation studies for dielectric wakefield programme at CLARA facility | Short, high charge electron bunches can drive high magnitude electric fields
in dielectric lined structures. The interaction of the electron bunch with this
field has several applications including high gradient dielectric wakefield
acceleration (DWA) and passive beam manipulation. The simulations presented
provide a prelude to the commencement of an experimental DWA programme at the
CLARA accelerator at Daresbury Laboratory. The key goals of this programme are:
tunable generation of THz radiation, understanding of the impact of transverse
wakes, and design of a dechirper for the CLARA FEL. Computations of
longitudinal and transverse phase space evolution were made with Impact-T and
VSim to support these goals.
| 0 | 1 | 0 | 0 | 0 | 0 |
Long-range dynamical magnetic order and spin tunneling in the cooperative paramagnetic states of the pyrochlore analogous spinel antiferromagnets CdYb2X4 (X = S, Se) | Magnetic systems with spins sitting on a lattice of corner sharing regular
tetrahedra have been particularly prolific for the discovery of new magnetic
states for the last two decades. The pyrochlore compounds have offered the
playground for these studies, while comparatively little attention has been
devoted to other compounds where the rare earth R occupies the same
sub-lattice, e.g. the spinel chalcogenides CdR2X4 (X = S, Se). Here we report
measurements performed on powder samples of this series with R = Yb using
specific heat, magnetic susceptibility, neutron diffraction and
muon-spin-relaxation measurements. The two compounds are found to be
magnetically similar. They long-range order into structures described by the
\Gamma_5 irreducible representation. The magnitude of the magnetic moment at
low temperature is 0.77 (1) and 0.62 (1) mu_B for X = S and Se, respectively.
Persistent spin dynamics is present in the ordered states. The spontaneous
field at the muon site is anomalously small, suggesting magnetic moment
fragmentation. A double spin-flip tunneling relaxation mechanism is suggested
in the cooperative paramagnetic state up to 10 K. The magnetic space groups
into which magnetic moments of systems of corner-sharing regular tetrahedra
order are provided for a number of insulating compounds characterized by null
propagation wavevectors.
| 0 | 1 | 0 | 0 | 0 | 0 |
The Frobenius morphism in invariant theory | Let $R$ be the homogeneous coordinate ring of the Grassmannian
$\mathbb{G}=\operatorname{Gr}(2,n)$ defined over an algebraically closed field
of characteristic $p>0$. In this paper we give a completely characteristic free
description of the decomposition of $R$, considered as a graded $R^p$-module,
into indecomposables ("Frobenius summands"). As a corollary we obtain a similar
decomposition for the Frobenius pushforward of the structure sheaf of
$\mathbb{G}$ and we obtain in particular that this pushforward is almost never
a tilting bundle. On the other hand we show that $R$ provides a "noncommutative
resolution" for $R^p$ when $p\ge n-2$, generalizing a result known to be true
for toric varieties.
In both the invariant theory and the geometric setting we observe that if the
characteristic is not too small the Frobenius summands do not depend on the
characteristic in a suitable sense. In the geometric setting this is an
explicit version of a general result by Bezrukavnikov and Mirković on
Frobenius decompositions for partial flag varieties. We are hopeful that it is
an instance of a more general "$p$-uniformity" principle.
| 0 | 0 | 1 | 0 | 0 | 0 |
A General Probabilistic Approach for Quantitative Assessment of LES Combustion Models | The Wasserstein metric is introduced as a probabilistic method to enable
quantitative evaluations of LES combustion models. The Wasserstein metric can
directly be evaluated from scatter data or statistical results using
probabilistic reconstruction against experimental data. The method is derived
and generalized for turbulent reacting flows, and applied to validation tests
involving the Sydney piloted jet flame. It is shown that the Wasserstein metric
is an effective validation tool that extends to multiple scalar quantities,
providing an objective and quantitative evaluation of the effects of model
deficiencies and boundary conditions on simulation accuracy. Several test cases are
considered, beginning with a comparison of mixture-fraction results, and the
subsequent extension to reactive scalars, including temperature and species
mass fractions of \ce{CO} and \ce{CO2}. To demonstrate the versatility of the
proposed method in application to multiple datasets, the Wasserstein metric is
applied to a series of different simulations that were contributed to the
TNF-workshop. Analysis of the results allowed us to identify competing
contributions to model deviations, arising from uncertainties in the boundary
conditions and model deficiencies. These applications demonstrate that the
Wasserstein metric constitutes an easily applicable mathematical tool that
reduces multiscalar combustion data and large datasets to a scalar-valued
quantitative measure.
| 0 | 1 | 0 | 0 | 0 | 0 |
Sentiment Analysis by Joint Learning of Word Embeddings and Classifier | Word embeddings are representations of individual words of a text document in
a vector space and they are often useful for performing natural language
processing tasks. Current state of the art algorithms for learning word
embeddings learn vector representations from large corpora of text documents in
an unsupervised fashion. This paper introduces SWESA (Supervised Word
Embeddings for Sentiment Analysis), an algorithm for sentiment analysis via
word embeddings. SWESA leverages document label information to learn vector
representations of words from a modest corpus of text documents by solving an
optimization problem that minimizes a cost function with respect to both word
embeddings as well as classification accuracy. Analysis reveals that SWESA
provides an efficient way of estimating the dimension of the word embeddings
that are to be learned. Experiments on several real world data sets show that
SWESA has superior performance when compared to previously suggested
approaches to word embeddings and sentiment analysis tasks.
| 1 | 0 | 0 | 1 | 0 | 0 |
Normal-state Properties of a Unitary Bose-Fermi Mixture: A Combined Strong-coupling Approach with Universal Thermodynamics | We theoretically investigate normal-state properties of a unitary Bose-Fermi
mixture. Including strong hetero-pairing fluctuations, we evaluate the Bose and
Fermi chemical potential, internal energy, pressure, entropy, as well as
specific heat at constant volume $C_V$, within the framework of a combined
strong-coupling theory with exact thermodynamic identities. We show that
hetero-pairing fluctuations at the unitarity cause non-monotonic temperature
dependence of $C_V$, being qualitatively different from the monotonic behavior
of this quantity in the weak- and strong-coupling limit. On the other hand,
such an anomalous behavior is not seen in the other quantities. Our results
indicate that the specific heat $C_V$, which has recently become observable in
cold atom physics, is a useful quantity for understanding strong-coupling
aspects of this quantum system.
| 0 | 1 | 0 | 0 | 0 | 0 |
A glass-box interactive machine learning approach for solving NP-hard problems with the human-in-the-loop | The goal of Machine Learning is to automatically learn from data, extract
knowledge and to make decisions without any human intervention. Such automatic
(aML) approaches show impressive success. Recent results even demonstrate
intriguingly that deep learning applied for automatic classification of skin
lesions is on par with the performance of dermatologists, yet outperforms the
average. As human perception is inherently limited, such approaches can
discover patterns, e.g. that two objects are similar, in arbitrarily
high-dimensional spaces, which no human is able to do. Humans can deal only with
limited amounts of data, whilst big data is beneficial for aML; however, in
health informatics, we are often confronted with a small number of data sets,
where aML suffers from insufficient training samples and many problems are
computationally hard. Here, interactive machine learning (iML) may be of help,
where a human-in-the-loop contributes to reduce the complexity of NP-hard
problems. A further motivation for iML is that standard black-box approaches
lack transparency, hence do not foster trust and acceptance of ML among
end-users. Rising legal and privacy aspects, e.g. with the new European General
Data Protection Regulations, make black-box approaches difficult to use,
because they often are not able to explain why a decision has been made. In
this paper, we present some experiments to demonstrate the effectiveness of the
human-in-the-loop approach, particularly in opening the black-box to a
glass-box and thus enabling a human to interact directly with a learning
algorithm. We selected the Ant Colony Optimization framework, and applied it on
the Traveling Salesman Problem, which is a good example, due to its relevance
for health informatics, e.g. for the study of protein folding. From studies of
how humans extract so much from so little data, fundamental ML-research also
may benefit.
| 1 | 0 | 0 | 1 | 0 | 0 |
Verification Studies for the Noh Problem using Non-ideal Equations of State and Finite Strength Shocks | The Noh verification test problem is extended beyond the commonly studied
ideal gamma-law gas to more realistic equations of state (EOSs) including the
stiff gas, the Noble-Abel gas, and the Carnahan-Starling EOS for hard-sphere
fluids. Self-similarity methods are used to solve the Euler compressible flow
equations, which in combination with the Rankine-Hugoniot jump conditions
provide a tractable general solution. This solution can be applied to fluids
with EOSs that meet criteria such as convexity and the existence of a
corresponding bulk modulus. For the planar case the solution can be applied to
shocks of arbitrary strength, but for cylindrical and spherical geometries it
is required that the analysis be restricted to strong shocks. The exact
solutions are used to perform a variety of quantitative code verification
studies of the Los Alamos National Laboratory Lagrangian hydrocode FLAG.
| 1 | 1 | 0 | 0 | 0 | 0 |
Motivic zeta functions and infinite cyclic covers | We associate with an infinite cyclic cover of a punctured neighborhood of a
simple normal crossing divisor on a complex quasi-projective manifold (assuming
certain finiteness conditions are satisfied) a rational function in $K_0({\rm
Var}^{\hat \mu}_{\mathbb{C}})[\mathbb{L}^{-1}]$, which we call {\it motivic
infinite cyclic zeta function}, and show its birational invariance. Our
construction is a natural extension of the notion of {\it motivic infinite
cyclic covers} introduced by the authors, and as such, it generalizes the
Denef-Loeser motivic Milnor zeta function of a complex hypersurface singularity
germ.
| 0 | 0 | 1 | 0 | 0 | 0 |