title | abstract | cs | phy | math | stat | quantitative biology | quantitative finance |
---|---|---|---|---|---|---|---|
Statistical Inference with Local Optima | We study the statistical properties of an estimator derived by applying a
gradient ascent method with multiple initializations to a multi-modal
likelihood function. We derive the population quantity that is the target of
this estimator and study the properties of confidence intervals (CIs)
constructed from asymptotic normality and the bootstrap approach. In
particular, we analyze the coverage deficiency due to the finite number of random
initializations. We also investigate the CIs by inverting the likelihood ratio
test, the score test, and the Wald test, and we show that the resulting CIs may
be very different. We provide a summary of the uncertainties that we need to
consider while making inference about the population. Note that we do not
provide a solution to the problem of multiple local maxima; instead, our goal
is to investigate the effect of local maxima on the behavior of our
estimator. In addition, we analyze the performance of the EM algorithm under
random initializations and derive the coverage of a CI with a finite number of
initializations. Finally, we extend our analysis to a nonparametric mode
hunting problem.
| 0 | 0 | 0 | 1 | 0 | 0 |
Discovery of Shifting Patterns in Sequence Classification | In this paper, we investigate the multi-variate sequence classification
problem from a multi-instance learning perspective. Real-world sequential data
commonly show discriminative patterns only at specific time periods. For
instance, we can identify a cropland during its growing season, but it looks
similar to a barren land after harvest or before planting. Moreover, even within
the same class, the discriminative patterns can appear in different periods of
the sequential data. Owing to this property, these discriminative patterns are also
referred to as shifting patterns. The shifting patterns in sequential data
severely degrade the performance of traditional classification methods without
sufficient training data.
We propose a novel sequence classification method by automatically mining
shifting patterns from multi-variate sequences. The method employs a
multi-instance learning approach to detect shifting patterns while also
modeling temporal relationships within each multi-instance bag by an LSTM model
to further improve the classification performance. We extensively evaluate our
method on two real-world applications - cropland mapping and affective state
recognition. The experiments demonstrate the superiority of our proposed method
in sequence classification performance and in detecting discriminative shifting
patterns.
| 1 | 0 | 0 | 1 | 0 | 0 |
Virtual Network Migration on the GENI Wide-Area SDN-Enabled Infrastructure | A virtual network (VN) contains a collection of virtual nodes and links
assigned to underlying physical resources in a network substrate. VN migration
is the process of remapping a VN's logical topology to a new set of physical
resources to provide failure recovery, energy savings, or defense against
attack. Providing VN migration that is transparent to running applications is a
significant challenge. Efficient migration mechanisms are highly dependent on
the technology deployed in the physical substrate. Prior work has considered
migration in data centers and in the PlanetLab infrastructure. However, there
has been little effort targeting an SDN-enabled wide-area networking
environment - an important building block of future networking infrastructure.
In this work, we are interested in the design, implementation and evaluation of
VN migration in GENI as a working example of such a future network. We identify
and propose techniques to address key challenges: the dynamic allocation of
resources during migration, managing hosts connected to the VN, and flow table
migration sequences to minimize packet loss. We find that GENI's virtualization
architecture makes transparent and efficient migration challenging. We suggest
alternatives that might be adopted in GENI and are worthy of adoption by
virtual network providers to facilitate migration.
| 1 | 0 | 0 | 0 | 0 | 0 |
Communication-efficient Algorithm for Distributed Sparse Learning via Two-way Truncation | We propose a communication- and computation-efficient algorithm for
high-dimensional distributed sparse learning. At each iteration, local machines
compute the gradient on local data and the master machine solves one shifted
$l_1$ regularized minimization problem. Via the Two-way Truncation procedure, the
communication cost is reduced from a constant times the dimension, as in the
state-of-the-art algorithm, to a constant times the sparsity level.
Theoretically, we prove that the estimation error of the proposed algorithm
decreases exponentially and matches that of the centralized method under mild
assumptions. Extensive experiments on both simulated data and real data verify
that the proposed algorithm is efficient and has performance comparable with
the centralized method on solving high-dimensional sparse learning problems.
| 0 | 0 | 0 | 1 | 0 | 0 |
A Bayesian Estimation for the Fractional Order of the Differential Equation that Models Transport in Unconventional Hydrocarbon Reservoirs | The extraction of natural gas from the earth has been shown to be governed by
differential equations concerning flow through a porous material. Recently,
models such as fractional differential equations have been developed to model
this phenomenon. One key issue with these models is estimating the fractional
order of the differential equation. Traditional methods such as maximum likelihood,
least squares, and even the method of moments are not available to estimate this
parameter, as traditional calculus methods do not apply. We develop a Bayesian
approach to estimate the fractional order of the differential equation
that models transport in unconventional hydrocarbon reservoirs. In this paper,
we use this approach to adequately quantify the uncertainties associated with
the error and predictions. A simulation study is presented as well to assess
the utility of the modeling approach.
| 0 | 0 | 0 | 1 | 0 | 0 |
Numerical simulations of magnetic billiards in a convex domain in $\mathbb{R}^2$ | We present numerical simulations of magnetic billiards inside a convex domain
in the plane.
| 0 | 1 | 1 | 0 | 0 | 0 |
On Lasso refitting strategies | A well-known drawback of l_1-penalized estimators is the systematic shrinkage
of the large coefficients towards zero. A simple remedy is to treat Lasso as a
model-selection procedure and to perform a second refitting step on the
selected support. In this work we formalize the notion of refitting and provide
oracle bounds for arbitrary refitting procedures of the Lasso solution. One of
the most widely used refitting techniques, based on Least-Squares, may
cause a problem of interpretability, since the signs of the refitted estimator
might be flipped with respect to the original estimator. This problem arises
from the fact that Least-Squares refitting considers only the support of
the Lasso solution, discarding any information about signs or amplitudes. To
address this, we define sign-consistent refitting as an arbitrary refitting procedure
that preserves the signs of the first-step Lasso solution, and we provide oracle
inequalities for such estimators. Finally, we consider special refitting
strategies: Bregman Lasso and Boosted Lasso. Bregman Lasso has the useful
property of converging to the Sign-Least-Squares refitting (Least-Squares with
sign constraints), which provides greater interpretability. We
additionally study the Bregman Lasso refitting in the case of orthogonal
design, providing simple intuition behind the proposed method. Boosted
Lasso, in contrast, takes into account the magnitudes of the first-step Lasso
solution and allows one to develop better oracle rates for prediction. Finally, we
conduct an extensive numerical study to show the advantages of one approach over
others in different synthetic and semi-real scenarios.
| 0 | 0 | 1 | 1 | 0 | 0 |
Supercharacters and the discrete Fourier, cosine, and sine transforms | Using supercharacter theory, we identify the matrices that are diagonalized
by the discrete cosine and discrete sine transforms, respectively. Our method
affords a combinatorial interpretation for the matrix entries.
| 0 | 0 | 1 | 0 | 0 | 0 |
A connection between MAX $κ$-CUT and the inhomogeneous Potts spin glass in the large degree limit | We study the asymptotic behavior of the Max $\kappa$-cut on a family of
sparse, inhomogeneous random graphs. In the large degree limit, the leading
term is a variational problem, involving the ground state of a constrained
inhomogeneous Potts spin glass. We derive a Parisi type formula for the free
energy of this model, with possible constraints on the proportions, and derive
the limiting ground state energy by a suitable zero temperature limit.
| 0 | 0 | 1 | 0 | 0 | 0 |
Kernel method for persistence diagrams via kernel embedding and weight factor | Topological data analysis is an emerging mathematical concept for
characterizing shapes in multi-scale data. In this field, persistence diagrams
are widely used as a descriptor of the input data, and can distinguish robust
and noisy topological properties. It is now highly desirable to develop a
statistical framework on persistence diagrams to deal with practical data. This
paper proposes a kernel method on persistence diagrams. A theoretical
contribution of our method is that the proposed kernel allows one to control
the effect of persistence, and, if necessary, noisy topological properties can
be discounted in data analysis. Furthermore, the method provides a fast
approximation technique. The method is applied to several problems, including
practical data in physics, and the results show its advantage compared to the
existing kernel method on persistence diagrams.
| 0 | 0 | 1 | 1 | 0 | 0 |
Time Complexity of Constraint Satisfaction via Universal Algebra | The exponential-time hypothesis (ETH) states that 3-SAT is not solvable in
subexponential time, i.e. not solvable in O(c^n) time for arbitrary c > 1,
where n denotes the number of variables. Problems like k-SAT can be viewed as
special cases of the constraint satisfaction problem (CSP), which is the
problem of determining whether a set of constraints is satisfiable. In this
paper we study the worst-case time complexity of NP-complete CSPs. Our main
interest is in the CSP problem parameterized by a constraint language Gamma
(CSP(Gamma)), and how the choice of Gamma affects the time complexity. It is
believed that CSP(Gamma) is either tractable or NP-complete, and the algebraic
CSP dichotomy conjecture gives a sharp delineation of these two classes based
on algebraic properties of constraint languages. Under this conjecture and the
ETH, we first rule out the existence of subexponential algorithms for
finite-domain NP-complete CSP(Gamma) problems. This result also extends to
certain infinite-domain CSPs and structurally restricted CSP(Gamma) problems.
We then begin a study of the complexity of NP-complete CSPs where one is
allowed to arbitrarily restrict the values of individual variables, which is a
very well-studied subclass of CSPs. For such CSPs with finite domain D, we
identify a relation SD such that (1) CSP({SD}) is NP-complete and (2) if
CSP(Gamma) over D is NP-complete and solvable in O(c^n) time, then CSP({SD}) is
solvable in O(c^n) time, too. Hence, the time complexity of CSP({SD}) is a
lower bound for all CSPs of this particular kind. We also prove that the
complexity of CSP({SD}) is decreasing when |D| increases, unless the ETH is
false. This implies, for instance, that for every c>1 there exists a
finite-domain Gamma such that CSP(Gamma) is NP-complete and solvable in O(c^n)
time.
| 1 | 0 | 0 | 0 | 0 | 0 |
Novel Structured Low-rank algorithm to recover spatially smooth exponential image time series | We propose a structured low rank matrix completion algorithm to recover a
time series of images consisting of linear combination of exponential
parameters at every pixel, from under-sampled Fourier measurements. The spatial
smoothness of these parameters is exploited along with the exponential
structure of the time series at every pixel, to derive an annihilation relation
in the $k-t$ domain. This annihilation relation translates into a structured
low rank matrix formed from the $k-t$ samples. We demonstrate the algorithm in
the parameter mapping setting and show significant improvement over
state-of-the-art methods.
| 1 | 0 | 0 | 0 | 0 | 0 |
Vector bundles and modular forms for Fuchsian groups of genus zero | This article lays the foundations for the study of modular forms transforming
with respect to representations of Fuchsian groups of genus zero. More
precisely, we define geometrically weighted graded modules of such modular
forms, where the graded structure comes from twisting with all isomorphism
classes of line bundles on the corresponding compactified modular curve, and we
study their structure by relating it to the structure of vector bundles over
orbifold curves of genus zero. We prove that these modules are free whenever
the Fuchsian group has at most two elliptic points. For three or more elliptic
points, we give explicit constructions of indecomposable vector bundles of rank
two over modular orbifold curves, which give rise to non-free modules of
geometrically weighted modular forms.
| 0 | 0 | 1 | 0 | 0 | 0 |
Indoor UAV scheduling with Restful Task Assignment Algorithm | Research in UAV scheduling has attracted emerging interest from scientists
in the optimization field. While scheduling itself has had strong roots
since the 19th century, work on UAV scheduling in indoor environments has
come forth only in the last decade. Several works on scheduling UAV operations in
indoor (two- and three-dimensional) and outdoor environments have been reported.
this paper, a further study on UAV scheduling in three dimensional indoor
environment is investigated. Dealing with indoor environment\textemdash where
humans, UAVs, and other elements or infrastructures are likely to coexist in
the same space\textemdash draws attention towards the safety of the operations.
With respect to the battery level, a preserved battery level leads to safer
operations, ensuring that the UAV retains an adequate power reserve. A
methodology which consists of a heuristic approach based on Restful Task
Assignment Algorithm, incorporated with Particle Swarm Optimization Algorithm,
is proposed. The motivation is to preserve the battery level throughout the
operations, which reduces the likelihood of UAV failures on duty. This
methodology is tested with 54 benchmark datasets stressing four different
aspects: geographical distance, number of tasks, number of predecessors, and
slack time. The test results and their characteristics in regard to the
proposed methodology are discussed and presented.
| 1 | 0 | 0 | 0 | 0 | 0 |
Magnetic behavior of new compounds, Gd3RuSn6 and Tb3RuSn6 | We report the temperature (T) dependence of dc magnetization, electrical
resistivity (rho(T)), and heat-capacity of rare-earth (R) compounds, Gd3RuSn6
and Tb3RuSn6, which are found to crystallize in the Yb3CoSn6-type orthorhombic
structure (space group: Cmcm). The results establish that there is an onset of
antiferromagnetic order near T_N = 19 and 25 K, respectively. In addition, we
find that there is another magnetic transition in both cases, around 14 and
17 K, respectively. In the case of the Gd compound, the spin-scattering
contribution to rho is found to increase below 75 K as the material is cooled
towards T_N, thereby resulting in a minimum in the plot of rho(T), which is
unexpected for Gd-based systems. Isothermal magnetization at 1.8 K reveals an upward
curvature around 50 kOe. Isothermal magnetoresistance plots show interesting
anomalies in the magnetically ordered state. There are sign reversals in the
plot of isothermal entropy change versus T in the magnetically ordered state,
indicating subtle changes in the spin reorientation with T. The results reveal
that these compounds exhibit interesting magnetic properties.
| 0 | 1 | 0 | 0 | 0 | 0 |
Stochastic Assume-Guarantee Contracts for Cyber-Physical System Design Under Probabilistic Requirements | We develop an assume-guarantee contract framework for the design of
cyber-physical systems, modeled as closed-loop control systems, under
probabilistic requirements. We use a variant of signal temporal logic, namely,
Stochastic Signal Temporal Logic (StSTL) to specify system behaviors as well as
contract assumptions and guarantees, thus enabling automatic reasoning about
requirements of stochastic systems. Given a stochastic linear system
representation and a set of requirements captured by bounded StSTL contracts,
we propose algorithms that can check contract compatibility, consistency, and
refinement, and generate a controller to guarantee that a contract is
satisfied, following a stochastic model predictive control approach. Our
algorithms leverage encodings of the verification and control synthesis tasks
into mixed integer optimization problems, and conservative approximations of
probabilistic constraints that produce both sound and tractable problem
formulations. We illustrate the effectiveness of our approach on a few
examples, including the design of embedded controllers for aircraft power
distribution networks.
| 1 | 0 | 0 | 0 | 0 | 0 |
Ergodic Exploration of Distributed Information | This paper presents an active search trajectory synthesis technique for
autonomous mobile robots with nonlinear measurements and dynamics. The
presented approach uses the ergodicity of a planned trajectory with respect to
an expected information density map to close the loop during search. The
ergodic control algorithm does not rely on discretization of the search or
action spaces, and is well posed for coverage with respect to the expected
information density whether the information is diffuse or localized, thus
trading off between exploration and exploitation in a single objective
function. As a demonstration, we use a robotic electrolocation platform to
estimate location and size parameters describing static targets in an
underwater environment. Our results demonstrate that the ergodic exploration of
distributed information (EEDI) algorithm outperforms commonly used
information-oriented controllers, particularly when distractions are present.
| 1 | 0 | 0 | 0 | 0 | 0 |
Manipulating magnetism by ultrafast control of the exchange interaction | In recent years, the optical control of exchange interactions has emerged as
an exciting new direction in the study of the ultrafast optical control of
magnetic order. Here we review recent theoretical works on antiferromagnetic
systems, devoted to i) simulating the ultrafast control of exchange
interactions, ii) modeling the strongly nonequilibrium response of the magnetic
order and iii) the relation with relevant experimental works developed in
parallel. In addition to the excitation of spin precession, we discuss examples
of rapid cooling and the control of ultrafast coherent longitudinal spin
dynamics in response to femtosecond optically induced perturbations of exchange
interactions. These elucidate the potential for exploiting the control of
exchange interactions to find new scenarios for both faster and more
energy-efficient manipulation of magnetism.
| 0 | 1 | 0 | 0 | 0 | 0 |
Towards Visual Explanations for Convolutional Neural Networks via Input Resampling | The predictive power of neural networks often comes at the cost of model interpretability.
Several techniques have been developed for explaining model outputs in terms of
input features; however, it is difficult to translate such interpretations into
actionable insight. Here, we propose a framework to analyze predictions in
terms of the model's internal features by inspecting information flow through
the network. Given a trained network and a test image, we select neurons by two
metrics, both measured over a set of images created by perturbations to the
input image: (1) magnitude of the correlation between the neuron activation and
the network output and (2) precision of the neuron activation. We show that the
former metric selects neurons that exert large influence over the network
output while the latter metric selects neurons that activate on generalizable
features. By comparing the sets of neurons selected by these two metrics, our
framework suggests a way to investigate the internal attention mechanisms of
convolutional neural networks.
| 1 | 0 | 0 | 1 | 0 | 0 |
FBG-Based Control of a Continuum Manipulator Interacting With Obstacles | Tracking and controlling the shape of continuum dexterous manipulators (CDM)
in constrained environments is a challenging task. The imposed constraints and
interaction with unknown obstacles may alter the CDM's shape and therefore
demand shape-sensing methods that do not rely on a direct line of sight. To
address these issues, we integrate a novel Fiber Bragg Grating (FBG) shape
sensing unit into a CDM, reconstruct the shape in real-time, and develop an
optimization-based control algorithm using FBG tip position feedback. The CDM
is designed for less-invasive treatment of osteolysis (bone degradation). To
evaluate the performance of the feedback control algorithm when the CDM
interacts with obstacles, we perform a set of experiments similar to the real
scenario of the CDM interaction with soft and hard lesions during the treatment
of osteolysis. In addition, we propose methods for identification of the CDM
collisions with soft or hard obstacles using the Jacobian information. Results
demonstrate successful control of the CDM tip based on the FBG feedback and
indicate repeatability and robustness of the proposed method when interacting
with unknown obstacles.
| 1 | 0 | 0 | 0 | 0 | 0 |
Generalized variational inequalities for maximal monotone operators | In this paper we present some new results on the existence of solutions of
generalized variational inequalities in real reflexive Banach spaces with
Fréchet differentiable norms. Moreover, we also give some theorems about the
structure of solution sets. The results obtained in this paper improve and
extend the ones announced by Fang and Peterson [1] to infinite-dimensional
spaces.
| 0 | 0 | 1 | 0 | 0 | 0 |
Chord Label Personalization through Deep Learning of Integrated Harmonic Interval-based Representations | The increasing accuracy of automatic chord estimation systems, the
availability of vast amounts of heterogeneous reference annotations, and
insights from annotator subjectivity research make chord label personalization
increasingly important. Nevertheless, automatic chord estimation systems are
historically exclusively trained and evaluated on a single reference
annotation. We introduce a first approach to automatic chord label
personalization by modeling subjectivity through deep learning of a harmonic
interval-based chord label representation. After integrating these
representations from multiple annotators, we can accurately personalize chord
labels for individual annotators from a single model and the annotators' chord
label vocabulary. Furthermore, we show that chord personalization using
multiple reference annotations outperforms using a single reference annotation.
| 1 | 0 | 0 | 0 | 0 | 0 |
Classification of Questions and Learning Outcome Statements (LOS) Into Blooms Taxonomy (BT) By Similarity Measurements Towards Extracting Of Learning Outcome from Learning Material | Bloom's Taxonomy (BT) has been used to classify the objectives of learning
outcomes by dividing learning into three domains: the cognitive
domain, the affective domain, and the psychomotor domain. In this paper, we
introduce a new approach to classify questions and learning outcome
statements (LOS) into Bloom's taxonomy (BT) and to verify the BT verb lists that
are cited and used by academicians to write questions and LOS. An
experiment was designed to investigate the semantic relationship between the
action verbs used in both questions and LOS to obtain more accurate
classification of the levels of BT. A sample of 775 different action verbs
collected from different universities allows us to measure an accurate and
clear-cut cognitive level for each action verb. It is worth mentioning that
natural language processing techniques were used to develop rules that parse
the questions into chunks in order to extract the action verbs. Our
proposed solution was able to classify the action verb into a precise level of
the cognitive domain. We tested and evaluated our proposed
solution using a confusion matrix. The evaluation tests yielded 97%
for the macro average of precision and 90% for F1. Thus, the outcome of the
research suggests that it is crucial to analyse and verify the action verbs
cited and used by academicians to write LOS, and to classify their questions based
on Bloom's taxonomy in order to obtain a more definite and accurate
classification.
| 1 | 0 | 0 | 0 | 0 | 0 |
Symmetric Rank Covariances: a Generalised Framework for Nonparametric Measures of Dependence | The need to test whether two random vectors are independent has spawned a
large number of competing measures of dependence. We are interested in
nonparametric measures that are invariant under strictly increasing
transformations, such as Kendall's tau, Hoeffding's D, and the more recently
discovered Bergsma--Dassios sign covariance. Each of these measures exhibits
symmetries that are not readily apparent from their definitions. Making these
symmetries explicit, we define a new class of multivariate nonparametric
measures of dependence that we refer to as Symmetric Rank Covariances. This new
class generalises all of the above measures and leads naturally to multivariate
extensions of the Bergsma--Dassios sign covariance. Symmetric Rank Covariances
may be estimated unbiasedly using U-statistics for which we prove results on
computational efficiency and large-sample behavior. The algorithms we develop
for their computation include, to the best of our knowledge, the first
efficient algorithms for the well-known Hoeffding's D statistic in the
multivariate setting.
| 0 | 0 | 1 | 1 | 0 | 0 |
Linear magnetoresistance in the charge density wave state of quasi-two-dimensional rare-earth tritellurides | We report measurements of the magnetoresistance in the charge density wave
(CDW) state of rare-earth tritellurides, namely TbTe$_3$ and HoTe$_3$. The
magnetic field dependence of magnetoresistance exhibits a temperature dependent
crossover between a conventional quadratic law at high $T$ and low $B$ and an
unusual linear dependence at low $T$ and high $B$. We present a quite general
model to explain the linear magnetoresistance taking into account the strong
scattering of quasiparticles on CDW fluctuations in the vicinity of "hot spots"
of the Fermi surface (FS) where the FS reconstruction is the strongest.
| 0 | 1 | 0 | 0 | 0 | 0 |
Two classes of number fields with a non-principal Euclidean ideal | This paper introduces two classes of totally real quartic number fields, one
of biquadratic extensions and one of cyclic extensions, each of which has a
non-principal Euclidean ideal. It generalizes techniques of Graves used to
prove that the number field $\mathbb{Q}(\sqrt{2},\sqrt{35})$ has a
non-principal Euclidean ideal.
| 0 | 0 | 1 | 0 | 0 | 0 |
Diamond-colored distributive lattices, move-minimizing games, and fundamental Weyl symmetric functions: The type $\mathsf{A}$ case | We present some elementary but foundational results concerning
diamond-colored modular and distributive lattices and connect these structures
to certain one-player combinatorial "move-minimizing games," in particular, a
so-called "domino game." The objective of this game is to find, if possible,
the least number of "domino moves" to get from one partition to another, where
a domino move is, with one exception, the addition or removal of a
domino-shaped pair of tiles. We solve this domino game by demonstrating the
somewhat surprising fact that the associated "game graphs" coincide with a
well-known family of diamond-colored distributive lattices which shall be
referred to as the "type $\mathsf{A}$ fundamental lattices." These lattices
arise as supporting graphs for the fundamental representations of the special
linear Lie algebras and as splitting posets for type $\mathsf{A}$ fundamental
symmetric functions, connections which are further explored in sequel papers
for types $\mathsf{A}$, $\mathsf{C}$, and $\mathsf{B}$. In this paper, this
connection affords a solution to the proposed domino game as well as new
descriptions of the type $\mathsf{A}$ fundamental lattices.
| 0 | 0 | 1 | 0 | 0 | 0 |
Hidden Truncation Hyperbolic Distributions, Finite Mixtures Thereof, and Their Application for Clustering | A hidden truncation hyperbolic (HTH) distribution is introduced and finite
mixtures thereof are applied for clustering. A stochastic representation of the
HTH distribution is given and a density is derived. A hierarchical
representation is described, which aids in parameter estimation. Finite
mixtures of HTH distributions are presented and their identifiability is
proved. The convexity of the HTH distribution is discussed, which is important
in clustering applications, and some theoretical results in this direction are
presented. The relationship between the HTH distribution and other skewed
distributions in the literature is discussed. Illustrations are provided,
both of the HTH distribution and of the application of finite mixtures thereof for
clustering.
| 0 | 0 | 0 | 1 | 0 | 0 |
No evidence for a significant AGN contribution to cosmic hydrogen reionization | We reinvestigate a claimed sample of 22 X-ray detected active galactic nuclei
(AGN) at redshifts z > 4, which has reignited the debate as to whether young
galaxies or AGN reionized the Universe. These sources lie within the
GOODS-S/CANDELS field, and we examine both the robustness of the claimed X-ray
detections (within the Chandra 4Ms imaging) and perform an independent analysis
of the photometric redshifts of the optical/infrared counterparts. We confirm
the reality of only 15 of the 22 reported X-ray detections, and moreover find
that only 12 of the 22 optical/infrared counterpart galaxies actually lie
robustly at z > 4. Combining these results we find convincing evidence for only
7 X-ray AGN at z > 4 in the GOODS-S field, of which only one lies at z > 5. We
recalculate the evolving far-UV (1500 Angstrom) luminosity density produced by
AGN at high redshift, and find that it declines rapidly from z = 4 to z = 6, in
agreement with several other recent studies of the evolving AGN luminosity
function. The associated rapid decline in inferred hydrogen-ionizing emissivity
contributed by AGN falls an order-of-magnitude short of the level required to
maintain hydrogen ionization at z ~ 6. We conclude that all available evidence
continues to favour a scenario in which young galaxies reionized the Universe,
with AGN making, at most, a very minor contribution to cosmic hydrogen
reionization.
| 0 | 1 | 0 | 0 | 0 | 0 |
Finding Crash-Consistency Bugs with Bounded Black-Box Crash Testing | We present a new approach to testing file-system crash consistency: bounded
black-box crash testing (B3). B3 tests the file system in a black-box manner
using workloads of file-system operations. Since the space of possible
workloads is infinite, B3 bounds this space based on parameters such as the
number of file-system operations or which operations to include, and
exhaustively generates workloads within this bounded space. Each workload is
tested on the target file system by simulating power-loss crashes while the
workload is being executed, and checking if the file system recovers to a
correct state after each crash. B3 builds upon insights derived from our study
of crash-consistency bugs reported in Linux file systems in the last five
years. We observed that most reported bugs can be reproduced using small
workloads of three or fewer file-system operations on a newly-created file
system, and that all reported bugs result from crashes after fsync()-related
system calls. We build two tools, CrashMonkey and ACE, to demonstrate the
effectiveness of this approach. Our tools are able to find 24 out of the 26
crash-consistency bugs reported in the last five years. Our tools also revealed
10 new crash-consistency bugs in widely-used, mature Linux file systems, seven
of which existed in the kernel since 2014. Our tools also found a
crash-consistency bug in a verified file system, FSCQ. The new bugs result in
severe consequences like broken rename atomicity and loss of persisted files.
| 1 | 0 | 0 | 0 | 0 | 0 |
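The bounded exhaustive generation described in the abstract above can be sketched as follows; the operation vocabulary and the bound of three are illustrative assumptions, not ACE's actual configuration.

```python
# Illustrative sketch of bounded workload enumeration in the spirit of B3.
# The operation set OPS and the bound of 3 are assumptions for illustration,
# not the actual parameters used by the ACE tool.
from itertools import product

OPS = ["creat", "write", "rename", "link", "fsync"]

def generate_workloads(max_ops):
    """Exhaustively enumerate every sequence of 1..max_ops operations."""
    workloads = []
    for length in range(1, max_ops + 1):
        workloads.extend(product(OPS, repeat=length))
    return workloads

# With 5 operations and at most 3 per workload, the bounded space holds
# 5 + 5**2 + 5**3 = 155 workloads, each to be crash-tested in turn.
workloads = generate_workloads(3)
```

Each enumerated workload would then be replayed while simulating power-loss crashes, with the recovered file-system state checked for correctness after each crash.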
Periodic solutions to the Cahn-Hilliard equation in the plane | In this paper we construct entire solutions to the Cahn-Hilliard equation
$-\Delta(-\Delta u+W'(u))+W''(u)(-\Delta u+W'(u))=0$ in the Euclidean
plane, where $W(u)$ is the standard double-well potential $\frac{1}{4}
(1-u^2)^2$. Such solutions have a non-trivial profile that shadows a Willmore
planar curve, and converge uniformly to $\pm 1$ as $x_2 \to \pm \infty$. These
solutions give a counterexample to the counterpart of Gibbons' conjecture for
the fourth-order counterpart of the Allen-Cahn equation. We also study the
$x_2$-derivative of these solutions using the special structure of Willmore's
equation.
| 0 | 0 | 1 | 0 | 0 | 0 |
On the kinetic equation in Zakharov's wave turbulence theory for capillary waves | The wave turbulence equation is an effective kinetic equation that describes
the dynamics of wave spectrum in weakly nonlinear and dispersive media. Such a
kinetic model has been derived by physicists in the sixties, though the
well-posedness theory remains open, due to the complexity of resonant
interaction kernels. In this paper, we provide a global unique radial strong
solution, the first such result, to the wave turbulence equation for
capillary waves.
| 0 | 1 | 1 | 0 | 0 | 0 |
Regularizing Model Complexity and Label Structure for Multi-Label Text Classification | Multi-label text classification is a popular machine learning task where each
document is assigned multiple relevant labels. This task is challenging
due to high dimensional features and correlated labels. Multi-label text
classifiers need to be carefully regularized to prevent the severe over-fitting
in the high dimensional space, and also need to take into account label
dependencies in order to make accurate predictions under uncertainty. We
demonstrate significant and practical improvement by carefully regularizing the
model complexity during training phase, and also regularizing the label search
space during prediction phase. Specifically, we regularize the classifier
training using Elastic-net (L1+L2) penalty for reducing model complexity/size,
and employ early stopping to prevent overfitting. At prediction time, we apply
support inference to restrict the search space to label sets encountered in the
training set, and F-optimizer GFM to make optimal predictions for the F1
metric. We show that although support inference only provides density
estimations on existing label combinations, when combined with GFM predictor,
the algorithm can output unseen label combinations. Taken collectively, our
experiments show state-of-the-art results on many benchmark datasets. Beyond
performance and practical contributions, we make some interesting observations.
Contrary to prior belief, which regards support inference as purely an
approximate inference procedure, we show that support inference acts as a
strong regularizer on the label prediction structure. It allows the classifier
to take into account label dependencies during prediction even if the
classifiers had not modeled any label dependencies during training.
| 1 | 0 | 0 | 1 | 0 | 0 |
Network Design with Probabilistic Capacities | We consider a network design problem with random arc capacities and give a
formulation with a probabilistic capacity constraint on each cut of the
network. To handle the exponentially many probabilistic constraints, a
separation procedure that solves a nonlinear minimum cut problem is introduced.
For the case with independent arc capacities, we exploit the supermodularity of
the set function defining the constraints and generate cutting planes based on
the supermodular covering knapsack polytope. For the general correlated case,
we give a reformulation of the constraints that allows us to uncover and utilize
the submodularity of a related function. The computational results indicate
that exploiting the underlying submodularity and supermodularity arising with
the probabilistic constraints provides significant advantages over the
classical approaches.
| 0 | 0 | 1 | 0 | 0 | 0 |
Nanopteron solutions of diatomic Fermi-Pasta-Ulam-Tsingou lattices with small mass-ratio | Consider an infinite chain of masses, each connected to its nearest neighbors
by a (nonlinear) spring. This is a Fermi-Pasta-Ulam-Tsingou lattice. We prove
the existence of traveling waves in the setting where the masses alternate in
size. In particular we address the limit where the mass ratio tends to zero.
The problem is inherently singular and we find that the traveling waves are not
true solitary waves but rather "nanopterons", which is to say, waves that are
asymptotic at spatial infinity to very small amplitude periodic waves.
Moreover, we can only find solutions when the mass ratio lies in a certain open
set. The difficulties in the problem all revolve around understanding Jost
solutions of a nonlocal Schrödinger operator in its semi-classical limit.
| 0 | 0 | 1 | 0 | 0 | 0 |
Steering Orbital Optimization out of Local Minima and Saddle Points Toward Lower Energy | The general procedure underlying Hartree-Fock and Kohn-Sham density
functional theory calculations consists in optimizing orbitals for a
self-consistent solution of the Roothaan-Hall equations in an iterative
process. It is often ignored that multiple self-consistent solutions can exist,
several of which may correspond to minima of the energy functional. In addition
to the difficulty sometimes encountered in converging the calculation to a
self-consistent solution, one must ensure that the correct self-consistent
solution was found, typically the one with the lowest electronic energy.
Convergence to an unwanted solution is in general not trivial to detect and
will deliver incorrect energy and molecular properties, and accordingly a
misleading description of chemical reactivity. Wrong conclusions based on
incorrect self-consistent field convergence are particularly cumbersome in
automated calculations met in high-throughput virtual screening, structure
optimizations, ab initio molecular dynamics, and in real-time explorations of
chemical reactivity, where the vast amount of data can hardly be manually
inspected. Here, we introduce a fast and automated approach to detect and cure
incorrect orbital convergence, which is especially suited for electronic
structure calculations on sequences of molecular structures. Our approach
consists of a randomized perturbation of the converged electron density
(matrix) intended to push orbital convergence to solutions that correspond to
another stationary point (of potentially lower electronic energy) in the
variational parameter space of an electronic wave function approximation.
| 0 | 1 | 0 | 0 | 0 | 0 |
Mapping stellar content to dark matter halos - III. Environmental dependence and conformity of galaxy colours | Recent studies suggest that the quenching properties of galaxies are
correlated over several mega-parsecs. The large-scale "galactic conformity"
phenomenon around central galaxies has been regarded as a potential signature
of "galaxy assembly bias" or "pre-heating", both of which interpret conformity
as a result of direct environmental effects acting on galaxy formation.
Building on the iHOD halo quenching framework developed in Zu & Mandelbaum
(2015, 2016), we discover that our fiducial halo mass quenching model, without
any galaxy assembly bias, can successfully explain the overall environmental
dependence and the conformity of galaxy colours in SDSS, as measured by the
mark correlation functions of galaxy colours and the red galaxy fractions
around isolated primaries, respectively. Our fiducial iHOD halo quenching mock
also correctly predicts the differences in the spatial clustering and
galaxy-galaxy lensing signals between the more vs. less red galaxy subsamples,
split by the red-sequence ridge-line at fixed stellar mass. Meanwhile, models
that tie galaxy colours fully or partially to halo assembly bias have
difficulties in matching all these observables simultaneously. Therefore, we
demonstrate that the observed environmental dependence of galaxy colours can be
naturally explained by the combination of 1) halo quenching and 2) the
variation of halo mass function with environment --- an indirect environmental
effect mediated by two separate physical processes.
| 0 | 1 | 0 | 0 | 0 | 0 |
Blue Supergiant X-ray Binaries in the Nearby Dwarf Galaxy IC 10 | In young starburst galaxies, the X-ray population is expected to be dominated
by the relics of the most massive and short-lived stars, black-hole and
neutron-star high mass X-ray binaries (XRBs). In the closest such galaxy, IC
10, we have made a multi-wavelength census of these objects. Employing a novel
statistical correlation technique, we have matched our list of 110 X-ray point
sources, derived from a decade of Chandra observations, against published
photometric data. We report an 8 sigma correlation between the celestial
coordinates of the two catalogs, with 42 X-ray sources having an optical
counterpart. Applying an optical color-magnitude selection to isolate blue
supergiant (SG) stars in IC 10, we find 16 matches. Both cases show a
statistically significant overabundance versus the expectation value for chance
alignments. The blue objects also exhibit systematically higher fx/fv ratios
than other stars in the same magnitude range. Blue SG-XRBs include a major
class of progenitors of double-degenerate binaries, hence their numbers are an
important factor in modeling the rate of gravitational wave sources. We suggest
that the anomalous features of the IC 10 stellar population are explained if
the age of the IC 10 starburst is close to the time of the peak of interaction
for massive binaries.
| 0 | 1 | 0 | 0 | 0 | 0 |
ConsiDroid: A Concolic-based Tool for Detecting SQL Injection Vulnerability in Android Apps | In this paper, we present a concolic execution technique for detecting SQL
injection vulnerabilities in Android apps, implemented in a new tool called
ConsiDroid. We extend the source code of apps with a mocking technique such that
the execution of the original source code is not affected. The extended source
code can be treated as a Java application and executed by SPF with
concolic execution. We automatically produce a DummyMain class from static
analysis such that the essential functions are called sequentially and the
events leading to vulnerable functions are triggered. We extend SPF with taint
analysis in ConsiDroid. For making taint analysis possible, we introduce a new
technique of symbolic mock classes in order to ease the propagation of tainted
values in the code. An SQL injection vulnerability is detected through
receiving a tainted value by a vulnerable function. Besides, ConsiDroid takes
advantage of static analysis to adjust SPF in order to inspect only suspicious
paths. To illustrate the applicability of ConsiDroid, we inspected 140
randomly selected apps from the F-Droid repository. From these apps, we found
three apps vulnerable to SQL injection. To verify their vulnerability, we
analyzed the apps manually based on ConsiDroid's reports by using Robolectric.
| 1 | 0 | 0 | 0 | 0 | 0 |
Mesh Model (MeMo): A Systematic Approach to Agile System Engineering | Innovation and entrepreneurship have a very special role to play in creating
sustainable development in the world. Engineering design plays a major role in
innovation. These are not new facts. What is new is that knowledge now appears
to grow at an exponential rate, doubling every few months. This creates a need
for newer methods of innovation that leave very little room to fall short of
customer expectations. In terms of reliable design, system design tools and
methodologies have been very helpful and have been in use in most engineering
industries for decades. But traditional system design is rigorous and rigid,
whereas what is needed is an innovation system that is rigorous and flexible
at the same time. We take our inspiration from the biosphere, where some of
the most rugged yet flexible plants are creepers, which grow to form a mesh.
In this thematic paper we shall
explain our approach to system engineering which we call the MeMo (Mesh Model)
that fuses the rigor of system engineering with the flexibility of agile
methods to create a scheme that can give rise to reliable innovation in the
high risk market of today.
| 1 | 0 | 0 | 0 | 0 | 0 |
Alternative derivation of exact law for compressible and isothermal magnetohydrodynamics turbulence | The exact law for fully developed homogeneous compressible
magnetohydrodynamics (CMHD) turbulence is derived. For an isothermal plasma,
without the assumption of isotropy, the exact law is expressed as a function of
the plasma velocity field, the compressible Alfvén velocity and the scalar
density, instead of the Elsässer variables used in previous works. The
theoretical results show four different types of terms that are involved in the
nonlinear cascade of the total energy in the inertial range. Each category is
examined in detail, in particular those that can be written either as source or
flux terms. Finally, the role of the background magnetic field $B_0$ is
highlighted and comparison with the incompressible MHD (IMHD) model is
discussed. This point is particularly important when testing the exact law on
numerical simulations and in situ observations in space plasmas.
| 0 | 1 | 0 | 0 | 0 | 0 |
CMS-HF Calorimeter Upgrade for Run II | CMS-HF Calorimeters have been undergoing a major upgrade for the last couple
of years to alleviate the problems encountered during Run I, especially in the
PMT and the readout systems. In this poster, the problems caused by the old
PMTs installed in the detectors and their solutions will be explained.
Initially, regular PMTs with thicker windows, causing large Cherenkov
radiation, were used. Instead of the light coming through the fibers from the
detector, stray muons passing through the PMT itself produce Cherenkov
radiation in the PMT window, resulting in erroneously large signals. Usually,
large signals are the result of very high-energy particles in the calorimeter
and are tagged as important. As a result, these so-called window events
generate false triggers. Four-anode PMTs with thinner windows were selected to
reduce these window events. Additional channels also help eliminate such
remaining events through algorithms comparing the output of different PMT
channels. During the EYETS 16/17 period in the LHC operations, the final
components of the modifications to the readout system, namely the two-channel
front-end electronics cards, were installed. The complete upgrade of the HF
Calorimeter, including the preparations for Run II, will be discussed in this
poster, along with possible effects on the eventual data taking.
| 0 | 1 | 0 | 0 | 0 | 0 |
Is Flat Fielding Safe for Precision CCD Astronomy? | The ambitious goals of precision cosmology with wide-field optical surveys
such as the Dark Energy Survey (DES) and the Large Synoptic Survey Telescope
(LSST) demand, as their foundation, precision CCD astronomy. This in turn
requires an understanding of previously uncharacterized sources of systematic
error in CCD sensors, many of which manifest themselves as static effective
variations in pixel area. Such variation renders a critical assumption behind
the traditional procedure of flat fielding--that a sensor's pixels comprise a
uniform grid--invalid. In this work, we present a method to infer a curl-free
model of a sensor's underlying pixel grid from flat field images, incorporating
the superposition of all electrostatic sensor effects--both known and
unknown--present in flat field data. We use these pixel grid models to estimate
the overall impact of sensor systematics on photometry, astrometry, and PSF
shape measurements in a representative sensor from the Dark Energy Camera
(DECam) and a prototype LSST sensor. Applying the method to DECam data recovers
known significant sensor effects for which corrections are currently being
developed within DES. For an LSST prototype CCD with pixel-response
non-uniformity (PRNU) of 0.4%, we find the impact of "improper" flat-fielding
on these observables is negligible in nominal 0.7" seeing conditions. These
errors scale linearly with the PRNU, so for future LSST production sensors,
which may have larger PRNU, our method provides a way to assess whether
pixel-level calibration beyond flat fielding will be required.
| 0 | 1 | 0 | 0 | 0 | 0 |
Gas removal in the Ursa Minor galaxy: linking hydrodynamics and chemical evolution models | We present results from a non-cosmological, three-dimensional hydrodynamical
simulation of the gas in the dwarf spheroidal galaxy Ursa Minor. Assuming an
initial baryonic-to-dark-matter ratio derived from the cosmic microwave
background radiation, we evolved the galactic gas distribution over 3 Gyr,
taking into account the effects of the types Ia and II supernovae. For the
first time, we used in our simulation the instantaneous supernovae rates
derived from a chemical evolution model applied to spectroscopic observational
data of Ursa Minor. We show that the amount of gas that is lost in this process
is variable with time and radius, being the highest rates observed during the
initial 600 Myr in our simulation. Our results indicate that types Ia and II
supernovae must be essential drivers of the gas loss in Ursa Minor galaxy (and
probably in other similar dwarf galaxies), but it is ultimately the combination
of galactic winds powered by these supernovae and environmental effects (e.g.,
ram-pressure stripping) that results in the complete removal of the gas
content.
| 0 | 1 | 0 | 0 | 0 | 0 |
Identifiability and Estimation of Structural Vector Autoregressive Models for Subsampled and Mixed Frequency Time Series | Causal inference in multivariate time series is challenging due to the fact
that the sampling rate may not be as fast as the timescale of the causal
interactions. In this context, we can view our observed series as a subsampled
version of the desired series. Furthermore, due to technological and other
limitations, series may be observed at different sampling rates, representing a
mixed frequency setting. To determine instantaneous and lagged effects between
time series at the true causal scale, we take a model-based approach based on
structural vector autoregressive (SVAR) models. In this context, we present a
unifying framework for parameter identifiability and estimation under both
subsampling and mixed frequencies when the noise, or shocks, are non-Gaussian.
Importantly, by studying the SVAR case, we are able to provide identifiability
results and estimation methods for the causal structure of both lagged
and instantaneous effects at the desired time scale. We further derive an exact
EM algorithm for inference in both subsampled and mixed frequency settings. We
validate our approach in simulated scenarios and on two real world data sets.
| 0 | 0 | 0 | 1 | 0 | 0 |
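A toy numpy illustration (not the authors' estimator; the matrix and subsampling factor are assumptions) of why subsampling obscures the causal scale: if a VAR(1) process with lag matrix A is observed only every k-th step, the observed lag-1 dynamics are governed by A^k, which mixes lagged effects.

```python
# Toy illustration of the subsampling setting: a bivariate VAR(1) evolves
# at the true causal timescale, but we only observe every k-th step.
import numpy as np

rng = np.random.default_rng(1)
A = np.array([[0.9, 0.3],
              [0.0, 0.5]])       # true lag-1 causal matrix (x2 -> x1 only)
T, k = 600, 3                    # fine-scale length, subsampling factor

x = np.zeros((T, 2))
for t in range(1, T):
    x[t] = A @ x[t - 1] + rng.normal(scale=0.1, size=2)

observed = x[::k]                # the subsampled series we actually see

# At the observed scale the implied lag matrix is A**k, which blurs the
# one-directional structure of A; recovering A from `observed` is the
# identifiability problem the abstract addresses.
A_k = np.linalg.matrix_power(A, k)
```

Here the zero entry A[1, 0] survives subsampling only because A is triangular; in general A^k can appear denser than A, which is why extra structure (non-Gaussian shocks) is needed for identifiability.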
Towards Practical Differential Privacy for SQL Queries | Differential privacy promises to enable general data analytics while
protecting individual privacy, but existing differential privacy mechanisms do
not support the wide variety of features and databases used in real-world
SQL-based analytics systems.
This paper presents the first practical approach for differential privacy of
SQL queries. Using 8.1 million real-world queries, we conduct an empirical
study to determine the requirements for practical differential privacy, and
discuss limitations of previous approaches in light of these requirements. To
meet these requirements we propose elastic sensitivity, a novel method for
approximating the local sensitivity of queries with general equijoins. We prove
that elastic sensitivity is an upper bound on local sensitivity and can
therefore be used to enforce differential privacy using any local
sensitivity-based mechanism.
We build FLEX, a practical end-to-end system to enforce differential privacy
for SQL queries using elastic sensitivity. We demonstrate that FLEX is
compatible with any existing database, can enforce differential privacy for
real-world SQL queries, and incurs negligible (0.03%) performance overhead.
| 1 | 0 | 0 | 0 | 0 | 0 |
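As a hedged sketch of the generic recipe the paper builds on (the elastic-sensitivity computation itself is paper-specific and not reproduced here): once an upper bound on a query's sensitivity is known, a Laplace-based mechanism releases the query answer with noise scaled to bound/epsilon.

```python
# Generic Laplace mechanism sketch: NOT the FLEX implementation, just the
# standard sensitivity-based recipe that an elastic-sensitivity bound can
# be plugged into.
import numpy as np

def laplace_mechanism(true_answer, sensitivity_bound, epsilon, rng):
    """Return a differentially private answer to a numeric query."""
    scale = sensitivity_bound / epsilon
    return true_answer + rng.laplace(loc=0.0, scale=scale)

# Example: a counting query (sensitivity bound 1) answered under epsilon=0.1.
rng = np.random.default_rng(42)
private_count = laplace_mechanism(1000, 1.0, 0.1, rng)
```

Because elastic sensitivity upper-bounds local sensitivity, substituting it for `sensitivity_bound` preserves the differential privacy guarantee of the mechanism.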
Adapting Engineering Education to Industrie 4.0 Vision | Industrie 4.0 is originally a future vision described in the high-tech
strategy of the German government, conceived upon information and
communication technologies such as Cyber-Physical Systems, Internet of Things,
Physical Internet and Internet of Services to achieve a high degree of
flexibility in production, higher productivity rates through real-time
monitoring and diagnosis, and a lower wastage rate of material in production.
An important part of the preparation for Industrie 4.0 is the
adaptation of higher education to the requirements of this vision, in
particular the engineering education. In this work, we introduce a road map
consisting of three pillars describing the changes/enhancements to be conducted
in the areas of curriculum development, lab concept, and student club
activities. We also report our current application of this road map at the
Turkish-German University, Istanbul.
| 1 | 0 | 0 | 0 | 0 | 0 |
A Survey on QoE-oriented Wireless Resources Scheduling | Future wireless systems are expected to provide a wide range of services to
more and more users. Advanced scheduling strategies thus arise not only to
perform efficient radio resource management, but also to provide fairness among
the users. On the other hand, the users' perceived quality, i.e., Quality of
Experience (QoE), is becoming one of the main drivers within the schedulers
design. In this context, this paper starts by providing an explanation of what
QoE is and an overview of the evolution of wireless scheduling techniques.
Afterwards, a survey on the most recent QoE-based scheduling strategies for
wireless systems is presented, highlighting the application/service of the
different approaches reported in the literature, as well as the parameters that
were taken into account for QoE optimization. Therefore, this paper aims at
helping readers interested in learning the basic concepts of QoE-oriented
wireless resource scheduling, as well as bringing them in touch with the
current research frontier.
| 1 | 0 | 0 | 0 | 0 | 0 |
Wandering domains for diffeomorphisms of the k-torus: a remark on a theorem by Norton and Sullivan | We show that there is no C^{k+1} diffeomorphism of the k-torus which is
semiconjugate to a minimal translation and has a wandering domain all of whose
iterates are Euclidean balls.
| 0 | 0 | 1 | 0 | 0 | 0 |
Orthogonal Statistical Learning | We provide excess risk guarantees for statistical learning in the presence of
an unknown nuisance component. We analyze a two-stage sample splitting
meta-algorithm that takes as input two arbitrary estimation algorithms: one for
the target model and one for the nuisance model. We show that if the population
risk satisfies a condition called Neyman orthogonality, the impact of the first
stage error on the excess risk bound achieved by the meta-algorithm is of
second order. Our general theorem is agnostic to the particular algorithms used
for the target and nuisance and only makes an assumption on their individual
performance. This enables the use of a plethora of existing results from
statistical learning and machine learning literature to give new guarantees for
learning with a nuisance component. Moreover, by focusing on excess risk rather
than parameter estimation, we can give guarantees under weaker assumptions than
in previous works and accommodate the case where the target parameter belongs
to a complex nonparametric class. When the nuisance and target parameters
belong to arbitrary classes, we characterize conditions on the metric entropy
such that oracle rates---rates of the same order as if we knew the nuisance
model---are achieved. We also analyze the rates achieved by specific estimation
algorithms such as variance-penalized empirical risk minimization, neural
network estimation and sparse high-dimensional linear model estimation. We
highlight the applicability of our results via four applications of primary
importance: 1) heterogeneous treatment effect estimation, 2) offline policy
optimization, 3) domain adaptation, and 4) learning with missing data.
| 1 | 0 | 1 | 1 | 0 | 0 |
Disentangling group and link persistence in Dynamic Stochastic Block models | We study the inference of a model of dynamic networks in which both
communities and links keep memory of previous network states. By considering
maximum likelihood inference from single snapshot observations of the network,
we show that link persistence makes the inference of communities harder,
decreasing the detectability threshold, while community persistence tends to
make it easier. We analytically show that communities inferred from a single
network snapshot can share a maximum overlap with the underlying communities of
a specific previous instant in time. This leads to time-lagged inference: the
identification of past communities rather than present ones. Finally we compute
the time lag and propose a corrected algorithm, the Lagged Snapshot Dynamic
(LSD) algorithm, for community detection in dynamic networks. We analytically
and numerically characterize the detectability transitions of this algorithm as
a function of the memory parameters of the model and we make a comparison with
a full dynamic inference.
| 1 | 1 | 0 | 1 | 0 | 0 |
A Reactive and Efficient Walking Pattern Generator for Robust Bipedal Locomotion | Available possibilities to prevent a biped robot from falling down in the
presence of severe disturbances are mainly Center of Pressure (CoP) modulation,
step location and timing adjustment, and angular momentum regulation. In this
paper, we aim at designing a walking pattern generator which employs an optimal
combination of these tools to generate robust gaits. In this approach, first,
the next step location and timing are decided consistent with the commanded
walking velocity and based on the Divergent Component of Motion (DCM)
measurement. This stage which is done by a very small-size Quadratic Program
(QP) uses the Linear Inverted Pendulum Model (LIPM) dynamics to adapt the
switching contact location and time. Then, consistent with the first stage, the
LIPM with flywheel dynamics is used to regenerate the DCM and angular momentum
trajectories at each control cycle. This is done by modulating the CoP and
Centroidal Momentum Pivot (CMP) to realize a desired DCM at the end of the current
step. Simulation results show the merit of this reactive approach in generating
robust and dynamically consistent walking patterns.
| 1 | 0 | 0 | 0 | 0 | 0 |
Frequentist Consistency of Variational Bayes | A key challenge for modern Bayesian statistics is how to perform scalable
inference of posterior distributions. To address this challenge, variational
Bayes (VB) methods have emerged as a popular alternative to the classical
Markov chain Monte Carlo (MCMC) methods. VB methods tend to be faster while
achieving comparable predictive performance. However, there are few theoretical
results around VB. In this paper, we establish frequentist consistency and
asymptotic normality of VB methods. Specifically, we connect VB methods to
point estimates based on variational approximations, called frequentist
variational approximations, and we use the connection to prove a variational
Bernstein-von Mises theorem. The theorem leverages the theoretical
characterizations of frequentist variational approximations to understand
asymptotic properties of VB. In summary, we prove that (1) the VB posterior
converges to the Kullback-Leibler (KL) minimizer of a normal distribution,
centered at the truth and (2) the corresponding variational expectation of the
parameter is consistent and asymptotically normal. As applications of the
theorem, we derive asymptotic properties of VB posteriors in Bayesian mixture
models, Bayesian generalized linear mixed models, and Bayesian stochastic block
models. We conduct a simulation study to illustrate these theoretical results.
| 1 | 0 | 1 | 1 | 0 | 0 |
Quantum variance on quaternion algebras, II | A method for determining quantum variance asymptotics on compact quotients
attached to non-split quaternion algebras is developed in general and applied
to "microlocal lifts" in the non-archimedean setting. The results obtained are
in the spirit of recent work of Sarnak--Zhao.
The arguments involve a careful analytic study of the theta correspondence,
the interplay between additive and multiplicative harmonic analysis on
quaternion algebras, the equidistribution of translates of elementary theta
functions, and the Rallis inner product formula.
| 0 | 0 | 1 | 0 | 0 | 0 |
An intracardiac electrogram model to bridge virtual hearts and implantable cardiac devices | Virtual heart models have been proposed to enhance the safety of implantable
cardiac devices through closed loop validation. To communicate with a virtual
heart, devices have been driven by cardiac signals at specific sites. As a
result, only the action potentials of these sites are sensed. However, the real
device implanted in the heart will sense a complex combination of near and
far-field extracellular potential signals. Therefore many device functions,
such as blanking periods and refractory periods, are designed to handle these
unexpected signals. To represent these signals, we develop an intracardiac
electrogram (IEGM) model as an interface between the virtual heart and the
device. The model can capture not only the local excitation but also far-field
signals and pacing afterpotentials. Moreover, the sensing controller can
specify unipolar or bipolar electrogram (EGM) sensing configurations and
introduce various oversensing and undersensing modes. The simulation results
show that the model is able to reproduce clinically observed sensing problems,
which significantly extends the capabilities of the virtual heart model in the
context of device validation.
| 1 | 1 | 0 | 0 | 0 | 0 |
A note on conditional versus joint unconditional weak convergence in bootstrap consistency results | The consistency of a bootstrap or resampling scheme is classically validated
by weak convergence of conditional laws. However, when working with stochastic
processes in the space of bounded functions and their weak convergence in the
Hoffmann-J{\o}rgensen sense, an obstacle occurs: due to possible
non-measurability, neither laws nor conditional laws are well-defined. Starting
from an equivalent formulation of weak convergence based on the bounded
Lipschitz metric, a classical circumvent is to formulate bootstrap consistency
in terms of the latter distance between what might be called a
\emph{conditional law} of the (non-measurable) bootstrap process and the law of
the limiting process. The main contribution of this note is to provide an
equivalent formulation of bootstrap consistency in the space of bounded
functions which is more intuitive and easy to work with. Essentially, the
equivalent formulation consists of (unconditional) weak convergence of the
original process jointly with two bootstrap replicates. As a by-product, we
provide two equivalent formulations of bootstrap consistency for statistics
taking values in separable metric spaces: the first in terms of (unconditional)
weak convergence of the statistic jointly with its bootstrap replicates, the
second in terms of convergence in probability of the empirical distribution
function of the bootstrap replicates. Finally, the asymptotic validity of
bootstrap-based confidence intervals and tests is briefly revisited, with
particular emphasis on the, in practice unavoidable, Monte Carlo approximation
of conditional quantiles.
| 0 | 0 | 1 | 1 | 0 | 0 |
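The Monte Carlo approximation of conditional quantiles discussed in the abstract above can be illustrated with a minimal percentile-bootstrap sketch; the statistic (the mean), the synthetic sample, and the replicate count are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def bootstrap_ci(sample, stat=np.mean, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap CI: Monte Carlo approximation of the
    conditional quantiles of the bootstrap distribution of `stat`."""
    rng = np.random.default_rng(seed)
    n = len(sample)
    # Empirical distribution of the bootstrap replicates
    reps = np.array([stat(rng.choice(sample, size=n, replace=True))
                     for _ in range(n_boot)])
    lo, hi = np.quantile(reps, [alpha / 2, 1 - alpha / 2])
    return lo, hi

data = np.random.default_rng(1).normal(loc=5.0, scale=2.0, size=200)
lo, hi = bootstrap_ci(data)
```

In practice the quantiles are always approximated from finitely many replicates, which is exactly the "in practice unavoidable" Monte Carlo approximation the note revisits.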
Dynamic scaling analysis of the long-range RKKY Ising spin glass Dy$_{x}$Y$_{1-x}$Ru$_{2}$Si$_{2}$ | Dynamic scaling analyses of linear and nonlinear ac susceptibilities in a
model magnet of the long-range RKKY Ising spin glass (SG)
Dy$_{x}$Y$_{1-x}$Ru$_{2}$Si$_{2}$ were examined. The obtained set of the
critical exponents, $\gamma$ $\sim$ 1, $\beta$ $\sim$ 1, $\delta$ $\sim$ 2, and
$z\nu$ $\sim$ 3.4, indicates the SG phase transition belongs to a different
universality class from either the canonical (Heisenberg) or the short-range
Ising SGs. The analyses also reveal a finite-temperature SG transition with the
same critical exponents under a magnetic field and the phase transition line
$T_{\mbox{g}}(H)$ described by $T_{\mbox{g}}(H)$ $=$
$T_{\mbox{g}}(0)(1-AH^{2/\phi})$ with $\phi$ $\sim$ 2. The crossover exponent
$\phi$ obeys the scaling relation $\phi$ $=$ $\gamma + \beta$ within the margin
of error. These results strongly suggest spontaneous
replica-symmetry breaking (RSB) with a {\it non- or marginal-mean-field
universality class} in the long-range RKKY Ising SG.
| 0 | 1 | 0 | 0 | 0 | 0 |
A Bayesian hierarchical model for related densities using Polya trees | Bayesian hierarchical models are used to share information between related
samples and obtain more accurate estimates of sample-level parameters, common
structure, and variation between samples. When the parameter of interest is the
distribution or density of a continuous variable, a hierarchical model for
distributions is required. A number of such models have been described in the
literature using extensions of the Dirichlet process and related processes,
typically as a distribution on the parameters of a mixing kernel. We propose a
new hierarchical model based on the Polya tree, which allows direct modeling of
densities and enjoys some computational advantages over the Dirichlet process.
The Polya tree also allows more flexible modeling of the variation between
samples, providing more informed shrinkage and permitting posterior inference
on the dispersion function, which quantifies the variation among sample
densities. We also show how the model can be extended to cluster samples in
situations where the observed samples are believed to have been drawn from
several latent populations.
| 0 | 0 | 0 | 1 | 0 | 0 |
Groups of fast homeomorphisms of the interval and the ping-pong argument | We adapt the Ping-Pong Lemma, which historically was used to study free
products of groups, to the setting of the homeomorphism group of the unit
interval. As a consequence, we isolate a large class of generating sets for
subgroups of $\mathrm{Homeo}_+(I)$ for which certain finite dynamical data can
be used to determine the marked isomorphism type of the groups which they
generate. As a corollary, we obtain a criterion for embedding subgroups of
$\mathrm{Homeo}_+(I)$ into Richard Thompson's group $F$. In particular, every
member of our class of generating sets generates a group which embeds into $F$
and in particular is not a free product. An analogous abstract theory is also
developed for groups of permutations of an infinite set.
| 0 | 0 | 1 | 0 | 0 | 0 |
The CodRep Machine Learning on Source Code Competition | CodRep is a machine learning competition on source code data. It is carefully
designed so that anybody can enter the competition, whether professional
researchers, students or independent scholars, without specific knowledge in
machine learning or program analysis. In particular, it aims at being a common
playground on which the machine learning and the software engineering research
communities can interact. The competition started on April 14th, 2018 and
ended on October 14th, 2018. The CodRep data is hosted at
this https URL.
| 1 | 0 | 0 | 0 | 0 | 0 |
Flag representations of mixed volumes and mixed functionals of convex bodies | Mixed volumes $V(K_1,\dots, K_d)$ of convex bodies $K_1,\dots ,K_d$ in
Euclidean space $\mathbb{R}^d$ are of central importance in the Brunn-Minkowski
theory. Representations for mixed volumes are available in special cases, for
example as integrals over the unit sphere with respect to mixed area measures.
More generally, in Hug-Rataj-Weil (2013) a formula for $V(K [n], M[d-n])$,
$n\in \{1,\dots ,d-1\}$, as a double integral over flag manifolds was
established which involved certain flag measures of the convex bodies $K$ and
$M$ (and required a general position of the bodies). In the following, we
discuss the general case $V(K_1[n_1],\dots , K_k[n_k])$, $n_1+\cdots +n_k=d$,
and show a corresponding result involving the flag measures
$\Omega_{n_1}(K_1;\cdot),\dots, \Omega_{n_k}(K_k;\cdot)$. For this purpose, we
first establish a curvature representation of mixed volumes over the normal
bundles of the bodies involved.
We also obtain a corresponding flag representation for the mixed functionals
from translative integral geometry and a local version, for mixed (translative)
curvature measures.
| 0 | 0 | 1 | 0 | 0 | 0 |
Learning Hard Alignments with Variational Inference | There has recently been significant interest in hard attention models for
tasks such as object recognition, visual captioning and speech recognition.
Hard attention can offer benefits over soft attention such as decreased
computational cost, but training hard attention models can be difficult because
of the discrete latent variables they introduce. Previous work used REINFORCE
and Q-learning to approach these issues, but those methods can provide
high-variance gradient estimates and be slow to train. In this paper, we tackle
the problem of learning hard attention for a sequential task using variational
inference methods, specifically the recently introduced VIMCO and NVIL.
Furthermore, we propose a novel baseline that adapts VIMCO to this setting. We
demonstrate our method on a phoneme recognition task in clean and noisy
environments and show that our method outperforms REINFORCE, with the
difference being greater for a more complicated task.
| 1 | 0 | 0 | 1 | 0 | 0 |
A New Sparse and Robust Adaptive Lasso Estimator for the Independent Contamination Model | Many problems in signal processing require finding sparse solutions to
under-determined, or ill-conditioned, linear systems of equations. When dealing
with real-world data, the presence of outliers and impulsive noise must also be
accounted for. In past decades, the vast majority of robust linear regression
estimators have focused on robustness against rowwise contamination. Even
so-called `high breakdown' estimators rely on the assumption that a majority of
rows of the regression matrix is not affected by outliers. Only very recently,
the first cellwise robust regression estimation methods have been developed. In
this paper, we define robust oracle properties, which an estimator must have in
order to perform robust model selection for under-determined, or
ill-conditioned linear regression models that are contaminated by cellwise
outliers in the regression matrix. We propose and analyze a robustly weighted
and adaptive Lasso type regularization term which takes into account cellwise
outliers for model selection. The proposed regularization term is integrated
into the objective function of the MM-estimator, which yields the proposed
MM-Robust Weighted Adaptive Lasso (MM-RWAL), for which we prove that at least
the weak robust oracle properties hold. A performance comparison to existing
robust Lasso estimators is provided using Monte Carlo experiments. Further, the
MM-RWAL is applied to determine the temporal releases of the European Tracer
Experiment (ETEX) at the source location. This ill-conditioned linear inverse
problem contains cellwise and rowwise outliers and is sparse both in the
regression matrix and the parameter vector. The proposed RWAL penalty is not
limited to the MM-estimator but can easily be integrated into the objective
function of other robust estimators.
| 0 | 0 | 1 | 1 | 0 | 0 |
Systematic Quantum Mechanical Region Determination in QM/MM Simulation | Hybrid quantum mechanical-molecular mechanical (QM/MM) simulations are widely
used in enzyme simulation. Over the past several years, more than ten
convergence studies of QM/MM methods have revealed that key energetic and structural
properties approach asymptotic limits with only very large (ca. 500-1000 atom)
QM regions. This slow convergence has been observed to be due in part to
significant charge transfer between the core active site and surrounding
protein environment, which cannot be addressed by improvement of MM force
fields or the embedding method employed within QM/MM. Given this slow
convergence, it becomes essential to identify strategies for the most
atom-economical determination of optimal QM regions and to gain insight into
the crucial interactions captured only in large QM regions. Here, we extend and
develop two methods for quantitative determination of QM regions. First, in the
charge shift analysis (CSA) method, we probe the reorganization of electron
density when core active site residues are removed completely, as determined by
large-QM region QM/MM calculations. Second, we introduce the
highly-parallelizable Fukui shift analysis (FSA), which identifies how
core/substrate frontier states are altered by the presence of an additional QM
residue on smaller initial QM regions. We demonstrate that the FSA and CSA
approaches are complementary and consistent on three test case enzymes:
catechol O-methyltransferase, cytochrome P450cam, and hen egg-white lysozyme. We
also introduce validation strategies and test sensitivities of the two methods
to geometric structure, basis set size, and electronic structure methodology.
Both methods represent promising approaches for the systematic, unbiased
determination of quantum mechanical effects in enzymes and large systems that
necessitate multi-scale modeling.
| 0 | 1 | 0 | 0 | 0 | 0 |
Comparison of electricity market designs for stable decentralized power grids | In this study, we develop a theoretical model of strategic equilibrium
bidding and price-setting behaviour by heterogeneous and boundedly rational
electricity producers and a grid operator in a single electricity market under
uncertain information about production capabilities and electricity demand.
We compare eight different market design variants and several levels of
centralized electricity production that influence the spatial distribution of
producers in the grid, their unit production and curtailment costs, and the
mean and standard deviation of their production capabilities.
Our market design variants differ in three aspects. Producers are either paid
their individual bid price ("pay as bid") or the (higher) market price set by
the grid operator ("uniform pricing"). They are either paid for their bid
quantity ("pay requested") or for their actual supply ("pay supplied") which
may differ due to production uncertainty. Finally, excess production is either
required to be curtailed or may be supplied to the grid.
Overall, we find the combination of uniform pricing, paying for requested
amounts, and required curtailment to perform best or second best in many
respects, and to provide the best compromise between the goals of low economic
costs, low consumer costs, positive profits, low balancing, low workload, and
honest bidding behaviour.
| 0 | 1 | 0 | 0 | 0 | 0 |
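The pricing and quantity variants described in the abstract above can be made concrete with a toy settlement function; the bid structure, prices, and quantities here are hypothetical illustrations, not the paper's model:

```python
def settle(bids, supplied, market_price, pricing="uniform", basis="requested"):
    """Toy settlement under the market-design variants described above.

    bids: list of (price, quantity) producer bids (hypothetical structure).
    supplied: actual delivered quantities, which may differ from the bid
    quantities under production uncertainty.
    """
    payments = []
    for (p, q), s in zip(bids, supplied):
        price = market_price if pricing == "uniform" else p  # "pay as bid"
        qty = q if basis == "requested" else s               # "pay supplied"
        payments.append(price * qty)
    return payments

bids = [(20.0, 10.0), (25.0, 5.0)]
supplied = [9.0, 5.0]
# Uniform pricing pays every producer the (higher) market price.
uniform = settle(bids, supplied, market_price=30.0)
pay_as_bid = settle(bids, supplied, market_price=30.0, pricing="bid")
```

The third design dimension (required curtailment versus feeding excess production into the grid) affects `supplied` upstream of settlement and is not modeled in this sketch.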
Look Mum, no VM Exits! (Almost) | Multi-core CPUs are a standard component in many modern embedded systems.
Their virtualisation extensions enable the isolation of services, and are
gaining popularity for implementing mixed-criticality or otherwise split systems. We
present Jailhouse, a Linux-based, OS-agnostic partitioning hypervisor that uses
novel architectural approaches to combine Linux, a powerful general-purpose
system, with strictly isolated special-purpose components. Our design goals
favour simplicity over features, establish a minimal code base, and minimise
hypervisor activity.
Direct assignment of hardware to guests, together with a deferred
initialisation scheme, offloads any complex hardware handling and bootstrapping
issues from the hypervisor to the general purpose OS. The hypervisor
establishes isolated domains that directly access physical resources without
the need for emulation or paravirtualisation. This retains, with negligible
system overhead, Linux's feature-richness in uncritical parts, while frugal
safety and real-time critical workloads execute in isolated, safe domains.
| 1 | 0 | 0 | 0 | 0 | 0 |
Harnessing Structures in Big Data via Guaranteed Low-Rank Matrix Estimation | Low-rank modeling plays a pivotal role in signal processing and machine
learning, with applications ranging from collaborative filtering, video
surveillance, medical imaging, to dimensionality reduction and adaptive
filtering. Many modern high-dimensional data and interactions thereof can be
modeled as lying approximately in a low-dimensional subspace or manifold,
possibly with additional structures, and their proper exploitation leads to
significant reduction of costs in sensing, computation and storage. In recent
years, there has been a plethora of progress in understanding how to exploit low-rank
structures using computationally efficient procedures in a provable manner,
including both convex and nonconvex approaches. On one side, convex relaxations
such as nuclear norm minimization often lead to statistically optimal
procedures for estimating low-rank matrices, where first-order methods are
developed to address the computational challenges; on the other side, there is
emerging evidence that properly designed nonconvex procedures, such as
projected gradient descent, often provide globally optimal solutions with a
much lower computational cost in many problems. This survey article will
provide a unified overview of these recent advances on low-rank matrix
estimation from incomplete measurements. Attention is paid to rigorous
characterization of the performance of these algorithms, and to problems where
the low-rank matrix has additional structural properties that require new
algorithmic designs and theoretical analysis.
| 0 | 0 | 0 | 1 | 0 | 0 |
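As a minimal illustration of the convex route mentioned above, singular value thresholding (the proximal operator of the nuclear norm) recovers low-rank structure from a noisy matrix; the matrix sizes, noise level, and threshold are illustrative assumptions:

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: the prox of tau * nuclear norm,
    a basic building block of convex low-rank matrix estimation."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_thr = np.maximum(s - tau, 0.0)  # soft-threshold the spectrum
    return U @ np.diag(s_thr) @ Vt

rng = np.random.default_rng(0)
# Rank-2 ground truth plus small dense noise
A = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 30))
noisy = A + 0.01 * rng.standard_normal((30, 30))
denoised = svt(noisy, tau=1.0)
```

The threshold wipes out the small noise singular values while keeping the two dominant directions, so the output is exactly rank 2; nonconvex alternatives such as projected gradient descent instead maintain an explicit low-rank factorization throughout.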
Automatic Segmentation of the Left Ventricle in Cardiac CT Angiography Using Convolutional Neural Network | Accurate delineation of the left ventricle (LV) is an important step in
evaluation of cardiac function. In this paper, we present an automatic method
for segmentation of the LV in cardiac CT angiography (CCTA) scans. Segmentation
is performed in two stages. First, a bounding box around the LV is detected
using a combination of three convolutional neural networks (CNNs).
Subsequently, to obtain the segmentation of the LV, voxel classification is
performed within the defined bounding box using a CNN. The study included CCTA
scans of sixty patients, fifty scans were used to train the CNNs for the LV
localization, five scans were used to train LV segmentation and the remaining
five scans were used for testing the method. Automatic segmentation resulted in
an average Dice coefficient of 0.85 and a mean absolute surface distance of 1.1
mm. The results demonstrate that automatic segmentation of the LV in CCTA scans
using voxel classification with convolutional neural networks is feasible.
| 1 | 1 | 0 | 0 | 0 | 0 |
Maximal Jacobian-based Saliency Map Attack | The Jacobian-based Saliency Map Attack is a family of adversarial attack
methods for fooling classification models, such as deep neural networks for
image classification tasks. By saturating a few pixels in a given image to
their maximum or minimum values, JSMA can cause the model to misclassify the
resulting adversarial image as a specified erroneous target class. We propose
two variants of JSMA, one which removes the requirement to specify a target
class, and another that additionally does not need to specify whether to only
increase or decrease pixel intensities. Our experiments highlight the
competitive speeds and qualities of these variants when applied to datasets of
hand-written digits and natural scenes.
| 0 | 0 | 0 | 1 | 0 | 0 |
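A saliency-map step of this kind can be sketched on a toy linear classifier, where the Jacobian of the logits with respect to the input is simply the weight matrix; the model, weights, and step size are illustrative assumptions, not the paper's networks:

```python
import numpy as np

def jsma_step(x, W, target, eps=1.0):
    """One saliency-driven step of a JSMA-style targeted attack on a
    linear classifier with logits W @ x (toy stand-in for a deep net)."""
    J = W  # d(logits)/dx is constant for a linear model
    other = np.delete(J, target, axis=0).sum(axis=0)
    # Saliency: pixels that raise the target logit while lowering the rest
    saliency = np.where((J[target] > 0) & (other < 0), J[target] * -other, 0.0)
    i = int(np.argmax(saliency))
    x_adv = x.copy()
    x_adv[i] = np.clip(x_adv[i] + eps, 0.0, 1.0)  # saturate the chosen pixel
    return x_adv, i

W = np.array([[ 1.0, -0.5,  0.2],
              [-1.0,  0.8, -0.2]])
x = np.array([0.2, 0.2, 0.2])
x_adv, changed = jsma_step(x, W, target=1)
```

The variants proposed in the abstract relax exactly the two choices hard-coded here: the fixed `target` class, and the fixed direction (increase vs. decrease) of the pixel perturbation.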
Least-Squares Temporal Difference Learning for the Linear Quadratic Regulator | Reinforcement learning (RL) has been successfully used to solve many
continuous control tasks. Despite its impressive results, however, fundamental
questions regarding the sample complexity of RL on continuous problems remain
open. We study the performance of RL in this setting by considering the
behavior of the Least-Squares Temporal Difference (LSTD) estimator on the
classic Linear Quadratic Regulator (LQR) problem from optimal control. We give
the first finite-time analysis of the number of samples needed to estimate the
value function for a fixed static state-feedback policy to within
$\varepsilon$-relative error. In the process of deriving our result, we give a
general characterization for when the minimum eigenvalue of the empirical
covariance matrix formed along the sample path of a fast-mixing stochastic
process concentrates above zero, extending a result by Koltchinskii and
Mendelson in the independent covariates setting. Finally, we provide
experimental evidence indicating that our analysis correctly captures the
qualitative behavior of LSTD on several LQR instances.
| 1 | 0 | 0 | 1 | 0 | 0 |
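The LSTD estimator studied above solves a linear system built from temporal differences along the sample path; a minimal sketch on a two-state Markov reward process (an illustrative toy, not the paper's LQR setting) compares the estimate against the analytic value function:

```python
import numpy as np

def lstd(phis, rewards, gamma=0.9):
    """Least-Squares Temporal Difference: solve A w = b with
    A = sum_t phi_t (phi_t - gamma * phi_{t+1})^T and b = sum_t phi_t r_t."""
    A = np.zeros((phis.shape[1],) * 2)
    b = np.zeros(phis.shape[1])
    for t in range(len(rewards)):
        A += np.outer(phis[t], phis[t] - gamma * phis[t + 1])
        b += phis[t] * rewards[t]
    return np.linalg.solve(A, b)

# Two-state Markov reward process with a known value function
P = np.array([[0.5, 0.5], [0.5, 0.5]])
r = np.array([1.0, 0.0])
gamma = 0.9
rng = np.random.default_rng(0)
states = [0]
for _ in range(5000):
    states.append(rng.choice(2, p=P[states[-1]]))
phis = np.eye(2)[states]                 # one-hot features
rewards = r[np.array(states[:-1])]
w = lstd(phis, rewards, gamma)
v_true = np.linalg.solve(np.eye(2) - gamma * P, r)  # (I - gamma P)^{-1} r
```

With the 0.5/0.5 transitions above, the Bellman equation gives v_true = (5.5, 4.5); the LSTD weights converge to this as the trajectory length grows, which is the kind of finite-sample behavior the paper quantifies.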
VERITAS long term monitoring of Gamma-Ray emission from the BL Lacertae object | BL Lacertae is the prototype of the blazar subclass known as BL Lac type
objects. The BL Lacertae object itself is a low-frequency-peaked BL Lac (LBL). Very
high energy (VHE) gamma-ray emission from this source was discovered in 2005 by
the MAGIC observatory while the source was in a flaring state. Since then, VHE
gamma rays from this source have been detected several times. However, in all of
those detections the source was in a high-activity state. Former studies suggest
several non-thermal zones emitting in gamma rays; a gamma-ray flare should then
be a convolution of their emission. Observing the BL Lacertae object in quiescent
and active states is the key to disentangling these two components.
VERITAS has been monitoring the BL Lacertae object since 2011. The archival data set
includes observations during flaring and quiescent states. This presentation
reports on the preliminary results of the VERITAS observation between January
2013 - December 2015, and simultaneous multiwavelength observations.
| 0 | 1 | 0 | 0 | 0 | 0 |
Spin Susceptibility of the Topological Superconductor UPt3 from Polarized Neutron Diffraction | Experiment and theory indicate that UPt3 is a topological superconductor in
an odd-parity state, based in part from temperature independence of the NMR
Knight shift. However, quasiparticle spin-flip scattering near a surface, where
the Knight shift is measured, might be responsible. We use polarized neutron
scattering to measure the bulk susceptibility with H||c, finding consistency
with the Knight shift but inconsistent with theory for this field orientation.
We infer that neither the spin susceptibility nor the Knight shift is a reliable
indication of odd parity.
| 0 | 1 | 0 | 0 | 0 | 0 |
Nano-jet Related to Bessel Beams and to Super-Resolutions in Micro-sphere Optical Experiments | The appearance of a Nano-jet in the micro-sphere optical experiments is
analyzed by relating this effect to non-diffracting Bessel beams. By inserting
a circular aperture with a radius on the order of a subwavelength in the
EM waist, and sending the transmitted light into a confocal microscope, EM
fluctuations by the different Bessel beams are avoided. On this constant EM
field evanescent waves are superposed. While this effect improves the
optical depth of the imaging process, the object's fine structures are obtained
from the modulation of the EM fields by the evanescent waves. The use of a
combination of the micro-sphere optical system with an interferometer for phase
contrast measurements is described.
| 0 | 1 | 0 | 0 | 0 | 0 |
Factorization systems on (stable) derivators | We define triangulated factorization systems on triangulated categories, and
prove that a suitable subclass thereof (the normal triangulated torsion
theories) corresponds bijectively to $t$-structures on the same category. This
result is then placed in the framework of derivators regarding a triangulated
category as the base of a stable derivator. More generally, we define derivator
factorization systems in the 2-category $\mathrm{PDer}$, describing them as
algebras for a suitable strict 2-monad (this result is of independent
interest), and prove that a similar characterization still holds true: for a
stable derivator $\mathbb D$, a suitable class of derivator factorization
systems (the normal derivator torsion theories) corresponds bijectively with
$t$-structures on the base $\mathbb{D}(\mathbb{1})$ of the derivator. These two
results can be regarded as the triangulated and derivator analogues,
respectively, of the theorem that says that `$t$-structures are normal torsion
theories' in the setting of stable $\infty$-categories, showing how the result
remains true whatever the chosen model for stable homotopy theory is.
| 0 | 0 | 1 | 0 | 0 | 0 |
Dynamic Provable Data Possession Protocols with Public Verifiability and Data Privacy | Cloud storage services have become accessible and used by everyone.
Nevertheless, stored data are dependent on the behavior of the cloud servers,
and losses and damages often occur. One solution is to regularly audit the
cloud servers in order to check the integrity of the stored data. The Dynamic
Provable Data Possession scheme with Public Verifiability and Data Privacy
presented in ACISP'15 is a straightforward design of such solution. However,
this scheme is threatened by several attacks. In this paper, we carefully
recall the definition of this scheme as well as explain how its security is
seriously compromised. Moreover, we propose two new constructions for Dynamic
Provable Data Possession scheme with Public Verifiability and Data Privacy
based on the scheme presented in ACISP'15, one using Index Hash Tables and one
based on Merkle Hash Trees. We show that the two schemes are secure and
privacy-preserving in the random oracle model.
| 1 | 0 | 0 | 0 | 0 | 0 |
Robustness of persistent currents in two-dimensional Dirac systems with disorders | We consider two-dimensional (2D) Dirac quantum ring systems formed by the
infinite mass constraint. When an Aharonov-Bohm magnetic flux is present, e.g.,
through the center of the ring domain, persistent currents, i.e., permanent
currents without dissipation, can arise. In real materials, impurities and
defects are inevitable, raising the issue of robustness of the persistent
currents. Using localized random potential to simulate the disorders, we
investigate how the ensemble averaged current magnitude varies with the
disorder density. For comparison, we study the nonrelativistic quantum
counterpart by analyzing the solutions of the Schrödinger equation under
the same geometrical and disorder settings. We find that, for the Dirac ring
system, as the disorder density is systematically increased, the average
current decreases slowly initially and then plateaus at a finite nonzero value,
indicating remarkable robustness of the persistent currents. The physical
mechanism responsible for the robustness is the emergence of a class of
boundary states - whispering gallery modes. In contrast, in the Schrödinger
ring system, such boundary states cannot form and the currents diminish rapidly
to zero with increasing disorder density. We develop a physical theory
based on a quasi one-dimensional approximation to understand the striking
contrast in the behaviors of the persistent currents in the Dirac and
Schrödinger rings. Our 2D Dirac ring systems with disorders can be
experimentally realized, e.g., on the surface of a topological insulator with
natural or deliberately added impurities from the fabrication process.
| 0 | 1 | 0 | 0 | 0 | 0 |
On the arithmetically Cohen-Macaulay property for sets of points in multiprojective spaces | We study the arithmetically Cohen-Macaulay (ACM) property for finite sets of
points in multiprojective spaces, especially $(\mathbb P^1)^n$. A combinatorial
characterization, the $(\star)$-property, is known in $\mathbb P^1 \times
\mathbb P^1$. We propose a combinatorial property, $(\star_n)$, that directly
generalizes the $(\star)$-property to $(\mathbb P^1)^n$ for larger $n$. We show
that $X$ is ACM if and only if it satisfies the $(\star_n)$-property. The main
tool for several of our results is an extension to the multiprojective setting
of certain liaison methods in projective space.
| 0 | 0 | 1 | 0 | 0 | 0 |
Fitting 3D Shapes from Partial and Noisy Point Clouds with Evolutionary Computing | Point clouds obtained from photogrammetry are noisy and incomplete models of
reality. We propose an evolutionary optimization methodology that is able to
approximate the underlying object geometry on such point clouds. This approach
assumes a priori knowledge on the 3D structure modeled and enables the
identification of a collection of primitive shapes approximating the scene.
Built-in mechanisms that enforce high shape diversity and adaptive population
size make this method suitable to modeling both simple and complex scenes. We
focus here on the case of cylinder approximations and we describe, test, and
compare a set of mutation operators designed for optimal exploration of their
search space. We assess the robustness and limitations of this algorithm
through a series of synthetic examples, and we finally demonstrate its general
applicability on two real-life cases in vegetation and industrial settings.
| 1 | 0 | 0 | 0 | 1 | 0 |
Unique Information and Secret Key Decompositions | The unique information ($UI$) is an information measure that quantifies a
deviation from the Blackwell order. We have recently shown that this quantity
is an upper bound on the one-way secret key rate. In this paper, we prove a
triangle inequality for the $UI$, which implies that the $UI$ is never greater
than one of the best known upper bounds on the two-way secret key rate. We
conjecture that the $UI$ lower bounds the two-way rate and discuss implications
of the conjecture.
| 1 | 0 | 0 | 0 | 0 | 0 |
Decomposition Algorithm for Distributionally Robust Optimization using Wasserstein Metric | We study distributionally robust optimization (DRO) problems where the
ambiguity set is defined using the Wasserstein metric. We show that this class
of DRO problems can be reformulated as semi-infinite programs. We give an
exchange method to solve the reformulated problem for the general nonlinear
model, and a central cutting-surface method for the convex case, assuming that
we have a separation oracle. We used a distributionally robust generalization
of the logistic regression model to test our algorithm. Numerical experiments
on the distributionally robust logistic regression models show that the number
of oracle calls is typically 20-50 to achieve 5-digit precision. The
solution found by the model is generally better in its ability to predict with
a smaller standard error.
| 0 | 0 | 1 | 0 | 0 | 0 |
PD-ML-Lite: Private Distributed Machine Learning from Lightweight Cryptography | Privacy is a major issue in learning from distributed data. Recently, the
cryptographic literature has provided several tools for this task. However,
these tools either reduce the quality/accuracy of the learning
algorithm---e.g., by adding noise---or they incur a high performance penalty
and/or involve trusting external authorities.
We propose a methodology for {\sl private distributed machine learning from
light-weight cryptography} (in short, PD-ML-Lite). We apply our methodology to
two major ML algorithms, namely non-negative matrix factorization (NMF) and
singular value decomposition (SVD). Our resulting protocols are communication
optimal, achieve the same accuracy as their non-private counterparts, and
satisfy a notion of privacy---which we define---that is both intuitive and
measurable. Our approach is to use lightweight cryptographic protocols (secure
sum and normalized secure sum) to build learning algorithms rather than wrap
complex learning algorithms in a heavy-cost MPC framework.
We showcase our algorithms' utility and privacy on several applications: for
NMF we consider topic modeling and recommender systems, and for SVD, principal
component regression, and low rank approximation.
| 1 | 0 | 0 | 1 | 0 | 0 |
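The "secure sum" building block named above can be illustrated with additive secret sharing over a public modulus; this is a generic sketch of the idea, not the paper's exact protocol, and the modulus choice and party bookkeeping are assumptions:

```python
import secrets

Q = 2**61 - 1  # public modulus, assumed larger than any possible total

def share(value, n_parties):
    """Split a nonnegative integer into n additive shares modulo Q."""
    shares = [secrets.randbelow(Q) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % Q)  # last share fixes the sum
    return shares

def secure_sum(private_values):
    """Each party sends one random-looking share to every peer; each
    peer publishes the sum of the shares it received. The partial sums
    reveal only the grand total, not any individual input."""
    n = len(private_values)
    all_shares = [share(v, n) for v in private_values]
    partial = [sum(all_shares[i][j] for i in range(n)) % Q for j in range(n)]
    return sum(partial) % Q

total = secure_sum([10, 20, 12])
```

The point of building ML algorithms such as NMF and SVD on top of primitives like this, rather than inside a general MPC framework, is that only aggregate statistics ever need to be protected.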
ISS Property with Respect to Boundary Disturbances for a Class of Riesz-Spectral Boundary Control Systems | This paper deals with the establishment of Input-to-State Stability (ISS)
properties for infinite dimensional systems with respect to both boundary and
distributed disturbances. First, an ISS estimate is established with respect to
finite dimensional boundary disturbances for a class of Riesz-spectral boundary
control systems satisfying certain eigenvalue constraints. Second, a concept of
weak solutions is introduced in order to relax the regularity assumptions on the
disturbances required to ensure the existence of strong solutions. The proposed
concept of weak solutions, that applies to a large class of boundary control
systems which is not limited to the Riesz-spectral ones, provides a natural
extension of the concept of both strong and mild solutions. Assuming that an
ISS estimate holds true for strong solutions, we show the existence, the
uniqueness, and the ISS property of the weak solutions.
| 1 | 0 | 0 | 0 | 0 | 0 |
English-Japanese Neural Machine Translation with Encoder-Decoder-Reconstructor | Neural machine translation (NMT) has recently become popular in the field of
machine translation. However, NMT suffers from the problem of repeating or
missing words in the translation. To address this problem, Tu et al. (2017)
proposed an encoder-decoder-reconstructor framework for NMT using
back-translation. In this method, they selected the best forward translation
model in the same manner as Bahdanau et al. (2015), and then trained a
bi-directional translation model as fine-tuning. Their experiments show that it
offers significant improvements in BLEU scores on a Chinese-English translation
task. We confirm that our re-implementation also shows the same tendency and
alleviates the problem of repeating and missing words in the translation on an
English-Japanese task as well. In addition, we evaluate the effectiveness of
pre-training by comparing it with a jointly-trained model of forward
translation and back-translation.
| 1 | 0 | 0 | 0 | 0 | 0 |
Consistency Guarantees for Permutation-Based Causal Inference Algorithms | Bayesian networks, or directed acyclic graph (DAG) models, are widely used to
represent complex causal systems. Since the basic task of learning a Bayesian
network from data is NP-hard, a standard approach is greedy search over the
space of DAGs or Markov equivalent DAGs. Since the space of DAGs on p nodes and
the associated space of Markov equivalence classes are both much larger than
the space of permutations, it is desirable to consider permutation-based
searches. We here provide the first consistency guarantees, both uniform and
high-dimensional, of a permutation-based greedy search. Geometrically, this
search corresponds to a simplex-type algorithm on a sub-polytope of the
permutohedron, the DAG associahedron. Every vertex in this polytope is
associated with a DAG, and hence with a collection of permutations that are
consistent with the DAG ordering. A walk is performed on the edges of the
polytope maximizing the sparsity of the associated DAGs. We show based on
simulations that this permutation search is competitive with standard
approaches.
| 0 | 0 | 1 | 1 | 0 | 0 |
TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension | We present TriviaQA, a challenging reading comprehension dataset containing
over 650K question-answer-evidence triples. TriviaQA includes 95K
question-answer pairs authored by trivia enthusiasts and independently gathered
evidence documents, six per question on average, that provide high quality
distant supervision for answering the questions. We show that, in comparison to
other recently introduced large-scale datasets, TriviaQA (1) has relatively
complex, compositional questions, (2) has considerable syntactic and lexical
variability between questions and corresponding answer-evidence sentences, and
(3) requires more cross sentence reasoning to find answers. We also present two
baseline algorithms: a feature-based classifier and a state-of-the-art neural
network that performs well on SQuAD reading comprehension. Neither approach
comes close to human performance (23% and 40% vs. 80%), suggesting that
TriviaQA is a challenging testbed that is worth significant future study. Data
and code available at -- this http URL
| 1 | 0 | 0 | 0 | 0 | 0 |
Optimizing Node Discovery on Networks: Problem Definitions, Fast Algorithms, and Observations | Many people dream of becoming famous; YouTube video makers also wish their
videos to have a large audience, and product retailers always hope to expose
their products to as many customers as possible. Do these seemingly different
phenomena share a common structure? We find that fame, popularity, or exposure,
could be modeled as a node's discoverability on some properly defined network,
and all of the previously mentioned phenomena can be commonly stated as a
target node wanting to be easily discovered by the other nodes in the network. In
this work, we explicitly define a node's discoverability in a network, and
formulate a general node discoverability optimization problem, where the goal
is to create a budgeted set of incoming edges to the target node so as to
optimize the target node's discoverability in the network. Although the
optimization problem is proven to be NP-hard, we find that the defined
discoverability measures have good properties that enable us to use a greedy
algorithm to find provably near-optimal solutions. The computational complexity
of a greedy algorithm is dominated by the time cost of an oracle call, i.e.,
calculating the marginal gain of a node. To scale up the oracle call over large
networks, we propose an estimation-and-refinement approach, that provides a
good trade-off between estimation accuracy and computational efficiency.
Experiments conducted on real-world networks demonstrate that our method is
thousands of times faster than an exact method using dynamic programming,
thereby allowing us to solve the node discoverability optimization problem on
large networks.
| 1 | 0 | 0 | 0 | 0 | 0 |
Some properties of h-MN-convexity and Jensen's type inequalities | In this work, we introduce the class of $h$-${\rm MN}$-convex functions by
generalizing the concept of ${\rm MN}$-convexity and combining it with
$h$-convexity. Namely, let $I,J$ be two subintervals of $(0,\infty)$ such that
$(0,1)\subseteq J$ and $[a,b]\subseteq I$. Consider a non-negative function
$h:(0,\infty)\to(0,\infty)$ and let ${\rm M}:[0,1]\to[a,b]$ $(0<a<b)$ be a
mean function given by ${\rm M}(t)={\rm M}(h(t);a,b)$, where by
${\rm M}(h(t);a,b)$ we mean one of the following functions:
$A_h(a,b):=h(1-t)a+h(t)b$, $G_h(a,b):=a^{h(1-t)}b^{h(t)}$, and
$H_h(a,b):=\frac{ab}{h(t)a+h(1-t)b}=\frac{1}{A_h(\frac{1}{a},\frac{1}{b})}$,
with the property that ${\rm M}(h(0);a,b)=a$ and ${\rm M}(h(1);a,b)=b$.
A function $f : I \to (0,\infty)$ is said to be $h$-${\rm MN}$-convex
(concave) if the inequality \begin{align*} f\left({\rm M}(t;x,y)\right) \le
(\ge)\, {\rm N}\left(h(t); f(x), f(y)\right) \end{align*} holds for all $x,y
\in I$ and $t\in [0,1]$, where ${\rm M}$ and ${\rm N}$ are two mean functions.
In this way, nine classes of $h$-${\rm MN}$-convex functions are established
and some of their analytic properties are explored and investigated.
Characterizations of each type are given, and various Jensen-type inequalities
and their converses are proved.
| 0 | 0 | 1 | 0 | 0 | 0 |
Impact of Traditional Sparse Optimizations on a Migratory Thread Architecture | Achieving high performance for sparse applications is challenging due to
irregular access patterns and weak locality. These properties preclude many
static optimizations and degrade cache performance on traditional systems. To
address these challenges, novel systems such as the Emu architecture have been
proposed. The Emu design uses light-weight migratory threads, narrow memory,
and near-memory processing capabilities to address weak locality and reduce the
total load on the memory system. Because the Emu architecture is fundamentally
different from cache-based hierarchical memory systems, it is crucial to
understand the cost-benefit tradeoffs of standard sparse algorithm
optimizations on Emu hardware. In this work, we explore sparse matrix-vector
multiplication (SpMV) on the Emu architecture. We investigate the effects of
different sparse optimizations such as dense vector data layouts, work
distributions, and matrix reorderings. Our study finds that initially
distributing work evenly across the system is inadequate to maintain load
balancing over time due to the migratory nature of Emu threads. In severe
cases, matrix sparsity patterns produce hot-spots as many migratory threads
converge on a single resource. We demonstrate that known matrix reordering
techniques can improve SpMV performance on the Emu architecture by as much as
70% by encouraging more consistent load balancing. This can be compared with a
performance gain of no more than 16% on a cache-memory based system.
| 1 | 0 | 0 | 0 | 0 | 0 |
Finding Bottlenecks: Predicting Student Attrition with Unsupervised Classifier | With pressure to increase graduation rates and reduce time to degree in
higher education, it is important to identify at-risk students early. Automated
early warning systems are therefore highly desirable. In this paper, we use
unsupervised clustering techniques to predict the graduation status of declared
majors in five departments at California State University Northridge (CSUN),
based on a minimal number of lower division courses in each major. In addition,
we use the detected clusters to identify hidden bottleneck courses.
| 1 | 0 | 0 | 1 | 0 | 0 |
Chemical evolution of 244Pu in the solar vicinity and its implication for the properties of r-process production | Meteoritic abundances of r-process elements are analyzed to deduce the
history of chemical enrichment by r-process from the beginning of disk
formation to the present time in the solar vicinity, by combining the abundance
information from short-lived radioactive nuclei such as 244Pu with that from
stable r-process nuclei such as Eu. These two types of nuclei can be associated
with a single r-process event and with the accumulation of events up to the
formation of the solar system, respectively. With the help of the observed local
star formation history,
we deduce the chemical evolution of 244Pu and obtain three main results: (i)
the last r-process event occurred 130-140 Myr before formation of the solar
system, (ii) the present-day low 244Pu abundance as measured in deep sea
reservoirs results from the low recent star formation rate compared to ~4.5 - 5
Gyr ago, and (iii) there were ~15 r-process events in the solar vicinity from
formation of the Galaxy to the time of solar system formation and ~30 r-process
events to the present time. Then, adopting the reasonable hypothesis that
neutron star mergers are the r-process production site, we find that the ejected
r-process elements are extensively spread out and mixed with interstellar
matter with a mass of ~3.5 million solar masses, which is about 100 times
larger than that for supernova ejecta. In addition, the event frequency of
r-process production is estimated to be one per about 1400 core-collapse
supernovae, which is identical to the frequency of neutron star mergers
estimated from the analysis of stellar abundances.
| 0 | 1 | 0 | 0 | 0 | 0 |
SAVITR: A System for Real-time Location Extraction from Microblogs during Emergencies | We present SAVITR, a system that leverages the information posted on the
Twitter microblogging site to monitor and analyse emergency situations. Given
that only a very small percentage of microblogs are geo-tagged, it is essential
for such a system to extract locations from the text of the microblogs. We
employ natural language processing techniques to infer, in an unsupervised
fashion, the locations mentioned in the microblog text, and display them on a map-based
interface. The system is designed for efficient performance, achieving an
F-score of 0.79, and is approximately two orders of magnitude faster than other
available tools for location extraction.
| 1 | 0 | 0 | 0 | 0 | 0 |
Fractional Driven Damped Oscillator | The resonances associated with a fractional damped oscillator which is driven
by an oscillatory external force are studied. It is shown that such resonances
can be manipulated by tuning up either the coefficient of the fractional
damping or the order of the corresponding fractional derivatives.
| 0 | 1 | 0 | 0 | 0 | 0 |
Bootstrap confidence sets for spectral projectors of sample covariance | Let $X_{1},\ldots,X_{n}$ be an i.i.d. sample in $\mathbb{R}^{p}$ with zero mean
and the covariance matrix $\mathbf{\Sigma}$. The problem of recovering the
projector onto an eigenspace of $\mathbf{\Sigma}$ from these observations
naturally arises in many applications. A recent technique from [Koltchinskii,
Lounici, 2015] helps to study the asymptotic distribution of the distance in
the Frobenius norm $\| \mathbf{P}_r - \widehat{\mathbf{P}}_r \|_{2}$ between
the true projector $\mathbf{P}_r$ on the subspace of the $r$-th eigenvalue and
its empirical counterpart $\widehat{\mathbf{P}}_r$ in terms of the effective
rank of $\mathbf{\Sigma}$. This paper offers a bootstrap procedure for building
sharp confidence sets for the true projector $\mathbf{P}_r$ from the given
data. This procedure does not rely on the asymptotic distribution of $\|
\mathbf{P}_r - \widehat{\mathbf{P}}_r \|_{2}$ and its moments. It can be
applied for a small or moderate sample size $n$ and large dimension $p$. The main
result states the validity of the proposed procedure for finite samples with an
explicit error bound for the error of bootstrap approximation. This bound
involves some new sharp results on Gaussian comparison and Gaussian
anti-concentration in high-dimensional spaces. Numerical results confirm the good
performance of the method in realistic examples.
| 0 | 0 | 1 | 1 | 0 | 0 |
Locally recoverable codes from algebraic curves and surfaces | A locally recoverable code is a code over a finite alphabet such that the
value of any single coordinate of a codeword can be recovered from the values
of a small subset of other coordinates. Building on work of Barg, Tamo, and
Vlăduţ, we present several constructions of locally recoverable codes
from algebraic curves and surfaces.
| 1 | 0 | 1 | 0 | 0 | 0 |
Specification properties on uniform spaces | In the following text we introduce the specification property (stroboscopical
property) for dynamical systems on uniform spaces. We focus on two classes of
dynamical systems: generalized shifts and dynamical systems with Alexandroff
compactification of a discrete space as phase space. We prove that for a
discrete finite topological space $X$ with at least two elements, a nonempty
set $\Gamma$ and a self--map $\varphi:\Gamma\to\Gamma$ the generalized shift
dynamical system $(X^\Gamma,\sigma_\varphi)$: \begin{itemize} \item has
(almost) weak specification property if and only if $\varphi:\Gamma\to\Gamma$
does not have any periodic point,
\item has (uniform) stroboscopical property if and only if
$\varphi:\Gamma\to\Gamma$
is one-to-one. \end{itemize}
| 0 | 0 | 1 | 0 | 0 | 0 |
Tidal tails around the outer halo globular clusters Eridanus and Palomar 15 | We report the discovery of tidal tails around the two outer halo globular
clusters, Eridanus and Palomar 15, based on $gi$-band images obtained with
DECam at the CTIO 4-m Blanco Telescope. The tidal tails are among the most
remote stellar streams presently known in the Milky Way halo. Cluster members
have been determined from the color-magnitude diagrams and used to establish
the radial density profiles, which show, in both cases, a strong departure in
the outer regions from the best-fit King profile. Spatial density maps reveal
tidal tails stretching out on opposite sides of both clusters, extending over a
length of $\sim$760 pc for Eridanus and $\sim$1160 pc for Palomar 15. The great
circle projected from the Palomar 15 tidal tails encompasses the Galactic
Center, while that for Eridanus passes close to four dwarf satellite galaxies,
one of which (Sculptor) is at a comparable distance to that of Eridanus.
| 0 | 1 | 0 | 0 | 0 | 0 |
Learning by Playing - Solving Sparse Reward Tasks from Scratch | We propose Scheduled Auxiliary Control (SAC-X), a new learning paradigm in
the context of Reinforcement Learning (RL). SAC-X enables learning of complex
behaviors - from scratch - in the presence of multiple sparse reward signals.
To this end, the agent is equipped with a set of general auxiliary tasks that
it attempts to learn simultaneously via off-policy RL. The key idea behind our
method is that active (learned) scheduling and execution of auxiliary policies
allows the agent to efficiently explore its environment - enabling it to excel
at sparse reward RL. Our experiments in several challenging robotic
manipulation settings demonstrate the power of our approach.
| 1 | 0 | 0 | 1 | 0 | 0 |
An Infinite Hidden Markov Model With Similarity-Biased Transitions | We describe a generalization of the Hierarchical Dirichlet Process Hidden
Markov Model (HDP-HMM) which is able to encode prior information that state
transitions are more likely between "nearby" states. This is accomplished by
defining a similarity function on the state space and scaling transition
probabilities by pair-wise similarities, thereby inducing correlations among
the transition distributions. We present an augmented data representation of
the model as a Markov Jump Process in which: (1) some jump attempts fail, and
(2) the probability of success is proportional to the similarity between the
source and destination states. This augmentation restores conditional conjugacy
and admits a simple Gibbs sampler. We evaluate the model and inference method
on a speaker diarization task and a "harmonic parsing" task using four-part
chorale data, as well as on several synthetic datasets, achieving favorable
comparisons to existing models.
| 1 | 0 | 0 | 1 | 0 | 0 |
Methods for Estimation of Convex Sets | In the framework of shape-constrained estimation, we review methods and work
done in convex set estimation. These methods mostly build on stochastic and
convex geometry, empirical process theory, functional analysis, linear
programming, extreme value theory, etc. The statistical problems that we review
include density support estimation, estimation of the level sets of densities
or depth functions, nonparametric regression, etc. We focus on the estimation
of convex sets under the Nikodym and Hausdorff metrics, which require different
techniques and, quite surprisingly, lead to very different results, in
particular in density support estimation. Finally, we discuss computational
issues in high dimensions.
| 0 | 0 | 1 | 1 | 0 | 0 |
Dynamics of Charged Bulk Viscous Collapsing Cylindrical Source With Heat Flux | In this paper, we have explored the effects of dissipation on the dynamics of
a charged bulk viscous collapsing cylindrical source which allows the outflow
of heat flux in the form of radiation. The Misner-Sharp formalism has been
implemented to derive the dynamical equations in terms of proper time and
radial derivatives. We have investigated the effects of charge and bulk
viscosity on the dynamics of the collapsing cylinder. To determine the effects
of radial heat flux, we have formulated the heat transport equations in the
context of the M$\ddot{u}$ller-Israel-Stewart theory by assuming that the
thermodynamic viscous/heat coupling coefficients can be neglected within some
approximations. In our discussion, we have introduced the viscosity through
the standard (non-causal) thermodynamics approach. The dynamical equations
have been coupled with the heat transport equation, and the consequences of
the resulting coupled equation have been analyzed in detail.
| 0 | 1 | 0 | 0 | 0 | 0 |