title (string, lengths 7-239) | abstract (string, lengths 7-2.76k) | cs (int64, 0-1) | phy (int64, 0-1) | math (int64, 0-1) | stat (int64, 0-1) | quantitative biology (int64, 0-1) | quantitative finance (int64, 0-1)
---|---|---|---|---|---|---|---
The Emission Structure of Formaldehyde MegaMasers | The formaldehyde MegaMaser emission has been mapped for the three host
galaxies IC\,860, IRAS\,15107$+$0724, and Arp\,220. Elongated emission
components are found at the nuclear centres of all three galaxies, with extents
ranging from 30 to 100 pc. These components are superposed on the peaks of
the nuclear continuum. Additional isolated emission components are found
superposed in the outskirts of the radio continuum structure. The brightness
temperatures of the detected features range from 0.6 to 13.4 $\times 10^{4}$
K, which confirms their masering nature. The masering scenario is interpreted
as amplification of the radio continuum by foreground molecular gas that is
pumped by far-infrared radiation fields in these starburst environments of the
host galaxies.
| 0 | 1 | 0 | 0 | 0 | 0 |
Variational approach for learning Markov processes from time series data | Inference, prediction and control of complex dynamical systems from time
series is important in many areas, including financial markets, power grid
management, climate and weather modeling, or molecular dynamics. The analysis
of such highly nonlinear dynamical systems is facilitated by the fact that we
can often find a (generally nonlinear) transformation of the system coordinates
to features in which the dynamics can be excellently approximated by a linear
Markovian model. Moreover, the many system variables often change
collectively on large time- and length-scales, facilitating a low-dimensional
analysis in feature space. In this paper, we introduce a variational approach
for Markov processes (VAMP) that allows us to find optimal feature mappings and
optimal Markovian models of the dynamics from given time series data. The key
insight is that the best linear model can be obtained from the top singular
components of the Koopman operator. This leads to the definition of a family of
score functions called VAMP-r which can be calculated from data, and can be
employed to optimize a Markovian model. In addition, based on the relationship
between the variational scores and approximation errors of Koopman operators,
we propose a new VAMP-E score, which can be applied to cross-validation for
hyper-parameter optimization and model selection in VAMP. VAMP is valid for
both reversible and nonreversible processes and for stationary and
non-stationary processes or realizations.
| 0 | 0 | 0 | 1 | 0 | 0 |
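The VAMP-r scores above reduce, for r = 2, to the sum of squared singular values of a whitened time-lagged correlation matrix. Below is a minimal numpy sketch of a VAMP-2 estimator under that reading; the function name, the regularization `eps`, and the centering convention are illustrative assumptions, not the paper's reference implementation.

```python
import numpy as np

def vamp2_score(X, lag, eps=1e-10):
    """Estimate a VAMP-2 score from a feature time series X (T x d).

    Sketch: form instantaneous and time-lagged covariances, whiten
    them, and sum the squared singular values of the whitened
    time-lagged correlation matrix (the top singular components of
    the estimated Koopman operator).
    """
    A, B = X[:-lag], X[lag:]
    A = A - A.mean(axis=0)       # centering convention is an assumption
    B = B - B.mean(axis=0)
    n = len(A)
    C00 = A.T @ A / n
    C11 = B.T @ B / n
    C01 = A.T @ B / n

    def inv_sqrt(C):
        w, V = np.linalg.eigh(C)
        w = np.maximum(w, eps)   # regularize small eigenvalues
        return V @ np.diag(w ** -0.5) @ V.T

    K = inv_sqrt(C00) @ C01 @ inv_sqrt(C11)
    s = np.linalg.svd(K, compute_uv=False)
    return float(np.sum(s ** 2))
```

Maximizing this score over candidate feature mappings is then what selects the optimal Markovian model in the VAMP sense.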
A unifying framework for the modelling and analysis of STR DNA samples arising in forensic casework | This paper presents a new framework for analysing forensic DNA samples using
probabilistic genotyping. Specifically it presents a mathematical framework for
specifying and combining the steps in producing forensic casework
electropherograms of short tandem repeat loci from DNA samples. It is
applicable to both high and low template DNA samples, that is, samples
containing either high or low amounts of DNA. A specific model is developed within
the framework, by way of particular modelling assumptions and approximations,
and its interpretive power is demonstrated on examples using simulated data and data
from a publicly available dataset. The framework relies heavily on the use of
univariate and multivariate probability generating functions. It is shown that
these provide a succinct and elegant mathematical scaffolding to model the key
steps in the process. A significant development in this paper is that of new
numerical methods for accurately and efficiently evaluating the probability
distribution of amplicons arising from the polymerase chain reaction process,
which is modelled as a discrete multi-type branching process. Source code in
the scripting languages Python, R and Julia is provided for illustration of
these methods. These new developments will be of general interest to persons
working beyond the province of forensic DNA interpretation on which this paper
focuses.
| 0 | 0 | 0 | 1 | 1 | 0 |
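The abstract models PCR amplification as a discrete multi-type branching process evaluated through probability generating functions. The sketch below is a deliberately simplified single-type Monte Carlo stand-in: each template molecule is copied with a fixed per-cycle efficiency, and the empirical mean is checked against the PGF mean n0*(1+p)^n. All parameter values are illustrative assumptions, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

def pcr_amplicon_counts(n0, cycles, efficiency, n_sim=100_000):
    """Monte Carlo sketch of PCR as a branching process.

    Each molecule is duplicated with probability `efficiency` per
    cycle (a single-type simplification of the paper's multi-type
    process; the efficiency value below is illustrative).
    """
    counts = np.full(n_sim, n0, dtype=np.int64)
    for _ in range(cycles):
        counts += rng.binomial(counts, efficiency)  # successful copies
    return counts

counts = pcr_amplicon_counts(n0=5, cycles=28, efficiency=0.85)
print(counts.mean(), 5 * (1 + 0.85) ** 28)  # empirical vs. PGF mean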
A new class of ferromagnetic semiconductors with high Curie temperatures | Ferromagnetic semiconductors (FMSs), which have the properties and
functionalities of both semiconductors and ferromagnets, provide fascinating
opportunities for basic research in condensed matter physics and device
applications. Over the past two decades, however, intensive studies on various
FMS materials, inspired by the influential mean-field Zener (MFZ) model, have
failed to realise reliable FMSs that have a high Curie temperature (Tc > 300
K), good compatibility with semiconductor electronics, and characteristics
superior to those of their non-magnetic host semiconductors. Here, we
demonstrate a new n-type Fe-doped narrow-gap III-V FMS, (In,Fe)Sb, in which
ferromagnetic order is induced by electron carriers, and its Tc is unexpectedly
high, reaching ~335 K at a modest Fe concentration of 16%. Furthermore, we show
that by utilizing the large anomalous Hall effect of (In,Fe)Sb at room
temperature, it is possible to obtain a Hall sensor with a very high
sensitivity that surpasses that of the best commercially available InSb Hall
sensor devices. Our results reveal a new design rule of FMSs that is not
expected from the conventional MFZ model. (This work was presented at the JSAP
Spring meeting, presentation No. E15a-501-2:
this https URL)
| 0 | 1 | 0 | 0 | 0 | 0 |
On a Distributed Approach for Density-based Clustering | Efficient extraction of useful knowledge from large volumes of data is still a
challenge, mainly when the data are distributed, heterogeneous and of varying
quality depending on the corresponding local infrastructure. To reduce the
overhead cost, most of the existing distributed clustering approaches generate
global models by aggregating local results obtained on each individual node.
The complexity and quality of solutions depend highly on the quality of the
aggregation. In this respect, we propose an approach for distributed density-based
clustering that both reduces the communication overheads due to the data
exchange and improves the quality of the global models by considering the
shapes of local clusters. Preliminary results show that this algorithm is
very promising.
| 1 | 0 | 0 | 0 | 0 | 0 |
Accretion of Planetary Material onto Host Stars | Accretion of planetary material onto host stars may occur throughout a star's
life. Especially prone to accretion, extrasolar planets in short-period orbits,
while relatively rare, constitute a significant fraction of the known
population, and these planets are subject to dynamical and atmospheric
influences that can drive significant mass loss. Theoretical models frame
expectations regarding the rates and extent of this planetary accretion. For
instance, tidal interactions between planets and stars may drive complete
orbital decay during the main sequence. Many planets that survive their stars'
main sequence lifetime will still be engulfed when the host stars become red
giant stars. There is some observational evidence supporting these predictions,
such as a dearth of close-in planets around fast stellar rotators, which is
consistent with tidal spin-up and planet accretion. There remains no clear
chemical evidence for pollution of the atmospheres of main sequence or red
giant stars by planetary materials, but a wealth of evidence points to active
accretion by white dwarfs. In this article, we review the current understanding
of accretion of planetary material, from the pre- to the post-main sequence and
beyond. The review begins with the astrophysical framework for that process and
then considers accretion during various phases of a host star's life, during
which the details of accretion vary, and the observational evidence for
accretion during these phases.
| 0 | 1 | 0 | 0 | 0 | 0 |
High-Fidelity, Single-Shot, Quantum-Logic-Assisted Readout in a Mixed-Species Ion Chain | We use a co-trapped ion ($^{88}\mathrm{Sr}^{+}$) to sympathetically cool and
measure the quantum state populations of a memory-qubit ion of a different
atomic species ($^{40}\mathrm{Ca}^{+}$) in a cryogenic, surface-electrode ion
trap. Due in part to the low motional heating rate demonstrated here, the state
populations of the memory ion can be transferred to the auxiliary ion by using
the shared motion as a quantum state bus and measured with an average accuracy
of 96(1)%. This scheme can be used in quantum information processors to reduce
photon-scattering-induced error in unmeasured memory qubits.
| 0 | 1 | 0 | 0 | 0 | 0 |
Investigation of faint galactic carbon stars from the first Byurakan spectral survey. III. Infrared characteristics | Infra-Red (IR) astronomical databases, namely IRAS, 2MASS, WISE, and Spitzer,
are used to analyze photometric data of 126 carbon stars whose spectra are
visible in the First Byurakan Survey (FBS) low-resolution spectral plates. Among
these, six new objects, recently confirmed on the digitized FBS plates, are
included. For three of them, moderate-resolution CCD optical spectra are also
presented. In this work several IR color-color diagrams are studied. Early- and
late-type C stars are separated in the JHK Near-Infra-Red (NIR) color-color
plots, as well as in the WISE W3-W4 versus W1-W2 diagram. Late N-type
Asymptotic Giant Branch (AGB) stars are redder in W1-W2, while early types (CH
and R giants) are redder in W3-W4, as expected. Objects with W2-W3 > 1.0 mag
show double-peaked spectral energy distributions, indicating the existence of
circumstellar envelopes around them. Twenty-six N-type stars have IRAS Point
Source Catalog (PSC) associations. For FBS 1812+455, IRAS Low-Resolution Spectra
in the wavelength range 7.7 - 22.6 micron and Spitzer Space Telescope spectra in
the range 5 - 38 micron are presented, clearly showing absorption features of
the C2H2 (acetylene) molecule at 7.5 and 13.7 micron, and the SiC (silicon
carbide) emission at 11.3 micron. The mass-loss rates for eight Mira-type
variables are derived from the K-[12] color and from the pulsation periods. The
reddest object among the targets is the N-type C star FBS 2213+421, which
belongs to the group of the cold post-AGB R Coronae Borealis (R CrB) variables.
| 0 | 1 | 0 | 0 | 0 | 0 |
Quantifying Interpretability and Trust in Machine Learning Systems | Decisions by Machine Learning (ML) models have become ubiquitous. Trusting
these decisions requires understanding how algorithms arrive at them. Hence,
interpretability methods for ML are an active focus of research. A central
problem in this context is that both the quality of interpretability methods as
well as trust in ML predictions are difficult to measure. Yet evaluations,
comparisons and improvements of trust and interpretability require quantifiable
measures. Here we propose a quantitative measure for the quality of
interpretability methods. Based on this, we derive a quantitative measure of
trust in ML decisions. Building on previous work, we propose to measure
intuitive understanding of algorithmic decisions using the information transfer
rate at which humans replicate ML model predictions. We provide empirical
evidence from crowdsourcing experiments that the proposed metric robustly
differentiates interpretability methods. The proposed metric also demonstrates
the value of interpretability for ML-assisted human decision making: in our
experiments, providing explanations more than doubled productivity in annotation
tasks. However, unbiased human judgement is critical for doctors, judges,
policy makers and others. Here we derive a trust metric that identifies when human
decisions are overly biased towards ML predictions. Our results complement
existing qualitative work on trust and interpretability with quantifiable
measures that can serve as objectives for further improving methods in this
field of research.
| 1 | 0 | 0 | 1 | 0 | 0 |
The Intertropical Convergence Zone | This activity has been developed as a resource for the "EU Space Awareness"
educational programme. As part of the suite "Our Fragile Planet", together with
the "Climate Box", it addresses aspects of weather phenomena, the Earth's
climate and climate change, as well as Earth observation efforts like the
European "Copernicus" programme. This resource consists of three parts that
illustrate the power of the Sun driving a global air circulation system that is
also responsible for tropical and subtropical climate zones. Through
experiments, students learn how heated air rises above cool air and how a
continuous heat source produces air convection streams that can even drive a
propeller. Students then apply what they have learnt to complete a worksheet
that presents the big picture of the global air circulation system of the
equator region by transferring the knowledge from the previous activities in to
a larger scale.
| 0 | 1 | 0 | 0 | 0 | 0 |
Nonconvex Sparse Logistic Regression with Weakly Convex Regularization | In this work we propose to fit a sparse logistic regression model by a weakly
convex regularized nonconvex optimization problem. The idea is based on the
finding that a weakly convex function, as an approximation of the $\ell_0$
pseudo-norm, is able to induce sparsity better than the commonly used $\ell_1$
norm. For a class of weakly convex sparsity inducing functions, we prove the
nonconvexity of the corresponding sparse logistic regression problem, and study
its local optimality conditions and the choice of the regularization parameter
to exclude trivial solutions. Despite the nonconvexity, a method based on
proximal gradient descent is used to solve the general weakly convex sparse
logistic regression, and its convergence behavior is studied theoretically.
Then the general framework is applied to a specific weakly convex function, and
a necessary and sufficient local optimality condition is provided. The solution
method is instantiated in this case as an iterative firm-shrinkage algorithm,
and its effectiveness is demonstrated in numerical experiments on both randomly
generated and real datasets.
| 1 | 0 | 0 | 1 | 0 | 0 |
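The abstract instantiates the solution method as an iterative firm-shrinkage algorithm. Below is a minimal sketch of a firm-thresholding operator that such an iteration would apply after each gradient step, in the style of Gao and Bruce's firm shrinkage; the parameter names `lam` and `mu` are illustrative assumptions, not the paper's notation.

```python
import numpy as np

def firm_shrink(x, lam, mu):
    """Firm-thresholding operator, a sketch of the proximal step in an
    iterative firm-shrinkage algorithm.

    Assumes lam < mu: coefficients with magnitude below lam are
    zeroed, those above mu pass through unchanged, and the middle
    range is shrunk linearly between the two regimes.
    """
    a = np.abs(x)
    return np.where(a <= lam, 0.0,
           np.where(a >= mu, x,
                    np.sign(x) * mu * (a - lam) / (mu - lam)))
```

Unlike soft thresholding, this operator leaves large coefficients unbiased, which is the property that lets weakly convex penalties induce sparsity more aggressively than the $\ell_1$ norm.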
Inapproximability of the independent set polynomial in the complex plane | We study the complexity of approximating the independent set polynomial
$Z_G(\lambda)$ of a graph $G$ with maximum degree $\Delta$ when the activity
$\lambda$ is a complex number.
This problem is already well understood when $\lambda$ is real using
connections to the $\Delta$-regular tree $T$. The key concept in that case is
the "occupation ratio" of the tree $T$. This ratio is the contribution to
$Z_T(\lambda)$ from independent sets containing the root of the tree, divided
by $Z_T(\lambda)$ itself. If $\lambda$ is such that the occupation ratio
converges to a limit, as the height of $T$ grows, then there is an FPTAS for
approximating $Z_G(\lambda)$ on a graph $G$ with maximum degree $\Delta$.
Otherwise, the approximation problem is NP-hard.
Unsurprisingly, the case where $\lambda$ is complex is more challenging.
Peters and Regts identified the complex values of $\lambda$ for which the
occupation ratio of the $\Delta$-regular tree converges. These values carve a
cardioid-shaped region $\Lambda_\Delta$ in the complex plane. Motivated by the
picture in the real case, they asked whether $\Lambda_\Delta$ marks the true
approximability threshold for general complex values $\lambda$.
Our main result shows that for every $\lambda$ outside of $\Lambda_\Delta$,
the problem of approximating $Z_G(\lambda)$ on graphs $G$ with maximum degree
at most $\Delta$ is indeed NP-hard. In fact, when $\lambda$ is outside of
$\Lambda_\Delta$ and is not a positive real number, we give the stronger result
that approximating $Z_G(\lambda)$ is actually #P-hard. If $\lambda$ is a
negative real number outside of $\Lambda_\Delta$, we show that it is #P-hard to
even decide whether $Z_G(\lambda)>0$, resolving in the affirmative a conjecture
of Harvey, Srivastava and Vondrak.
Our proof techniques are based around tools from complex analysis,
specifically the study of the iteration of multivariate rational maps.
| 1 | 0 | 0 | 0 | 0 | 0 |
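The occupation ratio discussed above can be probed numerically by iterating the standard hard-core recursion on the $\Delta$-regular tree. The sketch below assumes the textbook recursion R_{h+1} = lam / (1 + R_h)^(Delta-1) for the ratio of root-occupied to root-unoccupied partition functions (a standard identity, not taken from this paper) and tracks whether the occupation ratio R/(1+R) settles to a limit at a given complex activity.

```python
def occupation_ratio(lam, Delta, height=200):
    """Iterate the hard-core tree recursion at activity lam (possibly
    complex) and return the trace of occupation ratios with tree height.

    Sketch assuming R_{h+1} = lam / (1 + R_h)^(Delta - 1), where R is
    the ratio of the root-occupied to root-unoccupied contributions;
    the occupation ratio itself is R / (1 + R).
    """
    R = complex(lam)
    trace = []
    for _ in range(height):
        R = lam / (1 + R) ** (Delta - 1)
        trace.append(R / (1 + R))
    return trace

# For activities inside the cardioid-shaped region the iterates settle:
print(occupation_ratio(0.1 + 0.05j, Delta=4)[-3:])
```

Activities for which this iteration fails to converge are exactly the ones the paper shows lead to NP-hardness (and, off the positive real axis, #P-hardness) of approximation.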
A giant with feet of clay: on the validity of the data that feed machine learning in medicine | This paper considers the use of Machine Learning (ML) in medicine by focusing
on the main problem that this computational approach has been aimed at solving
or at least minimizing: uncertainty. To this aim, we point out how uncertainty
is so ingrained in medicine that it biases also the representation of clinical
phenomena, that is the very input of ML models, thus undermining the clinical
significance of their output. Recognizing this can motivate both medical
doctors, in taking more responsibility in the development and use of these
decision aids, and the researchers, in pursuing different ways to assess the
value of these systems. In so doing, both designers and users could take this
intrinsic characteristic of medicine more seriously and consider alternative
approaches that do not "sweep uncertainty under the rug" within an objectivist
fiction that everyone can come to believe is true.
| 1 | 0 | 0 | 1 | 0 | 0 |
Constraining accretion signatures of exoplanets in the TW Hya transitional disk | We present a near-infrared direct imaging search for accretion signatures of
possible protoplanets around the young stellar object (YSO) TW Hya, a
multi-ring disk exhibiting evidence of planet formation. The Pa$\beta$ line
(1.282 $\mu$m) is an indication of accretion onto a protoplanet, and its
intensity is much higher than that of blackbody radiation from the protoplanet.
We focused on the Pa$\beta$ line and performed Keck/OSIRIS spectroscopic
observations. Although spectral differential imaging (SDI) reduction detected
no accretion signatures, the results of the present study allowed us to set
5$\sigma$ detection limits for Pa$\beta$ emission of $5.8\times10^{-18}$ and
$1.5\times10^{-18}$ erg/s/cm$^2$ at 0\farcs4 and 1\farcs6, respectively. We
considered the mass of potential planets using theoretical simulations of
circumplanetary disks and hydrogen emission. The resulting masses were $1.45\pm
0.04$ M$_{\rm J}$ and $2.29 ^{+0.03}_{-0.04}$ M$_{\rm J}$ at 25 and 95 AU,
respectively, which agree with the detection limits obtained from previous
broadband imaging. The detection limits should allow the identification of
protoplanets as small as $\sim$1 M$_{\rm J}$, which may assist in direct
imaging searches around faint YSOs for which extreme adaptive optics
instruments are unavailable.
| 0 | 1 | 0 | 0 | 0 | 0 |
Obstructions to planarity of contact 3-manifolds | We prove that if a contact 3-manifold admits an open book decomposition of
genus 0, a certain intersection pattern cannot appear in the homology of any of
its symplectic fillings, and moreover, fillings cannot contain certain
symplectic surfaces. Applying these obstructions to canonical contact
structures on links of normal surface singularities, we show that links of
isolated singularities of surfaces in complex 3-space are planar only in
the case of $A_n$-singularities, and in general characterize completely planar
links of normal surface singularities (in terms of their resolution graphs). We
also establish non-planarity of tight contact structures on certain small
Seifert fibered L-spaces and of contact structures compatible with open books
given by a boundary multi-twist on a page of positive genus. Additionally, we
prove that every finitely presented group is the fundamental group of a
Lefschetz fibration with planar fibers.
| 0 | 0 | 1 | 0 | 0 | 0 |
Bounds on harmonic radius and limits of manifolds with bounded Bakry-Émery Ricci curvature | Under the usual condition that the volume of a geodesic ball is close to the
Euclidean one or the injectivity radius is bounded from below, we prove a lower
bound on the $C^{\alpha} W^{1, q}$ harmonic radius for manifolds with bounded
Bakry-Émery Ricci curvature when the gradient of the potential is bounded.
Under these conditions, the regularity that can be imposed on the metrics under
harmonic coordinates is only $C^\alpha W^{1,q}$, where $q>2n$ and $n$ is the
dimension of the manifolds. This is almost 1 order lower than that in the
classical $C^{1,\alpha} W^{2, p}$ harmonic coordinates under bounded Ricci
curvature condition [And]. The loss of regularity induces some difference in
the method of proof, which can also be used to address the detail of $W^{2, p}$
convergence in the classical case.
Based on this lower bound and the techniques in [ChNa2] and [WZ], we extend
Cheeger-Naber's Codimension 4 Theorem in [ChNa2] to the case where the
manifolds have bounded Bakry-Émery Ricci curvature when the gradient of the
potential is bounded. This result covers Ricci solitons when the gradient of
the potential is bounded.
During the proof, we will use a Green's function argument and adopt a linear
algebra argument in [Bam]. A new ingredient is to show that the diagonal
entries of the matrices in the Transformation Theorem are bounded away from 0.
Together these seem to simplify the proof of the Codimension 4 Theorem, even in
the case where Ricci curvature is bounded.
| 0 | 0 | 1 | 0 | 0 | 0 |
Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam | Uncertainty computation in deep learning is essential to design robust and
reliable systems. Variational inference (VI) is a promising approach for such
computation, but requires more effort to implement and execute compared to
maximum-likelihood methods. In this paper, we propose new natural-gradient
algorithms to reduce such efforts for Gaussian mean-field VI. Our algorithms
can be implemented within the Adam optimizer by perturbing the network weights
during gradient evaluations, and uncertainty estimates can be cheaply obtained
by using the vector that adapts the learning rate. This requires lower memory,
computation, and implementation effort than existing VI methods, while
obtaining uncertainty estimates of comparable quality. Our empirical results
confirm this and further suggest that the weight-perturbation in our algorithm
could be useful for exploration in reinforcement learning and stochastic
optimization.
| 0 | 0 | 0 | 1 | 0 | 0 |
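The abstract's key implementation claim is that the variational update can ride inside Adam by perturbing the network weights during gradient evaluation and reading uncertainty off the learning-rate-adapting vector. The following numpy sketch is schematic only: the noise scaling 1/(N*(v + prior_prec)) and all names are assumptions for illustration, not the paper's precise update rule.

```python
import numpy as np

def vadam_step(w, grad_fn, m, v, t, N, lr=1e-3, b1=0.9, b2=0.999,
               prior_prec=1.0, eps=1e-8, rng=np.random.default_rng(0)):
    """One schematic weight-perturbation step in an Adam-style loop.

    Sketch of the idea: sample weights around w with Gaussian noise
    whose variance is read off the second-moment vector v (so the
    optimizer state doubles as a variational posterior variance),
    evaluate the gradient there, then apply the usual Adam update.
    """
    sigma = 1.0 / np.sqrt(N * (v + prior_prec))   # assumed noise scale
    g = grad_fn(w + sigma * rng.standard_normal(w.shape))
    m = b1 * m + (1 - b1) * g                     # first-moment EMA
    v = b2 * v + (1 - b2) * g ** 2                # second-moment EMA
    m_hat = m / (1 - b1 ** t)                     # bias corrections
    v_hat = v / (1 - b2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v
```

The appeal of this structure is that an existing Adam training loop needs only the one extra perturbation line, which is why the paper argues the memory and implementation overhead stays low.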
Improving average ranking precision in user searches for biomedical research datasets | Availability of research datasets is a keystone of health and life science
study reproducibility and scientific progress. Due to the heterogeneity and
complexity of these data, a main challenge to be overcome by research data
management systems is to provide users with the best answers for their search
queries. In the context of the 2016 bioCADDIE Dataset Retrieval Challenge, we
investigate a novel ranking pipeline to improve the search of datasets used in
biomedical experiments. Our system comprises a query expansion model based on
word embeddings, a similarity measure algorithm that takes into consideration
the relevance of the query terms, and a dataset categorisation method that
boosts the rank of datasets matching query constraints. The system was
evaluated using a corpus with 800k datasets and 21 annotated user queries. Our
system provides competitive results when compared to the other challenge
participants. In the official run, it achieved the highest infAP among the
participants, being +22.3% higher than the median infAP of the participants'
best submissions. Overall, it ranks in the top 2 if an aggregated metric using
the best official measures per participant is considered. The query expansion
method showed a positive impact on the system's performance, increasing our
baseline by up to +5.0% and +3.4% for the infAP and infNDCG metrics, respectively.
Our similarity measure algorithm seems to be robust, in particular compared to
the Divergence From Randomness framework, having smaller performance variations
under different training conditions. Finally, the result categorization did not
have significant impact on the system's performance. We believe that our
solution could be used to enhance biomedical dataset management systems. In
particular, the use of data-driven query expansion methods could be an
alternative to the complexity of biomedical terminologies.
| 1 | 0 | 0 | 0 | 0 | 0 |
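The first component of the ranking pipeline is query expansion based on word embeddings. A minimal sketch of that general technique appears below, assuming a dict of unit-norm term vectors (e.g., trained on a biomedical corpus); the function name, the neighbour cut-off `k`, and the duplicate handling are illustrative choices, not the system's actual configuration.

```python
import numpy as np

def expand_query(query_terms, embeddings, k=3):
    """Expand a query with its nearest neighbours in embedding space.

    `embeddings` is assumed to map terms to unit-norm numpy vectors,
    so a dot product is a cosine similarity.
    """
    vocab = list(embeddings)
    M = np.stack([embeddings[t] for t in vocab])  # rows are unit norm
    expanded = list(query_terms)
    for term in query_terms:
        if term not in embeddings:
            continue
        sims = M @ embeddings[term]               # cosine similarities
        for i in np.argsort(-sims)[: k + 1]:      # top-k plus self
            if vocab[i] != term and vocab[i] not in expanded:
                expanded.append(vocab[i])
    return expanded
```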
Adversarial Attacks on Neural Networks for Graph Data | Deep learning models for graphs have achieved strong performance for the task
of node classification. Despite their proliferation, currently there is no
study of their robustness to adversarial attacks. Yet, in domains where they
are likely to be used, e.g. the web, adversaries are common. Can deep learning
models for graphs be easily fooled? In this work, we introduce the first study
of adversarial attacks on attributed graphs, specifically focusing on models
exploiting ideas of graph convolutions. In addition to attacks at test time, we
tackle the more challenging class of poisoning/causative attacks, which focus
on the training phase of a machine learning model. We generate adversarial
perturbations targeting the node's features and the graph structure, thus
taking the dependencies between instances into account. Moreover, we ensure that
the perturbations remain unnoticeable by preserving important data
characteristics. To cope with the underlying discrete domain, we propose an
efficient algorithm, Nettack, exploiting incremental computations. Our
experimental study shows that the accuracy of node classification significantly
drops even when performing only a few perturbations. Even more, our attacks are
transferable: the learned attacks generalize to other state-of-the-art node
classification models and unsupervised approaches, and likewise are successful
even when only limited knowledge about the graph is given.
| 0 | 0 | 0 | 1 | 0 | 0 |
Electromagnetic energy, momentum and forces in a dielectric medium with losses | From the energy-momentum tensors of the electromagnetic field and the
mechanical energy-momentum, the equations of energy conservation and of the
balance of electromagnetic and mechanical forces are obtained. The equation for
the Abraham force in a dielectric medium with losses is also derived.
| 0 | 1 | 0 | 0 | 0 | 0 |
Thermotronics: toward nanocircuits to manage radiative heat flux | The control of electric currents in solids is at the origin of the modern
electronics revolution, which has driven our daily life since the second half of
the 20th century. Surprisingly, to date, there is no thermal analog for the
control of heat flux. Here, we summarize the latest developments carried out in
this direction to control heat exchanges by radiation, both in the near and far
field, in complex architecture networks.
| 0 | 1 | 0 | 0 | 0 | 0 |
Multiplication and Presence of Shielding Material from Time-Correlated Pulse-Height Measurements of Subcritical Plutonium Assemblies | We present the results from the first measurements of the Time-Correlated
Pulse-Height (TCPH) distributions from a 4.5 kg sphere of $\alpha$-phase
weapons-grade plutonium metal in five configurations: bare, reflected by 1.27
cm and 2.54 cm of tungsten, and 2.54 cm and 7.62 cm of polyethylene. A new
method for characterizing source multiplication and shielding configuration is
also demonstrated. The method relies on solving for the underlying fission
chain timing distribution that drives the spreading of the measured TCPH
distribution. We found that a gamma distribution fits the fission chain timing
distribution well and that the fit parameters correlate with both
multiplication (rate parameter) and shielding material types (shape parameter).
The source-to-detector distance was another free parameter that we were able to
optimize, and it proved to be the best constrained parameter. MCNPX-PoliMi
simulations were used to complement the measurements and help illustrate trends
in these parameters and their relation to multiplication and the amount and
type of material coupled to the subcritical assembly.
| 0 | 1 | 0 | 0 | 0 | 0 |
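The characterization method fits a gamma distribution to the fission chain timing distribution, with the rate parameter tracking multiplication and the shape parameter tracking shielding type. The sketch below shows the fitting step with scipy on synthetic stand-in data; the sample values and the choice to pin the location at zero are assumptions for illustration, not the experiment's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Stand-in data: in the experiment this would be the inferred fission
# chain timing distribution; here we draw synthetic samples instead.
timing_samples = rng.gamma(shape=2.5, scale=4.0, size=5000)  # illustrative

# Fit a gamma distribution with the location pinned at zero; the shape
# parameter tracks shielding type and the rate (1/scale) tracks
# multiplication, following the trends described in the abstract.
shape, loc, scale = stats.gamma.fit(timing_samples, floc=0)
print(f"shape={shape:.2f}, rate={1/scale:.3f}")
```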
General Refraction Problems with Phase Discontinuity | This paper provides a mathematical approach to study metasurfaces in non-flat
geometries. Analytical conditions between the curvature of the surface and the
set of refracted directions are introduced to guarantee the existence of phase
discontinuities. The approach covers both the near and far field cases. A
starting point is the formulation of a vector Snell law in the presence of
abrupt discontinuities at the interfaces.
| 0 | 1 | 1 | 0 | 0 | 0 |
An Efficiently Searchable Encrypted Data Structure for Range Queries | At CCS 2015, Naveed et al. presented the first attacks on efficiently searchable
encryption, such as deterministic and order-preserving encryption. These
plaintext guessing attacks have been further improved in subsequent work, e.g.
by Grubbs et al. in 2016. Such cryptanalysis is crucially important to sharpen
our understanding of the implications of security models. In this paper we
present an efficiently searchable, encrypted data structure that is provably
secure against these and even more powerful chosen plaintext attacks. Our data
structure supports logarithmic-time search with linear space complexity. The
indices of our data structure can be used to search by standard comparisons and
hence allow easy retrofitting to existing database management systems. We
implemented our scheme and show that its search time overhead is only 10
milliseconds compared to non-secure search.
| 1 | 0 | 0 | 0 | 0 | 0 |
Hyperfine state entanglement of spinor BEC and scattering atom | Condensate of spin-1 atoms frozen in a unique spatial mode may possess large
internal degrees of freedom. The scattering amplitudes of polarized cold atoms
scattered by the condensate are obtained with the method of fractional
parentage coefficients that treats the spin degrees of freedom rigorously.
Channels with scattering cross sections enhanced by the square of the atom
number of the condensate are found. Entanglement between the condensate and the
propagating atom can be established by the scattering. The entanglement entropy
is analytically obtained for arbitrary initial states. Our results also give a
hint for the establishment of quantum thermal ensembles in the hyperfine space.
| 0 | 1 | 0 | 0 | 0 | 0 |
Making Neural Programming Architectures Generalize via Recursion | Empirically, neural networks that attempt to learn programs from data have
exhibited poor generalizability. Moreover, it has traditionally been difficult
to reason about the behavior of these models beyond a certain level of input
complexity. In order to address these issues, we propose augmenting neural
architectures with a key abstraction: recursion. As an application, we
implement recursion in the Neural Programmer-Interpreter framework on four
tasks: grade-school addition, bubble sort, topological sort, and quicksort. We
demonstrate superior generalizability and interpretability with small amounts
of training data. Recursion divides the problem into smaller pieces and
drastically reduces the domain of each neural network component, making it
tractable to prove guarantees about the overall system's behavior. Our
experience suggests that in order for neural architectures to robustly learn
program semantics, it is necessary to incorporate a concept like recursion.
| 1 | 0 | 0 | 0 | 0 | 0 |
Bagged Empirical Null p-values: A Method to Account for Model Uncertainty in Large Scale Inference | When conducting large scale inference, such as genome-wide association
studies or image analysis, nominal $p$-values are often adjusted to improve
control over the family-wise error rate (FWER). When the majority of tests are
null, procedures controlling the false discovery rate (Fdr) can be improved by
replacing the theoretical global null with its empirical estimate. However,
these adjustment procedures remain sensitive to the working model
assumption. Here we propose two key ideas to improve inference in this space.
First, we propose $p$-values that are standardized to the empirical null
distribution (instead of the theoretical null). Second, we propose model
averaging $p$-values by bootstrap aggregation (Bagging) to account for model
uncertainty and selection procedures. The combination of these two key ideas
yields bagged empirical null $p$-values (BEN $p$-values) that often
dramatically alter the rank ordering of significant findings. Moreover, we find
that a multidimensional selection criteria based on BEN $p$-values and bagged
model fit statistics is more likely to yield reproducible findings. A
re-analysis of the famous Golub Leukemia data is presented to illustrate these
ideas. We uncovered new findings in these data, not detected previously, that
are backed by published bench work pre-dating the Golub experiment. A
pseudo-simulation using the leukemia data is also presented to explore the
stability of this approach under broader conditions, and illustrates the
superiority of the BEN $p$-values compared to the other approaches.
| 0 | 0 | 0 | 1 | 0 | 0 |
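The two key ideas, standardizing p-values to an empirical null and bagging them over bootstrap resamples, can be sketched compactly. The code below is one simplified reading, not the authors' procedure: a robust median/MAD empirical-null fit and a plain two-sample t-statistic stand in for the full model-fitting and selection step.

```python
import numpy as np
from scipy import stats

def empirical_null_pvalues(z):
    """Standardize z-scores against an empirical null and return p-values.

    Sketch: the null centre and spread are estimated robustly from the
    bulk of the scores (median and normal-consistent MAD), an assumed
    stand-in for a full empirical-null fit.
    """
    mu0 = np.median(z)
    sd0 = stats.median_abs_deviation(z, scale="normal")
    return 2 * stats.norm.sf(np.abs((z - mu0) / sd0))

def ben_pvalues(X, y, n_bags=200, rng=np.random.default_rng(0)):
    """Bagged empirical-null p-values for two-group feature screening.

    For each bootstrap resample, compute per-feature statistics (here
    a two-sample t, an illustrative choice), convert them against the
    empirical null, and average over bags. Assumes both classes appear
    in every resample.
    """
    n, p = X.shape
    acc = np.zeros(p)
    for _ in range(n_bags):
        idx = rng.integers(0, n, n)
        Xb, yb = X[idx], y[idx]
        t = stats.ttest_ind(Xb[yb == 1], Xb[yb == 0], axis=0).statistic
        acc += empirical_null_pvalues(t)
    return acc / n_bags
```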
Identifying Vessel Branching from Fluid Stresses on Microscopic Robots | Objects moving in fluids experience patterns of stress on their surfaces
determined by the geometry of nearby boundaries. Flows at low Reynolds number,
as occur in microscopic vessels such as capillaries in biological tissues, have
relatively simple relations between stresses and nearby vessel geometry. Using
these relations, this paper shows how a microscopic robot moving with such
flows can use changes in stress on its surface to identify when it encounters
vessel branches.
| 1 | 0 | 0 | 0 | 0 | 0 |
Impact of Optimal Storage Allocation on Price Volatility in Electricity Markets | Recent studies show that the fast growing expansion of wind power generation
may lead to extremely high levels of price volatility in wholesale electricity
markets. Storage technologies, regardless of their specific forms, e.g.
pumped-storage hydro, large-scale or distributed batteries, are capable of
alleviating the extreme price volatility levels due to their energy usage time
shifting, fast-ramping and price arbitrage capabilities. In this paper, we
propose a stochastic bi-level optimization model to find the optimal nodal
storage capacities required to achieve a certain price volatility level in a
highly volatile electricity market. The decision on storage capacities is made
in the upper level problem and the operation of strategic/regulated generation,
storage and transmission players is modeled at the lower level problem using an
extended Cournot-based stochastic game. The South Australia (SA) electricity
market, which has recently experienced high levels of price volatility, is
considered as the case study for the proposed storage allocation framework. Our
numerical results indicate that an 80% price volatility reduction in the SA
electricity market can be achieved by installing either 340 MWh of regulated
storage or 420 MWh of strategic storage. In other words, regulated storage firms
are more efficient in reducing the price volatility than strategic storage
firms.
| 0 | 0 | 1 | 0 | 0 | 0 |
An independent axiomatisation for free short-circuit logic | Short-circuit evaluation denotes the semantics of propositional connectives
in which the second argument is evaluated only if the first argument does not
suffice to determine the value of the expression. Free short-circuit logic is
the equational logic in which compound statements are evaluated from left to
right, while atomic evaluations are not memorised throughout the evaluation,
i.e., evaluations of distinct occurrences of an atom in a compound statement
may yield different truth values. We provide a simple semantics for free SCL
and an independent axiomatisation. Finally, we discuss evaluation strategies,
some other SCLs, and side effects.
| 1 | 0 | 1 | 0 | 0 | 0 |
A Note on a Communication Game | We describe a communication game, and a conjecture about this game, whose
proof would imply the well-known Sensitivity Conjecture asserting a polynomial
relation between sensitivity and block sensitivity for Boolean functions. The
author defined this game and observed the connection in Dec. 2013 - Jan. 2014.
The game and connection were independently discovered by Gilmer, Koucký, and
Saks, who also established further results about the game (not proved by us)
and published their results in ITCS '15 [GKS15].
This note records our independent work, including some observations that did
not appear in [GKS15]. Namely, the main conjecture about this communication
game would imply not only the Sensitivity Conjecture, but also a stronger
hypothesis raised by Chung, Füredi, Graham, and Seymour [CFGS88]; and,
another related conjecture we pose about a "query-bounded" variant of our
communication game would suffice to answer a question of Aaronson, Ambainis,
Balodis, and Bavarian [AABB14] about the query complexity of the "Weak Parity"
problem---a question whose resolution was previously shown by [AABB14] to
follow from a proof of the Chung et al. hypothesis.
| 1 | 0 | 0 | 0 | 0 | 0 |
Iteration of Quadratic Polynomials Over Finite Fields | For a finite field of odd cardinality $q$, we show that the sequence of
iterates of $aX^2+c$, starting at $0$, always recurs after $O(q/\log\log q)$
steps. For $X^2+1$ the same is true for any starting value. We suggest that the
traditional "Birthday Paradox" model is inappropriate for iterates of $X^3+c$,
when $q$ is 2 mod 3.
| 0 | 0 | 1 | 0 | 0 | 0 |
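The recurrence time of the iteration can be measured directly for small fields. The sketch below iterates x -> a*x^2 + c mod q from 0 until the first repeated value and compares the result with the birthday-paradox scale sqrt(q); the modulus chosen is an arbitrary illustrative prime.

```python
def rho_length(a, c, q, x0=0):
    """Tail + cycle length of iterating x -> a*x^2 + c mod q from x0.

    Direct sketch: record first-visit times and stop at the first
    repeated value, giving the total number of steps before the
    sequence recurs.
    """
    seen, x, t = set(), x0 % q, 0
    while x not in seen:
        seen.add(x)
        x = (a * x * x + c) % q
        t += 1
    return t

q = 10007  # a prime, chosen only for illustration
print(rho_length(1, 1, q), q ** 0.5)  # recurrence time vs. sqrt(q)
```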
Constraints on the Growth and Spin of the Supermassive Black Hole in M32 From High Cadence Visible Light Observations | We present 1-second cadence observations of M32 (NGC221) with the CHIMERA
instrument at the Hale 200-inch telescope of the Palomar Observatory. Using
field stars as a baseline for relative photometry, we are able to construct a
light curve of the nucleus in the g-prime and r-prime bands with 1-sigma = 36
milli-mag photometric stability. We derive a temporal power spectrum for the
nucleus and find no evidence for a time-variable signal above the noise as
would be expected if the nuclear black hole were accreting gas. Thus, we are
unable to constrain the spin of the black hole although future work will use
this powerful instrument to target more actively accreting black holes. Given
the black hole mass of (2.5+/-0.5)*10^6 Msun inferred from stellar kinematics,
the absence of a contribution from a nuclear time-variable signal places an
upper limit on the accretion rate which is 4.6*10^{-8} of the Eddington rate, a
factor of two more stringent than past upper limits from HST. The low mass of
the black hole despite the high stellar density suggests that the gas liberated
by stellar interactions was supplied primarily at early cosmic times, when the
low-mass black hole had a small Eddington luminosity. This is at least partly driven by
a top-heavy stellar initial mass function at early cosmic times which is an
efficient producer of stellar mass black holes. The implication is that
supermassive black holes likely arise from seeds formed through the coalescence
of 3-100 Msun mass black holes that then accrete gas produced through stellar
interaction processes.
| 0 | 1 | 0 | 0 | 0 | 0 |
Sampling for Approximate Bipartite Network Projection | Bipartite networks manifest as a stream of edges that represent transactions,
e.g., purchases by retail customers. Many machine learning applications employ
neighborhood-based measures to characterize the similarity among the nodes,
such as the pairwise number of common neighbors (CN) and related metrics. While
the number of node pairs that share neighbors is potentially enormous, only a
relatively small proportion of them have many common neighbors. This motivates
finding a weighted sampling approach to preferentially sample these node pairs.
This paper presents a new sampling algorithm that provides a fixed-size,
unbiased estimate of the similarity matrix resulting from a bipartite graph
stream projection. The algorithm has two components. First, it maintains a
reservoir of sampled bipartite edges with sampling weights that favor selection
of high similarity nodes. Second, arriving edges generate a stream of
\textsl{similarity updates} based on their adjacency with the current sample.
These updates are aggregated in a second reservoir sample-based stream
aggregator to yield the final unbiased estimate. Experiments on real world
graphs show that a 10% sample at each stage yields estimates of high similarity
edges with weighted relative errors of about 1%.
| 1 | 0 | 1 | 0 | 0 | 0 |
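The algorithm's first component maintains a fixed-size weighted reservoir of edges. The sketch below shows one standard way to do this, the Efraimidis-Spirakis keying trick (key = u^(1/w)); it is an illustrative stand-in for the paper's sampling scheme, with weights assumed to favor edges likely to involve high-similarity node pairs.

```python
import heapq
import random

def weighted_reservoir(stream, k, rng=random.Random(0)):
    """Fixed-size weighted sample from a stream of (item, weight) pairs.

    Sketch: each arriving item gets the key u**(1/w) for uniform u;
    keeping the k largest keys yields a sample biased toward heavy
    items. The index i breaks ties so items are never compared.
    """
    heap = []  # min-heap of (key, index, item)
    for i, (item, w) in enumerate(stream):
        key = rng.random() ** (1.0 / w)
        if len(heap) < k:
            heapq.heappush(heap, (key, i, item))
        elif key > heap[0][0]:
            heapq.heapreplace(heap, (key, i, item))
    return [item for _, _, item in heap]

# e.g., edges weighted by an (assumed) similarity-based score:
edges = [(("u1", "v3"), 5.0), (("u2", "v3"), 1.0), (("u1", "v7"), 9.0)]
print(weighted_reservoir(edges, k=2))
```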
Navigate, Understand, Communicate: How Developers Locate Performance Bugs | Background: Performance bugs can lead to severe issues regarding computation
efficiency, power consumption, and user experience. Locating these bugs is a
difficult task because developers have to judge for every costly operation
whether runtime is consumed necessarily or unnecessarily. Objective: We wanted
to investigate how developers, when locating performance bugs, navigate through
the code, understand the program, and communicate the detected issues. Method:
We performed a qualitative user study observing twelve developers trying to fix
documented performance bugs in two open source projects. The developers worked
with a profiling and analysis tool that visually depicts runtime information in
a list representation and embedded into the source code view. Results: We
identified typical navigation strategies developers used for pinpointing the
bug, for instance, following method calls based on runtime consumption. The
integration of visualization and code helped developers to understand the bug.
Sketches visualizing data structures and algorithms turned out to be valuable
for externalizing and communicating the comprehension process for complex bugs.
Conclusion: Fixing a performance bug is a code comprehension and navigation
problem. Flexible navigation features based on executed methods and a close
integration of source code and performance information support the process.
| 1 | 0 | 0 | 0 | 0 | 0 |
MEXIT: Maximal un-coupling times for stochastic processes | Classical coupling constructions arrange for copies of the \emph{same} Markov
process started at two \emph{different} initial states to become equal as soon
as possible. In this paper, we consider an alternative coupling framework in
which one seeks to arrange for two \emph{different} Markov (or other
stochastic) processes to remain equal for as long as possible, when started in
the \emph{same} state. We refer to this "un-coupling" or "maximal agreement"
construction as \emph{MEXIT}, standing for "maximal exit". After highlighting
the importance of un-coupling arguments in a few key statistical and
probabilistic settings, we develop an explicit MEXIT construction for
stochastic processes in discrete time with countable state-space. This
construction is generalized to random processes on general state-space running
in continuous time, and then exemplified by discussion of MEXIT for Brownian
motions with two different constant drifts.
| 0 | 0 | 1 | 0 | 0 | 0 |
Learning Effective Changes for Software Projects | The primary motivation of much of software analytics is decision making. How
to make these decisions? Should one make decisions based on lessons that arise
from within a particular project? Or should one generate these decisions from
across multiple projects? This work is an attempt to answer these questions.
Our work was motivated by a realization that much of the current generation
software analytics tools focus primarily on prediction. Indeed prediction is a
useful task, but it is usually followed by "planning" about what actions need
to be taken. This research seeks to address the planning task by seeking
methods that support actionable analytics that offer clear guidance on what to
do. Specifically, we propose XTREE and BELLTREE algorithms for generating a set
of actionable plans within and across projects. Each of these plans, if
followed, will improve the quality of the software project.
| 1 | 0 | 0 | 0 | 0 | 0 |
Aggregating multiple types of complex data in stock market prediction: A model-independent framework | The increasing richness in the volume, and especially the types, of data in the
financial domain provides unprecedented opportunities to understand the stock
market more comprehensively and makes the price prediction more accurate than
before. However, they also bring challenges to classic statistic approaches
since those models might be constrained to a certain type of data. Aiming at
aggregating differently sourced information and offering type-free capability
to existing models, a framework for predicting stock market of scenarios with
mixed data, including scalar data, compositional data (pie-like) and functional
data (curve-like), is established. The presented framework is
model-independent, as it serves like an interface to multiple types of data and
can be combined with various prediction models. It is proved to be
effective through numerical simulations. Regarding price prediction, we
incorporate the trading volume (scalar data), intraday return series
(functional data), and investors' emotions from social media (compositional
data) through the framework to competently forecast whether the market goes up
or down at the opening of the next day. The strong explanatory power of the
framework is further demonstrated. Specifically, it is found that the intraday
returns impact the following opening prices differently between bearish and
bullish markets. Moreover, it is not at the beginning of a bearish market but in
the subsequent period that the investors' "fear" comes to be indicative.
The framework would help extend existing prediction models easily to scenarios
with multiple types of data and shed light on a more systemic understanding of
the stock market.
| 0 | 0 | 0 | 1 | 0 | 1 |
Deep Multi-View Spatial-Temporal Network for Taxi Demand Prediction | Taxi demand prediction is an important building block to enabling intelligent
transportation systems in a smart city. An accurate prediction model can help
the city pre-allocate resources to meet travel demand and to reduce the number
of empty taxis on streets, which waste energy and worsen traffic congestion. With the
increasing popularity of taxi requesting services such as Uber and Didi Chuxing
(in China), we are able to collect large-scale taxi demand data continuously.
How to utilize such big data to improve the demand prediction is an interesting
and critical real-world problem. Traditional demand prediction methods mostly
rely on time series forecasting techniques, which fail to model the complex
non-linear spatial and temporal relations. Recent advances in deep learning
have shown superior performance on traditionally challenging tasks such as
image classification by learning the complex features and correlations from
large-scale data. This breakthrough has inspired researchers to explore deep
learning techniques on traffic prediction problems. However, existing methods
on traffic prediction have only considered spatial relation (e.g., using CNN)
or temporal relation (e.g., using LSTM) independently. We propose a Deep
Multi-View Spatial-Temporal Network (DMVST-Net) framework to model both spatial
and temporal relations. Specifically, our proposed model consists of three
views: temporal view (modeling correlations between future demand values with
near time points via LSTM), spatial view (modeling local spatial correlation
via local CNN), and semantic view (modeling correlations among regions sharing
similar temporal patterns). Experiments on large-scale real taxi demand data
demonstrate the effectiveness of our approach over state-of-the-art methods.
| 0 | 0 | 0 | 1 | 0 | 0 |
Interstellar communication. VII. Benchmarking inscribed matter probes | We have explored the optimal frequency of interstellar photon communications
and benchmarked other particles as information carriers in previous papers of
this series. We now compare the latency and bandwidth of sending probes with
inscribed matter. Durability requirements such as shields against dust and
radiation, as well as data duplication, add negligible weight overhead at
velocities <0.2c. Probes may arrive in full, while most of a photon beam is
lost to diffraction. Probes can be more energy efficient per bit, and can have
higher bandwidth, compared to classical communication, unless a photon receiver
is placed in a stellar gravitational lens. The probe's advantage dominates by
an order of magnitude for long distances (kpc) and low velocities (<0.1c) at the
cost of higher latency.
| 0 | 1 | 0 | 0 | 0 | 0 |
A description length approach to determining the number of k-means clusters | We present an asymptotic criterion to determine the optimal number of
clusters in k-means. We consider k-means as data compression, and propose to
adopt the number of clusters that minimizes the estimated description length
after compression. Here we report two types of compression ratio based on two
ways to quantify the description length of data after compression. This
approach further offers a way to evaluate whether clusters obtained with
k-means have a hierarchical structure by examining whether multi-stage
compression can further reduce the description length. We applied our criteria
for determining the number of clusters to synthetic data and to empirical
neuroimaging data, to observe the behavior of the criteria across different
types of data set and the suitability of the two types of criteria for
different datasets. We found that our method can offer reasonable clustering
results that are useful for dimension reduction. While our numerical results
revealed a dependency of our criteria on various aspects of the dataset, such
as the dimensionality, the description length approach proposed here provides
useful guidance for determining the number of clusters in a principled manner when
underlying properties of the data are unknown and only inferred from
observation of data.
| 1 | 0 | 0 | 1 | 0 | 0 |
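The criterion, choosing the k that minimizes estimated description length after compression, can be sketched with a simple two-part code. The instantiation below (fixed-precision centroids, log2(k) bits per label, Gaussian residual coding) is one plausible reading for illustration, not the paper's exact compression-ratio definitions.

```python
import numpy as np
from sklearn.cluster import KMeans

def description_length(X, k, bits_per_param=32):
    """Approximate two-part code length (in bits) for k-means on X.

    Sketch: bits for the centroid coordinates, bits for the cluster
    labels, plus Gaussian coding of the within-cluster residuals.
    """
    n, d = X.shape
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    var = km.inertia_ / (n * d) + 1e-12          # residual variance
    model_bits = k * d * bits_per_param          # centroid coordinates
    label_bits = n * np.log2(k) if k > 1 else 0.0
    resid_bits = 0.5 * n * d * np.log2(2 * np.pi * np.e * var)
    return model_bits + label_bits + resid_bits

X = np.random.default_rng(0).normal(size=(300, 2))
print(min(range(1, 8), key=lambda k: description_length(X, k)))
```

Applying the same score to a second-stage clustering within each cluster is the multi-stage compression check the abstract uses to probe for hierarchical structure.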
Complementary legs and rational balls | In this note we study the Seifert rational homology spheres with two
complementary legs, i.e. with a pair of invariants whose fractions add up to
one. We give a complete classification of the Seifert manifolds with 3
exceptional fibers and two complementary legs which bound rational homology
balls. The result translates into a statement on the sliceness of some Montesinos
knots.
| 0 | 0 | 1 | 0 | 0 | 0 |
Gravitational Waves from Stellar Black Hole Binaries and the Impact on Nearby Sun-like Stars | We investigate the impact of resonant gravitational waves on quadrupole
acoustic modes of Sun-like stars located nearby stellar black hole binary
systems (such as GW150914 and GW151226). We find that the stimulation of the
low-overtone modes by gravitational radiation can lead to sizeable photometric
amplitude variations, much larger than the predictions for amplitudes driven by
turbulent convection, which in turn are consistent with the photometric
amplitudes observed in most Sun-like stars. For accurate stellar evolution
models, using up-to-date stellar physics, we predict photometric amplitude
variations of $1$ -- $10^3$ ppm for a solar mass star located at a distance
between 1 au and 10 au from the black hole binary, and belonging to the same
multi-star system. The observation of such a phenomenon will be within the
reach of the Plato mission because the telescope will observe several portions of
the Milky Way, many of which are regions of high stellar density with a
substantial mixed population of Sun-like stars and black hole binaries.
| 0 | 1 | 0 | 0 | 0 | 0 |
Search for Exoplanets around Northern Circumpolar Stars- II. The Detection of Radial Velocity Variations in M Giant Stars HD 36384, HD 52030, and HD 208742 | We present the detection of long-period RV variations in HD 36384, HD 52030,
and HD 208742 by using the high-resolution, fiber-fed Bohyunsan Observatory
Echelle Spectrograph (BOES) for the precise radial velocity (RV) survey of
about 200 northern circumpolar stars. Analyses of RV data, chromospheric
activity indicators, and bisector variations spanning about five years suggest
that the RV variations are compatible with planet or brown dwarf companions in
Keplerian motion. However, HD 36384 shows photometric variations with a period
very close to that of RV variations as well as amplitude variations in the
weighted wavelet Z-transform (WWZ) analysis, which argues that the RV
variations in HD~36384 arise from stellar pulsations. Assuming that the
companion hypothesis is correct, HD~52030 hosts a companion with minimum mass
13.3 M_Jup orbiting in 484 days at a distance of 1.2 AU. HD~208742 hosts a
companion of 14.0 M_Jup at 1.5 AU with a period of 602 days. All stars are
located at the asymptotic giant branch (AGB) stage on the H-R diagram, after
having undergone the helium flash and left the giant clump. With stellar radii of 53.0
R_Sun and 57.2 R_Sun for HD 52030 and HD 208742, respectively, these stars may
be the largest yet, in terms of stellar radius, found to host sub-stellar
companions. However, given possible RV amplitude variations and the fact that
these are highly evolved stars, the planet hypothesis is not yet certain.
| 0 | 1 | 0 | 0 | 0 | 0 |
Measuring Item Similarity in Introductory Programming: Python and Robot Programming Case Studies | A personalized learning system needs a large pool of items for learners to
solve. When working with a large pool of items, it is useful to measure the
similarity of items. We outline a general approach to measuring the similarity
of items and discuss specific measures for items used in introductory
programming. Evaluation of quality of similarity measures is difficult. To this
end, we propose an evaluation approach utilizing three levels of abstraction.
We illustrate our approach to measuring similarity and provide evaluation using
items from three diverse programming environments.
| 0 | 0 | 0 | 1 | 0 | 0 |
Term Models of Horn Clauses over Rational Pavelka Predicate Logic | This paper is a contribution to the study of the universal Horn fragment of
predicate fuzzy logics, focusing on the proof of the existence of free models
of theories of Horn clauses over Rational Pavelka predicate logic. We define
the notion of a term structure associated to every consistent theory T over
Rational Pavelka predicate logic and we prove that the term models of T are
free on the class of all models of T. Finally, it is shown that if T is a set
of Horn clauses, the term structure associated to T is a model of T.
| 0 | 0 | 1 | 0 | 0 | 0 |
Galaxy Rotation and Supermassive Black Hole Binary Evolution | Supermassive black hole (SMBH) binaries residing at the cores of merging
galaxies have recently been found to be strongly affected by the rotation of their
host galaxies. The highly eccentric orbits that form when the host is
counterrotating emit strong bursts of gravitational waves that propel rapid
SMBH binary coalescence. Most prior work, however, focused on planar orbits and
a uniform rotation profile, an unlikely interaction configuration. Yet the
coupling between rotation and SMBH binary evolution appears to be such a strong
dynamical process that it warrants further investigation. This study uses
direct N-body simulations to isolate the effect of galaxy rotation in more
realistic interactions. In particular, we systematically vary the SMBH orbital
plane with respect to the galaxy rotation axis, the radial extent of the
rotating component, and the initial eccentricity of the SMBH binary orbit. We
find that the initial orbital plane orientation and eccentricity alone can
change the inspiral time by an order of magnitude. Because SMBH binary inspiral
and merger is such a loud gravitational wave source, these studies are critical
for the future gravitational wave detector, LISA, an ESA/NASA mission currently
set to launch by 2034.
| 0 | 1 | 0 | 0 | 0 | 0 |
A Formal Approach to Exploiting Multi-Stage Attacks based on File-System Vulnerabilities of Web Applications (Extended Version) | Web applications require access to the file-system for many different tasks.
When analyzing the security of a web application, security analysts should
thus consider the impact that file-system operations have on the security of
the whole application. Moreover, the analysis should take into consideration
how file-system vulnerabilities might interact with other vulnerabilities,
leading an attacker to break into the web application. In this paper, we first
propose a classification of file-system vulnerabilities, and then, based on
this classification, we present a formal approach that allows one to exploit
file-system vulnerabilities. We give a formal representation of web
applications, databases and file-systems, and show how to reason about
file-system vulnerabilities. We also show how to combine file-system
vulnerabilities and SQL-Injection vulnerabilities for the identification of
complex, multi-stage attacks. We have developed an automatic tool that
implements our approach and we show its efficiency by discussing several
real-world case studies, which demonstrate that our tool can generate, and
exploit, complex attacks that, to the best of our knowledge, no other
state-of-the-art tool for the security of web applications can find.
| 1 | 0 | 0 | 0 | 0 | 0 |
Bit Fusion: Bit-Level Dynamically Composable Architecture for Accelerating Deep Neural Networks | Fully realizing the potential of acceleration for Deep Neural Networks (DNNs)
requires understanding and leveraging algorithmic properties. This paper builds
upon the algorithmic insight that bitwidth of operations in DNNs can be reduced
without compromising their classification accuracy. However, to prevent
accuracy loss, the bitwidth varies significantly across DNNs and it may even be
adjusted for each layer. Thus, a fixed-bitwidth accelerator would either offer
limited benefits to accommodate the worst-case bitwidth requirements, or lead
to a degradation in final accuracy. To alleviate these deficiencies, this work
introduces dynamic bit-level fusion/decomposition as a new dimension in the
design of DNN accelerators. We explore this dimension by designing Bit Fusion,
a bit-flexible accelerator that comprises an array of bit-level processing
elements which dynamically fuse to match the bitwidth of individual DNN layers.
This flexibility in the architecture enables minimizing the computation and the
communication at the finest granularity possible with no loss in accuracy. We
evaluate the benefits of Bit Fusion using eight real-world feed-forward and
recurrent DNNs. The proposed microarchitecture is implemented in Verilog and
synthesized in 45 nm technology. Using the synthesis results and cycle accurate
simulation, we compare the benefits of Bit Fusion to two state-of-the-art DNN
accelerators, Eyeriss and Stripes. In the same area, frequency, and process
technology, Bit Fusion offers 3.9x speedup and 5.1x energy savings over Eyeriss.
Compared to Stripes, Bit Fusion provides 2.6x speedup and 3.9x energy reduction
at the 45 nm node when Bit Fusion's area and frequency are set to those of Stripes.
Scaling to the 16 nm GPU technology node, Bit Fusion almost matches the
performance of a 250-Watt Titan Xp, which uses 8-bit vector instructions, while
consuming merely 895 milliwatts of power.
| 1 | 0 | 0 | 0 | 0 | 0 |
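The arithmetic identity behind bit-level fusion can be sketched in a few lines: a wide multiplication decomposes exactly into shifted products of narrow bit slices, which is what lets 2-bit processing elements compose into wider multipliers on demand. This toy check is ours, not the paper's hardware description:

```python
def slices(x, width, chunk=2):
    # split x into (width // chunk) little-endian chunk-bit slices
    return [(x >> s) & (2**chunk - 1) for s in range(0, width, chunk)]

def fused_multiply(a, b, width=8, chunk=2):
    total = 0
    for i, ai in enumerate(slices(a, width, chunk)):
        for j, bj in enumerate(slices(b, width, chunk)):
            total += (ai * bj) << (chunk * (i + j))   # one 2-bit x 2-bit product
    return total

# every 8-bit product is reproduced exactly from 2-bit partial products
assert all(fused_multiply(a, b) == a * b
           for a in range(256) for b in range(256))
print("8-bit multiplies reconstructed from 2-bit slices")
```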
Multiple Access Wiretap Channel with Noiseless Feedback | The physical layer security in the up-link of the wireless communication
systems is often modeled as the multiple access wiretap channel (MAC-WT), and
recently it has received a lot of attention. In this paper, the MAC-WT is
revisited by considering the situation in which the legitimate receiver feeds its
received channel output back to the transmitters via two noiseless channels,
respectively. This model is called the MAC-WT with noiseless feedback. Inner
and outer bounds on the secrecy capacity region of this feedback model are
provided. To be specific, we first present a decode-and-forward (DF) inner
bound on the secrecy capacity region of this feedback model, and this bound is
constructed by allowing each transmitter to decode the other one's transmitted
message from the feedback, and then each transmitter uses the decoded message
to re-encode its own message, i.e., this DF inner bound allows the independent
transmitters to co-operate with each other. Then, we provide a hybrid inner
bound which is strictly larger than the DF inner bound, and it is constructed
by using the feedback as a tool not only to allow the independent transmitters
to co-operate with each other, but also to generate two secret keys
respectively shared between the legitimate receiver and the two transmitters.
Finally, we give a Sato-type outer bound on the secrecy capacity region of this
feedback model. The results of this paper are further explained via a Gaussian
example.
| 1 | 0 | 1 | 0 | 0 | 0 |
Inter-Subject Analysis: Inferring Sparse Interactions with Dense Intra-Graphs | We develop a new modeling framework for Inter-Subject Analysis (ISA). The
goal of ISA is to explore the dependency structure between different subjects
with the intra-subject dependency as a nuisance. It has important applications in
neuroscience to explore the functional connectivity between brain regions under
natural stimuli. Our framework is based on the Gaussian graphical models, under
which ISA can be converted to the problem of estimation and inference of the
inter-subject precision matrix. The main statistical challenge is that we do
not impose a sparsity constraint on the whole precision matrix; we only assume
the inter-subject part is sparse. For estimation, we propose to estimate an
alternative parameter to get around the non-sparse issue and it can achieve
asymptotic consistency even if the intra-subject dependency is dense. For
inference, we propose an "untangle and chord" procedure to de-bias our
estimator. It is valid without the sparsity assumption on the inverse Hessian
of the log-likelihood function. This inferential method is general and can be
applied to many other statistical problems, thus it is of independent
theoretical interest. Numerical experiments on both simulated and brain imaging
data validate our methods and theory.
| 0 | 0 | 1 | 1 | 0 | 0 |
Impact of surface functionalisation on the quantum coherence of nitrogen vacancy centres in nanodiamond | Nanoscale quantum probes such as the nitrogen-vacancy centre in diamond have
demonstrated remarkable sensing capabilities over the past decade as control
over the fabrication and manipulation of these systems has evolved. However, as
the size of these nanoscale quantum probes is reduced, the surface termination
of the host material begins to play a prominent role as a source of magnetic
and electric field noise. In this work, we show that borane-reduced nanodiamond
surfaces can on average double the spin relaxation time of individual
nitrogen-vacancy centres in nanodiamonds when compared to the thermally
oxidised surfaces. Using a combination of infra-red and X-ray absorption
spectroscopy techniques, we correlate the changes in quantum relaxation rates
with the conversion of sp2 carbon to C-O and C-H bonds on the diamond surface.
These findings implicate double-bonded carbon species as a dominant source of
spin noise for near surface NV centres and show that through tailored
engineering of the surface, we can improve the quantum properties and magnetic
sensitivity of these nanoscale probes.
| 0 | 1 | 0 | 0 | 0 | 0 |
Nesterov's Acceleration For Approximate Newton | Optimization plays a key role in machine learning. Recently, stochastic
second-order methods have attracted much attention due to their low
computational cost in each iteration. However, these algorithms might perform
poorly, especially when it is hard to approximate the Hessian well and
efficiently. As far as we know, there is no effective way to handle this
problem. In this paper, we resort to Nesterov's acceleration technique to
improve the convergence performance of a class of second-order methods called
approximate Newton. We show theoretically that Nesterov's acceleration
technique can improve the convergence of approximate Newton just as it does
for first-order methods. We accordingly propose an accelerated regularized
sub-sampled Newton. Our accelerated algorithm performs much better than the
original regularized sub-sampled Newton in experiments, which validates our
theory empirically. Moreover, the accelerated regularized sub-sampled Newton
achieves performance comparable to, or even better than, that of classical algorithms.
| 1 | 0 | 0 | 0 | 0 | 0 |
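A minimal sketch of the idea, assuming a least-squares objective and a simple sub-sampled Hessian; the momentum schedule, regularizer, and test problem are illustrative choices, not the paper's exact algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 20
A = rng.normal(size=(n, d)); x_true = rng.normal(size=d)
y = A @ x_true

def grad(x):                 # least-squares gradient (1/n) A^T (A x - y)
    return A.T @ (A @ x - y) / n

def subsampled_hessian(batch, lam=0.1):
    S = A[batch]             # Hessian estimate from a row sample, regularized
    return S.T @ S / len(batch) + lam * np.eye(d)

x, x_prev = np.zeros(d), np.zeros(d)
for k in range(100):
    beta = k / (k + 3)                       # standard Nesterov momentum
    v = x + beta * (x - x_prev)              # look-ahead point
    H = subsampled_hessian(rng.choice(n, 100, replace=False))
    x_prev, x = x, v - np.linalg.solve(H, grad(v))   # approximate Newton step
print("error:", np.linalg.norm(x - x_true))
```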
Coverage Centrality Maximization in Undirected Networks | Centrality metrics are among the main tools in social network analysis. Being
central in a network brings several benefits to the user: central
users are highly influential and play key roles within the network. Therefore,
the optimization problem of increasing the centrality of a network user
recently received considerable attention. Given a network and a target user
$v$, the centrality maximization problem consists in creating $k$ new links
incident to $v$ in such a way that the centrality of $v$ is maximized,
according to some centrality metric. Most of the algorithms proposed in the
literature are based on showing that a given centrality metric is monotone and
submodular with respect to link addition. However, this property does not hold
for several shortest-path based centrality metrics if the links are undirected.
In this paper we study the centrality maximization problem in undirected
networks for one of the most important shortest-path based centrality measures,
the coverage centrality. We provide several hardness and approximation results.
We first show that the problem cannot be approximated within a factor greater
than $1-1/e$, unless $P=NP$, and, under the stronger gap-ETH hypothesis, the
problem cannot be approximated within a factor better than $1/n^{o(1)}$, where
$n$ is the number of users. We then propose two greedy approximation
algorithms, and show that, by suitably combining them, we can guarantee an
approximation factor of $\Omega(1/\sqrt{n})$. We experimentally compare the
solutions provided by our approximation algorithm with optimal solutions
computed by means of an exact IP formulation. We show that our algorithm
produces solutions that are very close to the optimum.
| 1 | 0 | 0 | 0 | 0 | 0 |
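A naive reference implementation may help fix ideas, using the characterization that v covers a pair (s, t) exactly when d(s, v) + d(v, t) = d(s, t), together with a brute-force greedy augmentation; the paper's algorithms are far more scalable than this sketch:

```python
import itertools
import networkx as nx

def coverage(G, v):
    # pairs (s, t) having some shortest s-t path through v,
    # which holds exactly when d(s, v) + d(v, t) == d(s, t)
    d = dict(nx.all_pairs_shortest_path_length(G))
    inf = float("inf")
    return sum(1 for s, t in itertools.combinations(G, 2)
               if s != v != t and t in d[s]
               and d[s].get(v, inf) + d[v].get(t, inf) == d[s][t])

def greedy_augment(G, v, k):
    # brute-force greedy: repeatedly add the edge at v with the largest gain
    G = G.copy()
    for _ in range(k):
        candidates = [u for u in G if u != v and not G.has_edge(v, u)]
        if not candidates:
            break
        def gain(u):
            H = G.copy(); H.add_edge(v, u)
            return coverage(H, v)
        G.add_edge(v, max(candidates, key=gain))
    return G

G = nx.path_graph(8)
print(coverage(G, 0))                         # 0: an endpoint covers no pairs
print(coverage(greedy_augment(G, 0, 2), 0))   # coverage after 2 greedy edges
```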
Color difference makes a difference: four planet candidates around tau Ceti | The removal of noise typically correlated in time and wavelength is one of
the main challenges for using the radial velocity method to detect Earth
analogues. We analyze radial velocity data of tau Ceti and find robust evidence
for wavelength-dependent noise. We find this noise can be modeled by a
combination of moving average models and "differential radial velocities". We
apply this noise model to various radial velocity data sets for tau Ceti, and
find four periodic signals at 20.0, 49.3, 160 and 642 d which we interpret as
planets. We identify two new signals with orbital periods of 20.0 and 49.3 d
while the other two previously suspected signals around 160 and 600 d are
quantified to a higher precision. The 20.0 d candidate is independently
detected in Keck data. All planets detected in this work have minimum masses
less than 4$M_\oplus$ with the two long period ones located around the inner
and outer edges of the habitable zone, respectively. We find that the
instrumental noise gives rise to a precision limit for HARPS of around 0.2 m/s.
We also find correlation between the HARPS data and the central moments of the
spectral line profile at around the 0.5 m/s level, although these central moments
may contain both noise and signals. The signals detected in this work have
semi-amplitudes as low as 0.3 m/s, demonstrating the ability of the radial
velocity technique to detect relatively weak signals.
| 0 | 1 | 0 | 0 | 0 | 0 |
Recurrent Deep Embedding Networks for Genotype Clustering and Ethnicity Prediction | The understanding of variations in genome sequences assists us in identifying
people who are predisposed to common diseases, solving rare diseases, and
identifying the population group to which an individual belongs. Although
classical machine learning techniques allow
researchers to identify groups (i.e. clusters) of related variables, the
accuracy and effectiveness of these methods diminish for large and
high-dimensional datasets such as the whole human genome. On the other hand,
deep neural network architectures (the core of deep learning) can better
exploit large-scale datasets to build complex models. In this paper, we use the
K-means clustering approach for scalable genomic data analysis aiming towards
clustering genotypic variants at the population scale. Finally, we train a deep
belief network (DBN) for predicting the geographic ethnicity. We used the
genotype data from the 1000 Genomes Project, which covers the result of genome
sequencing for 2504 individuals from 26 different ethnic origins and comprises
84 million variants. Our experimental results, with a focus on accuracy and
scalability, show the effectiveness and superiority of our approach compared
to the state of the art.
| 0 | 0 | 0 | 0 | 1 | 0 |
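A toy version of the clustering stage, assuming a 0/1/2-coded genotype matrix and PCA-reduced features; the population allele frequencies and sizes below are synthetic stand-ins for real 1000 Genomes data:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

# rows = individuals, columns = variants coded 0/1/2 (alternate-allele copies)
rng = np.random.default_rng(1)
pops = [rng.binomial(2, p, size=(100, 500)) for p in (0.1, 0.5, 0.9)]
X = np.vstack(pops).astype(float)

X_red = PCA(n_components=10).fit_transform(X)   # reduce before clustering
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X_red)
print(np.bincount(labels))                      # roughly 100 individuals each
```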
Measuring and avoiding side effects using relative reachability | How can we design reinforcement learning agents that avoid causing
unnecessary disruptions to their environment? We argue that current approaches
to penalizing side effects can introduce bad incentives in tasks that require
irreversible actions, and in environments that contain sources of change other
than the agent. For example, some approaches give the agent an incentive to
prevent any irreversible changes in the environment, including the actions of
other agents. We introduce a general definition of side effects, based on
relative reachability of states compared to a default state, that avoids these
undesirable incentives. Using a set of gridworld experiments illustrating
relevant scenarios, we empirically compare relative reachability to penalties
based on existing definitions and show that it is the only penalty among those
tested that produces the desired behavior in all the scenarios.
| 0 | 0 | 0 | 1 | 0 | 0 |
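A minimal sketch of an undiscounted relative reachability penalty on a small deterministic environment graph; the paper's penalty may differ in discounting and in the choice of baseline, but the asymmetry (only reductions in reachability are penalized) is the point:

```python
import networkx as nx

def reachable(G, s):
    return nx.descendants(G, s) | {s}

def relative_reachability_penalty(G, current, baseline):
    states = list(G)
    r_cur, r_base = reachable(G, current), reachable(G, baseline)
    # fraction of states reachable from the baseline but no longer from us
    return sum(1 for s in states if s in r_base and s not in r_cur) / len(states)

G = nx.DiGraph([(0, 1), (1, 2), (2, 3), (1, 4)])   # state 4 is a "dead end"
print(relative_reachability_penalty(G, current=4, baseline=0))  # 0.8
print(relative_reachability_penalty(G, current=1, baseline=0))  # 0.2
```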
Neural State Classification for Hybrid Systems | We introduce the State Classification Problem (SCP) for hybrid systems, and
present Neural State Classification (NSC) as an efficient solution technique.
SCP generalizes the model checking problem as it entails classifying each state
$s$ of a hybrid automaton as either positive or negative, depending on whether
or not $s$ satisfies a given time-bounded reachability specification. This is
an interesting problem in its own right, which NSC solves using
machine-learning techniques, Deep Neural Networks in particular. State
classifiers produced by NSC tend to be very efficient (run in constant time and
space), but may be subject to classification errors. To quantify and mitigate
such errors, our approach comprises: i) techniques for certifying, with
statistical guarantees, that an NSC classifier meets given accuracy levels; ii)
tuning techniques, including a novel technique based on adversarial sampling,
that can virtually eliminate false negatives (positive states classified as
negative), thereby making the classifier more conservative. We have applied NSC
to six nonlinear hybrid system benchmarks, achieving an accuracy of 99.25% to
99.98%, and a false-negative rate of 0.0033 to 0, which we further reduced to
0.0015 to 0 after tuning the classifier. We believe that this level of accuracy
is acceptable in many practical applications, and that these results
demonstrate the promise of the NSC approach.
| 0 | 0 | 0 | 1 | 0 | 0 |
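The NSC recipe can be sketched with a stand-in problem in which the reachability check is replaced by a closed-form oracle so the example is self-contained; the architecture and sample sizes are illustrative assumptions:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in oracle: positive iff the state lies within distance 1.2
# of the origin, playing the role of a time-bounded reachability check.
def oracle(states):
    return (states.norm(dim=1) < 1.2).float()

X = torch.rand(20000, 2) * 4 - 2            # states sampled from [-2, 2]^2
y = oracle(X)

net = nn.Sequential(nn.Linear(2, 64), nn.ReLU(),
                    nn.Linear(64, 64), nn.ReLU(),
                    nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()
for _ in range(200):                         # full-batch training, for brevity
    opt.zero_grad()
    loss_fn(net(X).squeeze(1), y).backward()
    opt.step()

pred = (net(X).squeeze(1) > 0).float()
acc = (pred == y).float().mean().item()
fn = ((pred == 0) & (y == 1)).float().mean().item()   # false negatives
print(f"accuracy {acc:.4f}, false-negative rate {fn:.4f}")
```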
Optimization of Tree Ensembles | Tree ensemble models such as random forests and boosted trees are among the
most widely used and practically successful predictive models in applied
machine learning and business analytics. Although such models have been used to
make predictions based on exogenous, uncontrollable independent variables, they
are increasingly being used to make predictions where the independent variables
are controllable and are also decision variables. In this paper, we study the
problem of tree ensemble optimization: given a tree ensemble that predicts some
dependent variable using controllable independent variables, how should we set
these variables so as to maximize the predicted value? We formulate the problem
as a mixed-integer optimization problem. We theoretically examine the strength
of our formulation, provide a hierarchy of approximate formulations with bounds
on approximation quality and exploit the structure of the problem to develop
two large-scale solution methods, one based on Benders decomposition and one
based on iteratively generating tree split constraints. We test our methodology
on real data sets, including two case studies in drug design and customized
pricing, and show that our methodology can efficiently solve large-scale
instances to near or full optimality, and outperforms solutions obtained by
heuristic approaches. In our drug design case, we show how our approach can
identify compounds that efficiently trade off predicted performance and novelty
with respect to existing, known compounds. In our customized pricing case, we
show how our approach can efficiently determine optimal store-level prices
under a random forest model that delivers excellent predictive accuracy.
| 1 | 0 | 1 | 1 | 0 | 0 |
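To make the structure of the formulation concrete, here is the easy single-tree special case: each leaf corresponds to a feasible box defined by its root-to-leaf split constraints, so maximizing the prediction reduces to picking the best-valued leaf and any point inside its box. The paper's contribution is the much harder ensemble case, which this sketch does not attempt:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(500, 2))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2
tree = DecisionTreeRegressor(max_depth=4).fit(X, y)

t = tree.tree_
def best_leaf_box(node=0, lo=None, hi=None):
    # returns (leaf value, feasible box) maximizing the tree's prediction
    lo = [0.0, 0.0] if lo is None else lo
    hi = [1.0, 1.0] if hi is None else hi
    if t.children_left[node] == -1:            # leaf node
        return t.value[node][0][0], (list(lo), list(hi))
    f, thr = t.feature[node], t.threshold[node]
    l_lo, l_hi = list(lo), list(hi); l_hi[f] = min(l_hi[f], thr)
    r_lo, r_hi = list(lo), list(hi); r_lo[f] = max(r_lo[f], thr)
    return max(best_leaf_box(t.children_left[node], l_lo, l_hi),
               best_leaf_box(t.children_right[node], r_lo, r_hi),
               key=lambda p: p[0])

val, (lo, hi) = best_leaf_box()
print("max predicted value:", val, "attained on box", lo, hi)
```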
Equations of state for real gases on the nuclear scale | The formalism to augment the classical models of equation of state for real
gases with the quantum statistical effects is presented. It allows an arbitrary
excluded volume procedure to model repulsive interactions, and an arbitrary
density-dependent mean field to model attractive interactions. Variations on
the excluded volume mechanism include van der Waals (VDW) and Carnahan-Starling
models, while the mean fields are based on VDW, Redlich-Kwong-Soave,
Peng-Robinson, and Clausius equations of state. The VDW parameters of the
nucleon-nucleon interaction are fitted in each model to the properties of the
ground state of nuclear matter, and the following range of values is obtained:
$a = 330 - 430$ MeV fm$^3$ and $b = 2.5 - 4.4$ fm$^3$. In the context of the
excluded-volume approach, the fits to the nuclear ground state disfavor the
values of the effective hard-core radius of a nucleon significantly smaller
than $0.5$ fm, at least for the nuclear matter region of the phase diagram.
Modifications to the standard VDW repulsion and attraction terms make it
possible to significantly improve the value of the nuclear incompressibility factor $K_0$,
bringing it closer to empirical estimates. The generalization to include the
baryon-baryon interactions into the hadron resonance gas model is performed.
The behavior of the baryon-related lattice QCD observables at zero chemical
potential is shown to be strongly correlated to the nuclear matter properties:
an improved description of the nuclear incompressibility also yields an
improved description of the lattice data at $\mu = 0$.
| 0 | 1 | 0 | 0 | 0 | 0 |
Born Again Neural Networks | Knowledge distillation (KD) consists of transferring knowledge from one
machine learning model (the teacher) to another (the student). Commonly, the
teacher is a high-capacity model with formidable performance, while the student
is more compact. By transferring knowledge, one hopes to benefit from the
student's compactness. We study KD from a new perspective: rather than compressing models,
we train students parameterized identically to their teachers. Surprisingly,
these Born-Again Networks (BANs) outperform their teachers significantly,
both on computer vision and language modeling tasks. Our experiments with BANs
based on DenseNets demonstrate state-of-the-art performance on the CIFAR-10
(3.5%) and CIFAR-100 (15.5%) datasets, by validation error. Additional
experiments explore two distillation objectives: (i) Confidence-Weighted by
Teacher Max (CWTM) and (ii) Dark Knowledge with Permuted Predictions (DKPP).
Both methods elucidate the essential components of KD, demonstrating a role of
the teacher outputs on both predicted and non-predicted classes. We present
experiments with students of various capacities, focusing on the under-explored
case where students overpower teachers. Our experiments show significant
advantages from transferring knowledge between DenseNets and ResNets in either
direction.
| 0 | 0 | 0 | 1 | 0 | 0 |
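A sketch of one born-again generation, assuming a standard soft-target distillation objective; the architecture, temperature, and loss weighting below are illustrative choices rather than the paper's exact training recipe:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_net():
    # student shares the *same architecture* as the teacher, per the BAN setup
    return nn.Sequential(nn.Flatten(), nn.Linear(784, 256), nn.ReLU(),
                         nn.Linear(256, 10))

teacher, student = make_net(), make_net()
teacher.eval()                                 # assume teacher is pre-trained

def ban_loss(x, labels, T=4.0, alpha=0.5):
    s_logits = student(x)
    with torch.no_grad():
        t_logits = teacher(x)
    hard = F.cross_entropy(s_logits, labels)   # ground-truth term
    soft = F.kl_div(F.log_softmax(s_logits / T, dim=1),   # teacher term
                    F.softmax(t_logits / T, dim=1),
                    reduction="batchmean") * T * T
    return alpha * hard + (1 - alpha) * soft

x, labels = torch.randn(32, 1, 28, 28), torch.randint(0, 10, (32,))
print(ban_loss(x, labels).item())
```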
Exploit Kits: The production line of the Cybercrime Economy | The annual cost of Cybercrime to the global economy is estimated to be around
400 billion dollars, with Exploit Kits providing key enabling technology. This
paper reviews recent developments in Exploit Kit capability and how these are
being applied in practice. In doing so it paves the way for a better
understanding of the exploit-kit economy, which may help in combating them,
and considers industry's preparedness to respond.
| 1 | 0 | 0 | 0 | 0 | 0 |
Helicity locking in light emitted from a plasmonic nanotaper | Surface plasmon waves carry an intrinsic transverse spin, which is locked to
its propagation direction. Notably, when a singular plasmonic mode is guided
on a conic surface this spin-locking may lead to a strong circular polarization
of the far-field emission. Specifically, an adiabatically tapered gold nanocone
guides an a priori excited plasmonic vortex upwards where the mode accelerates
and finally beams out from the tip apex. The helicity of this beam is shown to
be single-handed and stems solely from the transverse spin-locking of the
helical plasmonic wave-front. We present a simple geometric model that fully
predicts the emerging light spin in our system. Finally we experimentally
demonstrate the helicity-locking phenomenon by using accurately fabricated
nanostructures and confirm the results with the model and numerical data.
| 0 | 1 | 0 | 0 | 0 | 0 |
Declarative Statistics | In this work we introduce declarative statistics, a suite of declarative
modelling tools for statistical analysis. Statistical constraints represent the
key building block of declarative statistics. First, we introduce a range of
relevant counting and matrix constraints and associated decompositions, some of
which are novel, that are instrumental in the design of statistical constraints.
Second, we introduce a selection of novel statistical constraints and
associated decompositions, which constitute a self-contained toolbox that can
be used to tackle a wide range of problems typically encountered by
statisticians. Finally, we deploy these statistical constraints to a wide range
of application areas drawn from classical statistics and we contrast our
framework against established practices.
| 1 | 0 | 0 | 1 | 0 | 0 |
Description of CRESST-II data | In Phase 2 of CRESST-II 18 detector modules were operated for about two years
(July 2013 - August 2015). Together with this document we are publishing data
from two detector modules which have been used for direct dark-matter searches.
With these data sets we were able to set world-leading limits on the cross
section for spin-independent elastic scattering of dark matter particles off
nuclei. We publish the energies of all events within the acceptance regions for
dark-matter searches. In addition, we publish the energies of the events
within the electron-recoil band. This data set can be used to study
interactions with electrons of CaWO$_4$. In this document we describe how to
use these data sets. In particular, we explain the cut-survival probabilities
required for comparisons of models with the data sets.
| 0 | 1 | 0 | 0 | 0 | 0 |
ABC of ladder operators for rationally extended quantum harmonic oscillator systems | The problem of construction of ladder operators for rationally extended
quantum harmonic oscillator (REQHO) systems of a general form is investigated
in light of the existence of different schemes of the Darboux-Crum-Krein-Adler
transformations by which such systems can be generated from the quantum
harmonic oscillator. Any REQHO system is characterized by the number of
separated states in its spectrum, the number of `valence bands' in which the
separated states are organized, and by the total number of the missing energy
levels and their position. All these peculiarities of a REQHO system are shown
to be detected and reflected by a trinity $(\mathcal{A}^\pm$,
$\mathcal{B}^\pm$, $\mathcal{C}^\pm$) of the basic (primary) lowering and
raising ladder operators related to each other by certain algebraic
identities with coefficients polynomially dependent on the Hamiltonian. We show
that all the secondary, higher-order ladder operators are obtainable by a
composition of the basic ladder operators of the trinity which form the set of
the spectrum-generating operators. Each trinity, in turn, can be constructed
from the intertwining operators of the two complementary minimal schemes of the
Darboux-Crum-Krein-Adler transformations.
| 0 | 1 | 1 | 0 | 0 | 0 |
On permutation-invariance of limit theorems | By a classical principle of probability theory, sufficiently thin
subsequences of general sequences of random variables behave like i.i.d.\
sequences. This observation not only explains the remarkable properties of
lacunary trigonometric series, but also provides a powerful tool in many areas
of analysis, such as the theory of orthogonal series and Banach space theory. In
contrast to i.i.d.\ sequences, however, the probabilistic structure of lacunary
sequences is not permutation-invariant and the analytic properties of such
sequences can change after rearrangement. In a previous paper we showed that
permutation-invariance of subsequences of the trigonometric system and related
function systems is connected with Diophantine properties of the index
sequence. In this paper we will study permutation-invariance of subsequences of
general r.v.\ sequences.
| 0 | 0 | 1 | 0 | 0 | 0 |
Superconductivity at 33 - 37 K in $ALn_2$Fe$_4$As$_4$O$_2$ ($A$ = K and Cs; $Ln$ = Lanthanides) | We have synthesized 10 new iron oxyarsenides, K$Ln_2$Fe$_4$As$_4$O$_2$ ($Ln$
= Gd, Tb, Dy, and Ho) and Cs$Ln_2$Fe$_4$As$_4$O$_2$ ($Ln$ = Nd, Sm, Gd, Tb, Dy,
and Ho), with the aid of a lattice-match approach [between $A$Fe$_2$As$_2$ ($A$ = K
and Cs) and $Ln$FeAsO]. The resultant compounds possess hole-doped
conducting double FeAs layers, [$A$Fe$_4$As$_4$]$^{2-}$, that are separated by
the insulating [$Ln_2$O$_2$]$^{2+}$ slabs. Measurements of electrical
resistivity and dc magnetic susceptibility demonstrate bulk superconductivity
at $T_\mathrm{c}$ = 33 - 37 K. We find that $T_\mathrm{c}$ correlates with the
axis ratio $c/a$ for all 12442-type superconductors discovered. Also,
$T_\mathrm{c}$ tends to increase with the lattice mismatch, implying a role of
lattice instability for the enhancement of superconductivity.
| 0 | 1 | 0 | 0 | 0 | 0 |
Live Visualization of GUI Application Code Coverage with GUITracer | The present paper introduces the initial implementation of a software
exploration tool targeting graphical user interface (GUI) driven applications.
GUITracer facilitates the comprehension of GUI-driven applications by starting
from their most conspicuous artefact - the user interface itself. The current
implementation of the tool can be used with any Java-based target application
that employs one of the AWT, Swing or SWT toolkits. The tool transparently
instruments the target application and provides real time information about the
GUI events fired. For each event, call relations within the application are
displayed at method, class or package level, together with detailed coverage
information. The tool facilitates feature location and program comprehension, as
well as GUI test creation by revealing the link between the application's GUI
and its underlying code. As such, GUITracer is intended for software
practitioners developing or maintaining GUI-driven applications. We believe our
tool to be especially useful for entry-level practitioners as well as students
seeking to understand complex GUI-driven software systems. The present paper
details the rationale as well as the technical implementation of the tool. As the
tool is a proof-of-concept implementation, we also discuss further development that can
lead to our tool's integration into a software development workflow.
| 1 | 0 | 0 | 0 | 0 | 0 |
3D Simulation of Electron and Ion Transmission of GEM-based Detectors | Time Projection Chamber (TPC) has been chosen as the main tracking system in
several high-flux and high repetition rate experiments. These include on-going
experiments such as ALICE and future experiments such as PANDA at FAIR and ILC.
Different $\mathrm{R}\&\mathrm{D}$ activities were carried out on the adoption
of Gas Electron Multiplier (GEM) as the gas amplification stage of the
ALICE-TPC upgrade version. The requirement of low ion feedback has been
established through these activities. Low ion feedback minimizes distortions
due to space charge and maintains the necessary values of detector gain and
energy resolution. In the present work, the Garfield simulation framework has been
used to study the related physical processes occurring within single, triple
and quadruple GEM detectors. Ion backflow and electron transmission of
quadruple GEMs, made up of foils with different hole pitch under different
electromagnetic field configurations (the projected solutions for the ALICE
TPC) have been studied. Finally a new triple GEM detector configuration with
low ion backflow fraction and good electron transmission properties has been
proposed as a simpler GEM-based alternative suitable for TPCs for future
collider experiments.
| 0 | 1 | 0 | 0 | 0 | 0 |
Witt and Cohomological Invariants of Witt Classes | We classify all invariants of the functor $I^n$ (powers of the fundamental
ideal of the Witt ring) with values in $A$, that is to say, functions
$I^n(K)\rightarrow A(K)$ compatible with field extensions, in the cases where
$A(K)=W(K)$ is the Witt ring and $A(K)=H^*(K,\mu_2)$ is mod 2 Galois
cohomology. This is done in terms of some invariants $f_n^d$ that behave like
divided powers with respect to sums of Pfister forms, and we show that any
invariant of $I^n$ can be written uniquely as a (possibly infinite) combination
of those $f_n^d$. This in particular allows one to lift operations defined on mod 2
Milnor K-theory (or equivalently mod 2 Galois cohomology) to the level of
$I^n$. We also study various properties of these invariants, including
behaviour under products, similitudes, residues for discrete valuations, and
restriction from $I^n$ to $I^{n+1}$. The goal is to use this to study
invariants of algebras with involutions in future articles.
| 0 | 0 | 1 | 0 | 0 | 0 |
The Cooperative Output Regulation Problem of Discrete-Time Linear Multi-Agent Systems by the Adaptive Distributed Observer | In this paper, we first present an adaptive distributed observer for a
discrete-time leader system. This adaptive distributed observer will provide,
to each follower, not only the estimation of the leader's signal, but also the
estimation of the leader's system matrix S. Then, based on this estimate, we
devise a discrete adaptive algorithm to calculate the solution to
the regulator equations associated with each follower, and obtain an estimated
feedforward control gain. Finally, we solve the cooperative output regulation
problem for discrete-time linear multi-agent systems by both state feedback and
output feedback adaptive distributed control laws utilizing the adaptive
distributed observer.
| 0 | 0 | 1 | 0 | 0 | 0 |
Preferred traces on C*-algebras of self-similar groupoids arising as fixed points | Recent results of Laca, Raeburn, Ramagge and Whittaker show that any
self-similar action of a groupoid on a graph determines a 1-parameter family of
self-mappings of the trace space of the groupoid C*-algebra. We investigate the
fixed points for these self-mappings, under the same hypotheses that Laca et
al. used to prove that the C*-algebra of the self-similar action admits a
unique KMS state. We prove that for any value of the parameter, the associated
self-mapping admits a unique fixed point, which is in fact a universal
attractor. This fixed point is precisely the trace that extends to a KMS state
on the C*-algebra of the self-similar action.
| 0 | 0 | 1 | 0 | 0 | 0 |
Tool Breakage Detection using Deep Learning | In manufacture, steel and other metals are mainly cut and shaped during the
fabrication process by computer numerical control (CNC) machines. To keep high
productivity and efficiency of the fabrication process, engineers need to
monitor the real-time process of CNC machines, and the lifetime management of
machine tools. In a real manufacturing process, breakage of machine tools
usually happens without any indication, and this problem has seriously affected
the fabrication process for many years. Previous studies suggested many different
approaches for monitoring and detecting the breakage of machine tools. However,
there still exists a big gap between academic experiments and the complex real
fabrication processes such as the high demands of real-time detections, the
difficulty in data acquisition and transmission. In this work, we use the
spindle current approach to detect the breakage of machine tools, which offers
high-performance real-time monitoring, low cost, and easy installation. We
analyze the features of the current of a milling machine spindle through the
tool wear process, and then we predict the status of tool breakage with a
convolutional neural network (CNN). In addition, we use a BP neural network to
understand the reliability of the CNN. The results show that our CNN approach
can detect tool breakage with an accuracy of 93%, while the best performance of
BP is 80%.
| 0 | 0 | 0 | 1 | 0 | 0 |
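A minimal stand-in for such a classifier, assuming fixed-length windows of spindle-current samples and binary labels; the window length, channel counts, and class count are illustrative assumptions:

```python
import torch
import torch.nn as nn

class CurrentCNN(nn.Module):
    """1-D CNN over windows of spindle-current samples."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=9, padding=4), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=9, padding=4), nn.ReLU(), nn.MaxPool1d(4),
        )
        self.head = nn.Linear(32 * 64, n_classes)   # for 1024-sample windows

    def forward(self, x):                            # x: (batch, 1, 1024)
        return self.head(self.features(x).flatten(1))

model = CurrentCNN()
windows = torch.randn(8, 1, 1024)                    # fake current windows
print(model(windows).shape)                          # torch.Size([8, 2])
```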
Continuous Learning in Single-Incremental-Task Scenarios | It was recently shown that architectural, regularization and rehearsal
strategies can be used to train deep models sequentially on a number of
disjoint tasks without forgetting previously acquired knowledge. However, these
strategies are still unsatisfactory if the tasks are not disjoint but
constitute a single incremental task (e.g., class-incremental learning). In
this paper we point out the differences between multi-task and
single-incremental-task scenarios and show that well-known approaches such as
LWF, EWC and SI are not ideal for incremental task scenarios. A new approach,
denoted as AR1, combining architectural and regularization strategies is then
specifically proposed. The overhead of AR1 (in terms of memory and computation) is very
small thus making it suitable for online learning. When tested on CORe50 and
iCIFAR-100, AR1 outperformed existing regularization strategies by a good
margin.
| 0 | 0 | 0 | 1 | 0 | 0 |
Diagonal Rescaling For Neural Networks | We define a second-order neural network stochastic gradient training
algorithm whose block-diagonal structure effectively amounts to normalizing the
unit activations. Investigating why this algorithm lacks robustness then
reveals two interesting insights. The first insight suggests a new way to scale
the stepsizes, clarifying popular algorithms such as RMSProp as well as old
neural network tricks such as fan-in stepsize scaling. The second insight
stresses the practical importance of dealing with fast changes of the curvature
of the cost.
| 1 | 0 | 0 | 1 | 0 | 0 |
Dynamic Bernoulli Embeddings for Language Evolution | Word embeddings are a powerful approach for unsupervised analysis of
language. Recently, Rudolph et al. (2016) developed exponential family
embeddings, which cast word embeddings in a probabilistic framework. Here, we
develop dynamic embeddings, building on exponential family embeddings to
capture how the meanings of words change over time. We use dynamic embeddings
to analyze three large collections of historical texts: the U.S. Senate
speeches from 1858 to 2009, the history of computer science ACM abstracts from
1951 to 2014, and machine learning papers on the Arxiv from 2007 to 2015. We
find dynamic embeddings provide better fits than classical embeddings and
capture interesting patterns about how language changes.
| 1 | 0 | 0 | 1 | 0 | 0 |
Homotopy Decompositions of Gauge Groups over Real Surfaces | We analyse the homotopy types of gauge groups of principal U(n)-bundles
associated to pseudo Real vector bundles in the sense of Atiyah. We provide
satisfactory homotopy decompositions of these gauge groups into factors in
which the homotopy groups are well known. In doing so, we build substantially
upon the low-dimensional homotopy groups provided in a paper by I. Biswas,
J. Huisman, and J. Hurtubise.
| 0 | 0 | 1 | 0 | 0 | 0 |
W-algebras associated to surfaces | We define an integral form of the deformed W-algebra of type gl_r, and
construct its action on the K-theory groups of moduli spaces of rank r stable
sheaves on a smooth projective surface S, under certain assumptions. Our
construction generalizes the action studied by Nakajima, Grojnowski and
Baranovsky in cohomology, although the appearance of deformed W-algebras by
generators and relations is a new feature. Physically, this action encodes the
AGT correspondence for 5d supersymmetric gauge theory on S x circle.
| 0 | 0 | 1 | 0 | 0 | 0 |
Comparing Classical and Relativistic Kinematics in First-Order Logic | The aim of this paper is to present a new logic-based understanding of the
connection between classical kinematics and relativistic kinematics. We show
that the axioms of special relativity can be interpreted in the language of
classical kinematics. This means that there is a logical translation function
from the language of special relativity to the language of classical kinematics
which translates the axioms of special relativity into consequences of
classical kinematics. We will also show that if we distinguish a class of
observers (representing observers stationary with respect to the "Ether") in
special relativity and exclude the non-slower-than light observers from
classical kinematics by an extra axiom, then the two theories become
definitionally equivalent (i.e., they become equivalent theories in the same sense
as the theory of lattices as algebraic structures is the same as the theory of
lattices as partially ordered sets). Furthermore, we show that classical
kinematics is definitionally equivalent to classical kinematics with only
slower-than-light inertial observers, and hence by transitivity of definitional
equivalence that special relativity theory extended with "Ether" is
definitionally equivalent to classical kinematics. So within an axiomatic
framework of mathematical logic, we explicitly show that the transition from
classical kinematics to relativistic kinematics is the knowledge acquisition
that there is no "Ether", accompanied by a redefinition of the concepts of time
and space.
| 0 | 0 | 1 | 0 | 0 | 0 |
Anomalous Acoustic Plasmon Mode from Topologically Protected States | Plasmons, the collective excitations of electrons in the bulk or at the
surface, play an important role in the properties of materials, and have
generated the field of Plasmonics. We report the observation of a highly
unusual acoustic plasmon mode on the surface of a three-dimensional topological
insulator (TI), Bi2Se3, using momentum resolved inelastic electron scattering.
In sharp contrast to ordinary plasmon modes, this mode exhibits almost linear
dispersion into the second Brillouin zone and remains prominent with remarkably
weak damping not seen in any other system. This behavior must be associated
with the inherent robustness of the electrons in the TI surface state, so that
not only the surface Dirac states but also their collective excitations are
topologically protected. On the other hand, this mode has much smaller energy
dispersion than expected from a continuous media excitation picture, which can
be attributed to the strong coupling with surface phonons.
| 0 | 1 | 0 | 0 | 0 | 0 |
Klein-Gordonization: mapping superintegrable quantum mechanics to resonant spacetimes | We describe a procedure naturally associating relativistic Klein-Gordon
equations in static curved spacetimes to non-relativistic quantum motion on
curved spaces in the presence of a potential. Our procedure is particularly
attractive in application to (typically, superintegrable) problems whose energy
spectrum is given by a quadratic function of the energy level number, since for
such systems the spacetimes one obtains possess evenly spaced, resonant spectra
of frequencies for scalar fields of a certain mass. This construction emerges
as a generalization of the previously studied correspondence between the Higgs
oscillator and Anti-de Sitter spacetime, which has been useful for both
understanding weakly nonlinear dynamics in Anti-de Sitter spacetime and
algebras of conserved quantities of the Higgs oscillator. Our conversion
procedure ("Klein-Gordonization") reduces to a nonlinear elliptic equation
closely reminiscent of the one emerging in relation to the celebrated Yamabe
problem of differential geometry. As an illustration, we explicitly demonstrate
how to apply this procedure to superintegrable Rosochatius systems, resulting
in a large family of spacetimes with resonant spectra for massless wave
equations.
| 0 | 1 | 1 | 0 | 0 | 0 |
Towards Gene Expression Convolutions using Gene Interaction Graphs | We study the challenges of applying deep learning to gene expression data. We
find experimentally that there exists non-linear signal in the data; however, it
is not discovered automatically given the noise and the low number of samples used
in most research. We discuss how gene interaction graphs (same pathway,
protein-protein, co-expression, or research paper text association) can be used
to impose a bias on a deep model similar to the spatial bias imposed by
convolutions on an image. We explore the usage of Graph Convolutional Neural
Networks coupled with dropout and gene embeddings to utilize the graph
information. We find this approach provides an advantage for particular tasks
in a low data regime but is very dependent on the quality of the graph used. We
conclude that more work should be done in this direction. We design experiments
that show why existing methods fail to capture signal that is present in the
data when features are added which clearly isolates the problem that needs to
be addressed.
| 0 | 0 | 0 | 1 | 1 | 0 |
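A bare-bones sketch of the graph-convolution bias described above, assuming a fixed gene-gene adjacency matrix in place of a real pathway or protein-protein graph; sizes and the random graph are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n_genes, n_samples, n_hidden = 200, 32, 16
A = (rng.random((n_genes, n_genes)) < 0.05).astype(float)
np.fill_diagonal(A, 0)
A = np.maximum(A, A.T)                        # symmetric interaction graph

A_hat = A + np.eye(n_genes)                   # add self-loops
d = A_hat.sum(1)
A_norm = A_hat / np.sqrt(np.outer(d, d))      # D^{-1/2} (A + I) D^{-1/2}

X = rng.normal(size=(n_samples, n_genes, 1))  # one expression value per gene
W = rng.normal(size=(1, n_hidden)) * 0.1
H = np.maximum(A_norm @ X @ W, 0)             # ReLU(A_norm X W), per sample
print(H.shape)                                # (32, 200, 16)
```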
Density-Functional Theory Study of the Optoelectronic Properties of π-Conjugated Copolymers for Organic Light-Emitting Diodes | Novel low-band-gap copolymer oligomers are proposed on the basis of density
functional theory (DFT) quantum chemical calculations of photophysical
properties. These molecules have an electron donor-acceptor (D-A) architecture
involving poly(3-hexylthiophene-2,5-diyl) (P3HT) as D units and furan, aniline,
or hydroquinone as A units. Structural parameters, electronic properties,
highest occupied molecular orbital (HOMO)-lowest unoccupied molecular orbital
(LUMO) gaps and molecular orbital densities are predicted. The charge transfer
process between the D unit and the A unit is supported by analyzing the
optical absorption spectra of the compounds and the localization of the HOMO
and LUMO.
| 0 | 1 | 0 | 0 | 0 | 0 |
Accurate ranking of influential spreaders in networks based on dynamically asymmetric link-impact | We propose an efficient and accurate measure for ranking spreaders and
identifying the influential ones in spreading processes in networks. While the
edges determine the connections among the nodes, their specific role in
spreading should be considered explicitly. An edge connecting nodes i and j may
differ in its importance for spreading from i to j and from j to i. The key
issue is whether node j, after being infected by i through the edge, would reach out
to other nodes that i itself could not reach directly. It becomes necessary to
invoke two unequal weights wij and wji characterizing the importance of an edge
according to the neighborhoods of nodes i and j. The total asymmetric
directional weight originating from a node leads to a novel measure si, which
quantifies the impact of the node in spreading processes. A s-shell
decomposition scheme further assigns a s-shell index or weighted coreness to
the nodes. The effectiveness and accuracy of rankings based on si and the
weighted coreness are demonstrated by applying them to nine real-world
networks. Results show that they generally outperform rankings based on the
nodes' degree and k-shell index, while maintaining a low computational
complexity. Our work represents a crucial step towards understanding and
controlling the spread of diseases, rumors, information, trends, and
innovations in networks.
| 1 | 1 | 0 | 0 | 0 | 0 |
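A sketch with one plausible concrete choice of weight, counting the new neighbours that node j opens up for a spread starting at i; the paper's exact definitions of wij and si may differ, but the asymmetry of the weights is the point:

```python
import networkx as nx

def w(G, i, j):
    # assumed weight: neighbours of j not already adjacent to i (and not i),
    # i.e. the nodes j newly exposes if infected by i; w(i, j) != w(j, i)
    return len(set(G[j]) - set(G[i]) - {i})

def s(G, i):
    # total outgoing directional weight of node i
    return sum(w(G, i, j) for j in G[i])

G = nx.karate_club_graph()
ranking = sorted(G, key=lambda i: s(G, i), reverse=True)
print([(i, s(G, i)) for i in ranking[:5]])    # top-5 candidate spreaders
```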
Learning Multimodal Transition Dynamics for Model-Based Reinforcement Learning | In this paper we study how to learn stochastic, multimodal transition
dynamics in reinforcement learning (RL) tasks. We focus on evaluating
transition function estimation, while we defer planning over this model to
future work. Stochasticity is a fundamental property of many task environments.
However, discriminative function approximators have difficulty estimating
multimodal stochasticity. In contrast, deep generative models do capture
complex high-dimensional outcome distributions. First we discuss why, amongst
such models, conditional variational inference (VI) is theoretically most
appealing for model-based RL. Subsequently, we compare different VI models on
their ability to learn complex stochasticity on simulated functions, as well as
on a typical RL gridworld with multimodal dynamics. Results show VI
successfully predicts multimodal outcomes, but also robustly ignores these for
deterministic parts of the transition dynamics. In summary, we show a robust
method to learn multimodal transitions using function approximation, which is a
key preliminary for model-based RL in stochastic domains.
| 1 | 0 | 0 | 1 | 0 | 0 |
Publication Trends in Physics Education: A Bibliometric study | Analyzing publication trends in Physics Education through bibliometric
analysis allows researchers to describe current scientific activity. This paper
tries to answer the question "What do Physics Education scientists concentrate
on in their publications?" by analyzing the productivity and development of publications on
the subject category of Physics Education in the period 1980--2013. The Web of
Science databases in the research areas of "EDUCATION - EDUCATIONAL RESEARCH"
was used to extract the publication trends. The study involves 1360
publications, including 840 articles, 503 proceedings paper, 22 reviews, 7
editorial material, 6 Book review, and one Biographical item. Number of
publications with "Physical Education" in topic increased from 0.14 % (n = 2)
in 1980 to 16.54 % (n = 225) in 2011. The total number of citations received is
8071, or approximately 5.93 citations per paper. The results show that
publications and citations in Physics Education have increased dramatically,
while the Malaysian share is well ranked.
| 1 | 1 | 0 | 0 | 0 | 0 |
Unveiling Swarm Intelligence with Network Science$-$the Metaphor Explained | Self-organization is a natural phenomenon that emerges in systems with a
large number of interacting components. Self-organized systems show robustness,
scalability, and flexibility, which are essential properties when handling
real-world problems. Swarm intelligence seeks to design nature-inspired
algorithms with a high degree of self-organization. Yet, we do not know why
swarm-based algorithms work well, nor can we compare the different
approaches in the literature. The lack of a common framework capable of
characterizing these several swarm-based algorithms, transcending their
particularities, has led to a stream of publications inspired by different
aspects of nature without much regard as to whether they are similar to already
existing approaches. We address this gap by introducing a network-based
framework$-$the interaction network$-$to examine computational swarm-based
systems through the lens of social dynamics. We discuss the social dimension of
several swarm classes and provide a case study of the Particle Swarm
Optimization. The interaction network enables a better understanding of the
plethora of approaches currently available by looking at them from a general
perspective focusing on the structure of the social interactions.
| 1 | 0 | 0 | 0 | 0 | 0 |
Sphere geometry and invariants | A finite abstract simplicial complex G defines two finite simple graphs: the
Barycentric refinement G1, connecting two simplices if one is a subset of the
other and the connection graph G', connecting two simplices if they intersect.
We prove that the Poincare-Hopf value i(x)=1-X(S(x)), where X is the Euler
characteristic and S(x) is the unit sphere of a vertex x in G1, agrees with
the Green function value g(x,x), the diagonal element of the inverse of (1+A'),
where A' is the adjacency matrix of G'. By unimodularity, det(1+A') is the
product of parities (-1)^dim(x) of simplices in G, the Fredholm matrix 1+A' is
in GL(n,Z), where n is the number of simplices in G. We show that the set of
possible unit sphere topologies in G1 is a combinatorial invariant of the
complex G. Hence the Green function range of G is also a combinatorial invariant.
To prove the invariance of the unit sphere topology we use that all unit
spheres in G1 decompose as a join of a stable and unstable part. The join
operation + renders the category X of simplicial complexes into a monoid, where
the empty complex is the 0 element and the cone construction adds 1. The
augmented Grothendieck group (X,+,0) contains the graph and sphere monoids
(Graphs, +,0) and (Spheres,+,0). The Poincare-Hopf functionals i(G) as well as
the volume are multiplicative functions on (X,+). For the sphere group, both
i(G) as well as Fredholm characteristic are characters. The join + can be
augmented with a product * so that we have a commutative ring (X,+,0,*,1) for
which there are both additive and multiplicative primes and which contains a
subring of signed complete complexes isomorphic to the integers (Z,+,0,*,1). We
also look at the spectrum of the Laplacian of the join of two graphs. Both for
addition + and multiplication *, one can ask whether unique prime factorization
holds.
| 1 | 0 | 1 | 0 | 0 | 0 |
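The unimodularity statement quoted above (det(1+A') equals the product of parities (-1)^dim(x)) can be checked numerically on a tiny complex; this verification script is ours, not part of the paper:

```python
import itertools
import numpy as np

# simplicial complex: the boundary of a triangle (3 vertices, 3 edges)
complex_ = [frozenset(s) for s in
            [{1}, {2}, {3}, {1, 2}, {2, 3}, {1, 3}]]

# connection graph G': simplices are adjacent when they intersect
n = len(complex_)
A = np.zeros((n, n))
for a, b in itertools.combinations(range(n), 2):
    if complex_[a] & complex_[b]:
        A[a, b] = A[b, a] = 1

det = round(np.linalg.det(np.eye(n) + A))          # Fredholm determinant
parity = int(np.prod([(-1) ** (len(x) - 1) for x in complex_]))
print(det, parity)                                 # both -1 for this complex
```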
Chaos and thermalization in small quantum systems | Chaos and ergodicity are the cornerstones of statistical physics and
thermodynamics. While classically even small systems, like a particle in a
two-dimensional cavity, can exhibit chaotic behavior and thereby relax to a
microcanonical ensemble, quantum systems formally cannot. Recent theoretical
breakthroughs and, in particular, the eigenstate thermalization hypothesis
(ETH) however indicate that quantum systems can also thermalize. In fact ETH
provided us with a framework connecting microscopic models and macroscopic
phenomena, based on the notion of highly entangled quantum states. Such
thermalization was beautifully demonstrated experimentally by A. Kaufman et al.,
who studied relaxation dynamics of a small lattice system of interacting
bosonic particles. By directly measuring the entanglement entropy of
subsystems, as well as other observables, they showed that after the initial
transient time the system locally relaxes to a thermal ensemble while globally
maintaining a zero-entropy pure state.
| 0 | 1 | 0 | 0 | 0 | 0 |
Index Search Algorithms for Databases and Modern CPUs | Over the years, many different indexing techniques and search algorithms have
been proposed, including CSS-trees, CSB+ trees, k-ary binary search, and fast
architecture sensitive tree search. There have also been papers on how best to
set the many different parameters of these index structures, such as the node
size of CSB+ trees.
These indices have been proposed because CPU speeds have been increasing at a
dramatically higher rate than memory speeds, giving rise to the Von Neumann
CPU--Memory bottleneck. To hide the long latencies caused by memory access, it
has become very important to make good use of the features of modern CPUs. In order
to drive down the average number of CPU clock cycles required to execute CPU
instructions, and thus increase throughput, it has become important to achieve
a good utilization of CPU resources. Some of these are the data and instruction
caches, and the translation lookaside buffers. But it also has become important
to avoid branch misprediction penalties, and utilize vectorization provided by
CPUs in the form of SIMD instructions.
While the layout of index structures has been heavily optimized for the data
cache of modern CPUs, the instruction cache has been neglected so far. In this
paper, we present NitroGen, a framework for utilizing code generation for
speeding up index traversal in main memory database systems. By bringing
together data and code, we make index structures use the dormant resource of
the instruction cache. We show how to combine index compilation with previous
approaches, such as binary tree search, cache-sensitive tree search, and the
architecture-sensitive tree search presented by Kim et al.
| 1 | 0 | 0 | 0 | 0 | 0 |
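For intuition, a plain k-ary search can be sketched in a few lines: each level selects one of k partitions using k-1 separator keys, giving log_k(n) levels instead of log_2(n). Real implementations lay the separators out contiguously and compare them with SIMD instructions; this Python sketch can only show the access pattern:

```python
import bisect

def k_ary_search(keys, target, k=5):
    lo, hi = 0, len(keys)
    while hi - lo > k:
        step = (hi - lo) // k
        seps = list(range(lo + step, hi, step))[:k - 1]   # k-1 separators
        # pick the partition whose separator keys bracket the target
        pos = bisect.bisect_right([keys[s] for s in seps], target)
        lo = lo if pos == 0 else seps[pos - 1]
        hi = seps[pos] if pos < len(seps) else hi
    for i in range(lo, hi):                               # tiny linear tail
        if keys[i] == target:
            return i
    return -1

data = list(range(0, 2000, 2))
assert k_ary_search(data, 998) == 499
assert k_ary_search(data, 999) == -1
```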
Bounding the Radius of Convergence of Analytic Functions | Contour integration is a crucial technique in many numeric methods of
interest in physics ranging from differentiation to evaluating functions of
matrices. It is often important to determine whether a given contour contains
any poles or branch cuts, either to make use of these features or to avoid
them. A special case of this problem is that of determining or bounding the
radius of convergence of a function, as this provides a known circle around a
point in which a function remains analytic. We describe a method for
determining whether or not a circular contour of a complex-analytic function
contains any poles. We then build on this to produce a robust method for
bounding the radius of convergence of a complex-analytic function.
| 0 | 0 | 1 | 0 | 0 | 0 |
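The standard numerical ingredient for such a check is the argument principle: integrating f'/f around the circle counts zeros minus poles inside it. The sketch below uses a simple trapezoidal rule and illustrates the principle rather than the paper's exact algorithm:

```python
import numpy as np

def zeros_minus_poles(f, df, center=0.0, radius=1.0, n=4096):
    # (1 / 2*pi*i) * contour integral of f'(z)/f(z) dz over the circle
    theta = np.linspace(0, 2 * np.pi, n, endpoint=False)
    z = center + radius * np.exp(1j * theta)
    dz = 1j * radius * np.exp(1j * theta)            # dz/dtheta
    integral = np.sum(df(z) / f(z) * dz) * (2 * np.pi / n)
    return (integral / (2j * np.pi)).real

f  = lambda z: (z - 0.3) / (z - 0.6)                 # one zero, one pole inside
df = lambda z: -0.3 / (z - 0.6) ** 2                 # its derivative
print(round(zeros_minus_poles(f, df)))               # 0: zero and pole cancel
g  = lambda z: 1.0 / (z - 0.5)
dg = lambda z: -1.0 / (z - 0.5) ** 2
print(round(zeros_minus_poles(g, dg)))               # -1: one pole, no zeros
```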
An Efficient Algorithm for the Multicomponent Compressible Navier-Stokes Equations in Low- and High-Mach Number Regimes | The goal of this study is to develop an efficient numerical algorithm
applicable to a wide range of compressible multicomponent flows. Although many
highly efficient algorithms have been proposed for simulating each type of the
flows, the construction of a universal solver is known to be challenging.
Extreme cases, such as incompressible and highly compressible flows, or
inviscid and highly viscous flows, require different numerical treatments in
order to maintain the efficiency, stability, and accuracy of the method.
Linearized block implicit (LBI) factored schemes are known to provide an
efficient way of solving the compressible Navier-Stokes equations implicitly,
allowing us to avoid stability restrictions at low Mach number and high
viscosity. However, the method's splitting error has been shown to grow and
dominate physical fluxes as the Mach number goes to zero. In this paper, a
splitting error reduction technique is proposed to solve the issue. A novel
finite element shock-capturing algorithm, proposed by Guermond and Popov, is
reformulated in terms of finite differences, extended to the stiffened gas
equation of state (SG EOS) and combined with the LBI factored scheme to
stabilize the method around flow discontinuities at high Mach numbers. A novel
stabilization term is proposed for low Mach number applications. The resulting
algorithm is shown to be efficient in both low and high Mach number regimes.
The algorithm is extended to the multicomponent case using an interface
capturing strategy with surface tension as a continuous surface force.
Numerical tests are presented to verify the performance and stability
properties for a wide range of flows.
| 0 | 1 | 0 | 0 | 0 | 0 |
Goldstone-like phonon modes in a (111)-strained perovskite | Goldstone modes are massless particles resulting from spontaneous symmetry
breaking. Although such modes are found in elementary particle physics as well
as in condensed matter systems (superfluid helium, superconductors, magnons),
structural Goldstone modes are rare. Epitaxial strain in thin films
can induce structures and properties not accessible in bulk and has been
intensively studied for (001)-oriented perovskite oxides. Here we predict
Goldstone-like phonon modes in (111)-strained SrMnO3 by first-principles
calculations. Under compressive strain the coupling between two in-plane
rotational instabilities gives rise to a Mexican-hat-shaped energy surface
characteristic of a Goldstone mode. Conversely, large tensile strain induces
in-plane polar instabilities with no directional preference, giving rise to a
continuous polar ground state. Such phonon modes with U(1) symmetry could
emulate structural condensed matter Higgs modes. The mass of this Higgs boson,
given by the shape of the Mexican hat energy surface, can be tuned by strain
through proper choice of substrate.
| 0 | 1 | 0 | 0 | 0 | 0 |
Estimating Heterogeneous Causal Effects in the Presence of Irregular Assignment Mechanisms | This paper provides a link between causal inference and machine learning
techniques - specifically, Classification and Regression Trees (CART) - in
observational studies where the receipt of the treatment is not randomized, but
the assignment to the treatment can be assumed to be randomized (irregular
assignment mechanism). The paper contributes to the growing applied machine
learning literature on causal inference, by proposing a modified version of the
Causal Tree (CT) algorithm to draw causal inference from an irregular
assignment mechanism. The proposed method is developed by merging the CT
approach with the instrumental variable framework for causal inference, hence
the name Causal Tree with Instrumental Variable (CT-IV). As compared to CT, the
main strength of CT-IV is that it can deal more efficiently with the
heterogeneity of causal effects, as demonstrated by a series of numerical
results obtained on synthetic data. Then, the proposed algorithm is used to
evaluate a public policy implemented by the Tuscan Regional Administration
(Italy), which aimed at easing the access to credit for small firms. In this
context, CT-IV breaks fresh ground for target-based policies, identifying
interesting heterogeneous causal effects.
| 0 | 0 | 0 | 1 | 0 | 0 |
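A minimal sketch of the leaf-level ingredient such a method needs, assuming the standard Wald/IV estimator inside each tree leaf (the function name and toy data are ours, not the CT-IV implementation):

```python
import numpy as np

def leaf_iv_effect(y, w, z):
    """Wald-type IV estimate of the causal effect within one tree leaf:
    (E[Y|Z=1] - E[Y|Z=0]) / (E[W|Z=1] - E[W|Z=0]), with Z the randomized
    assignment and W the (possibly non-complied) treatment receipt."""
    y, w, z = map(np.asarray, (y, w, z))
    itt_y = y[z == 1].mean() - y[z == 0].mean()   # intention-to-treat effect on Y
    itt_w = w[z == 1].mean() - w[z == 0].mean()   # first stage: Z's effect on take-up
    return itt_y / itt_w

# Toy leaf: randomized assignment z, 80% compliance, true effect 2.0.
rng = np.random.default_rng(0)
z = rng.integers(0, 2, 1000)
w = (z & (rng.random(1000) < 0.8)).astype(int)
y = 2.0 * w + rng.normal(size=1000)
print(leaf_iv_effect(y, w, z))  # ~2.0
```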
SUBIC: A Supervised Bi-Clustering Approach for Precision Medicine | Traditional medicine typically applies one-size-fits-all treatment for the
entire patient population whereas precision medicine develops tailored
treatment schemes for different patient subgroups. The fact that some factors
may be more significant for a specific patient subgroup motivates clinicians
and medical researchers to develop new approaches to subgroup detection and
analysis, which is an effective strategy to personalize treatment. In this
study, we propose a novel patient subgroup detection method, called Supervised
Bi-Clustering (SUBIC), using convex optimization, and apply our approach to
detect patient subgroups and prioritize risk factors for hypertension (HTN) in
a vulnerable demographic subgroup (African-Americans). Our approach not only
finds patient subgroups under the guidance of a clinically relevant target
variable but also identifies and prioritizes risk factors by pursuing sparsity
of the input variables and encouraging similarity among the input variables
and between the input and target variables.
| 1 | 0 | 0 | 1 | 0 | 0 |
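One plausible shape for the convex objective the abstract describes (a guess at the structure, not the actual SUBIC formulation): a supervised fit plus an $\ell_1$ sparsity penalty plus a similarity-encouraging quadratic coupling,

```latex
\min_{B}\; \lVert y - f_{B}(X) \rVert_{2}^{2}
\;+\; \lambda_{1}\,\lVert B \rVert_{1}
\;+\; \lambda_{2} \sum_{j,k} w_{jk}\,\lVert B_{j\cdot} - B_{k\cdot} \rVert_{2}^{2},
```

where the first term supplies the guidance of the target variable, the second pursues sparsity of the input variables, and the third encourages similar variables to receive similar coefficients.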
Polarization leakage in epoch of reionization windows: III. Wide-field effects of narrow-field arrays | Leakage of polarized Galactic diffuse emission into total intensity can
potentially mimic the 21-cm signal coming from the epoch of reionization (EoR),
as both of them might have fluctuating spectral structure. Although we are
sensitive to the EoR signal only in small fields of view, chromatic sidelobes
from further away can contaminate the inner region. Here, we explore the
effects of leakage into the 'EoR window' of the cylindrically averaged power
spectra (PS) within wide fields of view using both observation and simulation
of the 3C196 and NCP fields, two observing fields of the LOFAR-EoR project. We
present the polarization PS of two one-night observations of the two fields and
find that the NCP field has higher fluctuations along frequency, and
consequently exhibits more power at high-$k_\parallel$ that could potentially
leak to Stokes $I$. Subsequently, we simulate LOFAR observations of Galactic
diffuse polarized emission based on a model to assess what fraction of
polarized power leaks into Stokes $I$ because of the primary beam. We find that
the rms fractional leakage over the instrumental $k$-space is $0.35\%$ in the
3C196 field and $0.27\%$ in the NCP field, and it does not change significantly
within the diameters of $15^\circ$, $9^\circ$ and $4^\circ$. Based on the
observed PS and simulated fractional leakage, we show that a similar level of
leakage into Stokes $I$ is expected in the 3C196 and NCP fields, and the
leakage can be considered to be a bias in the PS.
| 0 | 1 | 0 | 0 | 0 | 0 |
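A generic recipe for the cylindrically averaged PS the abstract relies on (our sketch, not the LOFAR-EoR pipeline; the function name and binning choices are assumptions):

```python
import numpy as np

def cylindrical_ps(cube, dx, dnu, n_bins=10):
    """Cylindrically averaged power spectrum of an (nx, ny, nf) sky-frequency
    cube: |FFT|^2 averaged over annuli in k_perp, per k_parallel mode."""
    nx, ny, nf = cube.shape
    p3d = np.abs(np.fft.fftn(cube)) ** 2
    kx = np.fft.fftfreq(nx, d=dx)
    ky = np.fft.fftfreq(ny, d=dx)
    kperp = np.sqrt(kx[:, None] ** 2 + ky[None, :] ** 2).ravel()
    kpar = np.abs(np.fft.fftfreq(nf, d=dnu))
    edges = np.linspace(0.0, kperp.max() * (1 + 1e-9), n_bins + 1)
    which = np.digitize(kperp, edges) - 1                # annulus index per uv cell
    counts = np.maximum(np.bincount(which, minlength=n_bins), 1)
    ps = np.stack([np.bincount(which, weights=p3d[:, :, k].ravel(),
                               minlength=n_bins) / counts
                   for k in range(nf)], axis=1)          # shape (n_bins, nf)
    return edges, kpar, ps

# White-noise toy cube: the cylindrical PS should come out roughly flat.
edges, kpar, ps = cylindrical_ps(
    np.random.default_rng(0).normal(size=(32, 32, 16)), dx=1.0, dnu=1.0)
```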
Image Reconstruction using Matched Wavelet Estimated from Data Sensed Compressively using Partial Canonical Identity Matrix | This paper proposes a joint framework wherein lifting-based, separable,
image-matched wavelets are estimated from compressively sensed (CS) images and
used for the reconstruction of the same. A matched wavelet can be easily
designed if the full image is available, and it may provide better
reconstruction results in CS applications than a standard wavelet sparsifying
basis. Since in CS applications we have the compressively sensed image instead
of the full image, existing methods of designing matched wavelets cannot be
used. Thus, we propose a joint framework that estimates the matched wavelet
from the compressively sensed images and also reconstructs the full images.
This paper has three significant contributions. First, a lifting-based,
image-matched separable wavelet is designed from compressively sensed images
and is also used
to reconstruct the same. Second, a simple sensing matrix is employed to sample
data at sub-Nyquist rate such that sensing and reconstruction time is reduced
considerably without any noticeable degradation in the reconstruction
performance. Third, a new multi-level L-Pyramid wavelet decomposition strategy
is provided for separable wavelet implementation on images that leads to
improved reconstruction performance. Compared to CS-based reconstruction using
standard wavelets with Gaussian sensing matrix and with existing wavelet
decomposition strategy, the proposed methodology provides faster and better
image reconstruction in compressive sensing application.
| 1 | 0 | 0 | 0 | 0 | 0 |
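The partial canonical identity (PCI) sensing named in the title amounts to keeping a random subset of pixels, i.e. random rows of the identity matrix. Below is a minimal sketch of that sensing step and its adjoint (the matched-wavelet estimation itself is the paper's contribution and is not reproduced here; function names are ours):

```python
import numpy as np

def pci_sense(image, m, seed=None):
    """Sample m randomly chosen pixels -- m random rows of the identity matrix
    applied to the vectorized image. Returns measurements and their indices."""
    rng = np.random.default_rng(seed)
    x = image.ravel()
    idx = rng.choice(x.size, size=m, replace=False)
    return x[idx], idx

def pci_adjoint(y, idx, shape):
    """Adjoint of PCI sensing: scatter measurements back onto the image grid,
    zeros elsewhere -- a common starting point for iterative CS recovery."""
    x = np.zeros(int(np.prod(shape)))
    x[idx] = y
    return x.reshape(shape)

# 30% sampling of a 64x64 image, then the zero-filled back-projection.
img = np.random.default_rng(1).random((64, 64))
y, idx = pci_sense(img, m=int(0.3 * img.size), seed=2)
x0 = pci_adjoint(y, idx, img.shape)
```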
Python Implementation and Construction of Finite Abelian Groups | Here we present a working framework to establish finite abelian groups in
Python. The primary aim is to allow new A-level students to work with examples
of finite abelian groups using open source software. We include the code used
in the implementation of the framework. We also prove some useful results
regarding finite abelian groups which are used to establish the functions and
help show how number theoretic results can blend with computational power when
studying algebra. The groups established are based on modular multiplication
and addition. We include direct products of cyclic groups, meaning the user has
access to all finite abelian groups.
| 1 | 0 | 1 | 0 | 0 | 0 |
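In the spirit of that framework, here is a minimal Python sketch of modular groups and their direct products (our function names, not the paper's code):

```python
from itertools import product
from math import gcd

def Zn(n):
    """Cyclic group (Z_n, +) as an (elements, operation) pair."""
    return list(range(n)), lambda a, b: (a + b) % n

def Un(n):
    """Multiplicative group U(n): units modulo n under multiplication."""
    return [a for a in range(1, n) if gcd(a, n) == 1], lambda a, b: (a * b) % n

def direct_product(g, h):
    """Direct product of two (elements, operation) groups, componentwise."""
    (ge, gop), (he, hop) = g, h
    return (list(product(ge, he)),
            lambda x, y: (gop(x[0], y[0]), hop(x[1], y[1])))

# Z_2 x Z_3 (isomorphic to Z_6); every finite abelian group is such a product.
elems, op = direct_product(Zn(2), Zn(3))
print(op((1, 2), (1, 1)))  # ((1+1) mod 2, (2+1) mod 3) = (0, 0)
```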
Near-Optimal Closeness Testing of Discrete Histogram Distributions | We investigate the problem of testing the equivalence between two discrete
histograms. A {\em $k$-histogram} over $[n]$ is a probability distribution that
is piecewise constant over some set of $k$ intervals over $[n]$. Histograms
have been extensively studied in computer science and statistics. Given a set
of samples from two $k$-histogram distributions $p, q$ over $[n]$, we want to
distinguish (with high probability) between the cases that $p = q$ and
$\|p-q\|_1 \geq \epsilon$. The main contribution of this paper is a new
algorithm for this testing problem and a nearly matching information-theoretic
lower bound. Specifically, the sample complexity of our algorithm matches our
lower bound up to a logarithmic factor, improving on previous work by
polynomial factors in the relevant parameters. Our algorithmic approach applies
in a more general setting and yields improved sample upper bounds for testing
closeness of other structured distributions as well.
| 1 | 0 | 1 | 1 | 0 | 0 |
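For contrast with the paper's histogram-specific algorithm, a minimal sketch of the classical chi-squared-style closeness statistic (the Chan-Diakonikolas-Valiant-Valiant $\ell_2$ statistic; thresholds and Poissonization details are omitted here):

```python
import numpy as np

def closeness_statistic(x_counts, y_counts):
    """Z = sum_i ((X_i - Y_i)^2 - X_i - Y_i) / (X_i + Y_i) over occupied bins.
    E[Z] = 0 when p = q (under Poissonized sampling); Z grows with ||p - q||_2,
    so thresholding Z yields a closeness test."""
    x = np.asarray(x_counts, dtype=float)
    y = np.asarray(y_counts, dtype=float)
    m = (x + y) > 0                         # 0/0 bins contribute nothing
    return np.sum(((x[m] - y[m]) ** 2 - x[m] - y[m]) / (x[m] + y[m]))

# Identical distributions: the statistic concentrates near 0.
rng = np.random.default_rng(0)
p = np.full(100, 0.01)
print(closeness_statistic(rng.multinomial(5000, p), rng.multinomial(5000, p)))
```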