Dataset schema: title — string (length 7 to 239); abstract — string (length 7 to 2.76k); six binary subject labels — cs, phy, math, stat, quantitative biology, quantitative finance (each int64, 0 or 1; a paper can carry several labels). Each record below gives a title, an abstract, and a Labels line.
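To make the schema concrete, here is a minimal loading sketch. The file name `arxiv_labels.csv` and the CSV format are assumptions for illustration; the actual storage format is not specified above.

```python
# Minimal sketch of loading and exploring this dataset, assuming the records
# are stored in a CSV file named "arxiv_labels.csv" (file name and format are
# illustrative, not taken from the source).
import pandas as pd

LABELS = ["cs", "phy", "math", "stat",
          "quantitative biology", "quantitative finance"]

df = pd.read_csv("arxiv_labels.csv")  # columns: title, abstract, plus LABELS

# Each label is a 0/1 flag; a paper can belong to several fields at once.
print(df[LABELS].sum())                      # papers per field
print((df[LABELS].sum(axis=1) > 1).mean())   # fraction with multiple labels
```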
High-Resolution Breast Cancer Screening with Multi-View Deep Convolutional Neural Networks
Advances in deep learning for natural images have prompted a surge of interest in applying similar techniques to medical images. The majority of the initial attempts focused on replacing the input of a deep convolutional neural network with a medical image, which does not take into consideration the fundamental differences between these two types of images. Specifically, fine details are necessary for detection in medical images, unlike in natural images, where coarse structures matter most. This difference makes it inadequate to use the existing network architectures developed for natural images, because they work on heavily downscaled images to reduce the memory requirements, hiding details necessary to make accurate predictions. Additionally, a single exam in medical imaging often comes with a set of views which must be fused in order to reach a correct conclusion. In our work, we propose to use a multi-view deep convolutional neural network that handles a set of high-resolution medical images. We evaluate it on large-scale mammography-based breast cancer screening (BI-RADS prediction) using 886,000 images. We focus on investigating the impact of the training set size and image size on the prediction accuracy. Our results highlight that performance increases with the size of the training set, and that the best performance can only be achieved using the original resolution. In a reader study, performed on a random subset of the test set, we confirmed the efficacy of our model, which achieved performance comparable to a committee of radiologists when presented with the same data.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
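As a concrete illustration of the multi-view idea in the abstract above, a minimal PyTorch sketch follows: one weight-shared convolutional column per view, with features concatenated before a shared classifier head. The four views, three output classes, and all layer sizes are illustrative assumptions, not the authors' architecture.

```python
# Minimal multi-view CNN sketch: per-view columns, concatenated features.
import torch
import torch.nn as nn

class MultiViewCNN(nn.Module):
    def __init__(self, n_views=4, n_classes=3):
        super().__init__()
        # One weight-shared column applied to each high-resolution view.
        self.column = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(64 * n_views, n_classes)

    def forward(self, views):            # views: list of (B, 1, H, W) tensors
        feats = [self.column(v) for v in views]
        return self.head(torch.cat(feats, dim=1))

model = MultiViewCNN()
views = [torch.randn(2, 1, 256, 256) for _ in range(4)]
print(model(views).shape)                # torch.Size([2, 3])
```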
Adaptive posterior contraction rates for the horseshoe
We investigate the frequentist properties of Bayesian procedures for estimation based on the horseshoe prior in the sparse multivariate normal means model. Previous theoretical results assumed that the sparsity level, that is, the number of signals, was known. We drop this assumption and characterize the behavior of the maximum marginal likelihood estimator (MMLE) of a key parameter of the horseshoe prior. We prove that the MMLE is an effective estimator of the sparsity level, in the sense that it leads to (near) minimax optimal estimation of the underlying mean vector generating the data. Besides this empirical Bayes procedure, we consider the hierarchical Bayes method of putting a prior on the unknown sparsity level as well. We show that both Bayesian techniques lead to rate-adaptive optimal posterior contraction, which implies that the horseshoe posterior is a good candidate for generating rate-adaptive credible sets.
Labels: cs=0, phy=0, math=1, stat=1, quantitative biology=0, quantitative finance=0
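For reference, the standard statement of the model and prior from the broader horseshoe literature (not spelled out in the abstract itself); $\tau$ is the global parameter whose MMLE the abstract studies:

```latex
% Sparse normal means model with the horseshoe prior (standard form):
Y_i = \theta_i + \varepsilon_i, \qquad \varepsilon_i \sim \mathcal{N}(0, 1),
\qquad
\theta_i \mid \lambda_i, \tau \sim \mathcal{N}(0, \lambda_i^2 \tau^2),
\qquad
\lambda_i \sim C^{+}(0, 1).
```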
A Convex Optimization Approach to Dynamic Programming in Continuous State and Action Spaces
A convex optimization-based method is proposed to numerically solve dynamic programs in continuous state and action spaces. This approach, which uses a discretization of the state space, has the following salient features. First, by introducing an auxiliary optimization variable that assigns the contribution of each grid point, it does not require interpolation when solving the associated Bellman equation and constructing a control policy. Second, the proposed method allows us to solve the Bellman equation with a desired level of precision via convex programming in the case of linear systems and convex costs; in this case, we can also construct a control policy whose performance converges to the optimum as the grid resolution becomes finer. Third, when a nonlinear control-affine system is considered, the convex optimization approach provides an approximate control policy with a provable suboptimality bound. Fourth, for general cases, the proposed convex formulation of dynamic programming operators can be simply modified into a nonconvex bi-level program, in which the inner problem is a linear program, without losing convergence properties. From our convex methods and analyses, we observe that convexity in dynamic programming deserves attention, as it can play a critical role in obtaining a tractable and convergent numerical solution.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Investigating Enactive Learning for Autonomous Intelligent Agents
The enactive approach to cognition is typically proposed as a viable alternative to traditional cognitive science. Enactive cognition displaces the explanatory focus from the internal representations of the agent to its direct sensorimotor interaction with the environment. In this paper, we investigate enactive learning by means of artificial agent simulations. We compare the performance of an enactive agent to that of an agent operating on classical reinforcement learning in foraging tasks within maze environments. The characteristics of the agents are analysed in terms of the accessibility of the environmental states, goals, and exploration/exploitation tradeoffs. We confirm that the enactive agent can successfully interact with its environment and learn to avoid unfavourable interactions using intrinsically defined goals. The performance of the enactive agent is shown to be limited by the number of affordable actions.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Computability of semicomputable manifolds in computable topological spaces
We study computable topological spaces and semicomputable and computable sets in these spaces. In particular, we investigate conditions under which semicomputable sets are computable. We prove that a semicomputable compact manifold $M$ is computable if its boundary $\partial M$ is computable. We also show how this result, combined with a certain construction that compactifies a semicomputable set, leads to the conclusion that some noncompact semicomputable manifolds in computable metric spaces are computable.
Labels: cs=1, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Acoustic streaming and its suppression in inhomogeneous fluids
We present a theoretical and experimental study of boundary-driven acoustic streaming in an inhomogeneous fluid with variations in density and compressibility. In a homogeneous fluid this streaming results from dissipation in the boundary layers (Rayleigh streaming). We show that in an inhomogeneous fluid, an additional non-dissipative force density acts on the fluid to stabilize particular inhomogeneity configurations, which markedly alters and even suppresses the streaming flows. Our theoretical and numerical analysis of the phenomenon is supported by ultrasound experiments performed with inhomogeneous aqueous iodixanol solutions in a glass-silicon microchip.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Predicting Pulsar Scintillation from Refractive Plasma Sheets
The dynamic and secondary spectra of many pulsars show evidence for long-lived, aligned images of the pulsar that are stationary on a thin scattering sheet. One explanation for this phenomenon considers the effects of wave crests along sheets in the ionized interstellar medium, such as those due to Alfvén waves propagating along current sheets. If these sheets are closely aligned to our line-of-sight to the pulsar, high bending angles arise at the wave crests and a selection effect causes alignment of images produced at different crests, similar to grazing reflection off of a lake. Using geometric optics, we develop a simple parameterized model of these corrugated sheets that can be constrained with a single observation and that makes observable predictions for variations in the scintillation of the pulsar over time and frequency. This model reveals qualitative differences between lensing from overdense and underdense corrugated sheets: Only if the sheet is overdense compared to the surrounding interstellar medium can the lensed images be brighter than the line-of-sight image to the pulsar, and the faint lensed images are closer to the pulsar at higher frequencies if the sheet is underdense, but at lower frequencies if the sheet is overdense.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Disruptive Behavior Disorder (DBD) Rating Scale for Georgian Population
In the present study, the Parent/Teacher Disruptive Behavior Disorder (DBD) rating scale, based on the Diagnostic and Statistical Manual of Mental Disorders (DSM-IV-TR [APA, 2000]) and developed by Pelham and colleagues (Pelham et al., 1992), was translated and adapted for the assessment of childhood behavioral abnormalities, especially ADHD, ODD, and CD, in Georgian children and adolescents. The DBD rating scale was translated into Georgian using the back-translation technique by English-language philologists and was checked and corrected by qualified Georgian psychologists and a psychiatrist. Children and adolescents aged 6 to 16 years (N=290; mean age 10.50, SD=2.88), including 153 males (mean age 10.42, SD=2.62) and 141 females (mean age 10.60, SD=3.14), were recruited from different public schools of Tbilisi and the Neurology Department of the Pediatric Clinic of Tbilisi State Medical University. Participants were assessed objectively via interviews with parents/teachers and by qualified psychologists in three different settings: school, home, and clinic. DBD total scores revealed statistically significant differences between healthy controls (M=27.71, SD=17.26) and children and adolescents with ADHD (M=61.51, SD=22.79). Statistically significant differences were also found for the inattentive subtype between the control (M=8.68, SD=5.68) and ADHD (M=18.15, SD=6.57) groups. In general, children and adolescents with ADHD scored higher on the DBD than typically developing peers. The study also examined gender-wise prevalence among children and adolescents with ADHD, ODD, and CD, revealing a preponderance of males over females in all investigated categories.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
A GPU Poisson-Fermi Solver for Ion Channel Simulations
The Poisson-Fermi model is an extension of the classical Poisson-Boltzmann model that includes the steric and correlation effects of ions and water, treated as nonuniform spheres in aqueous solutions. Poisson-Boltzmann electrostatic calculations are essential but computationally very demanding for molecular dynamics or continuum simulations of complex systems in molecular biophysics and electrochemistry. The graphics processing unit (GPU), with its enormous arithmetic capability and streaming memory bandwidth, is now a powerful engine for scientific as well as industrial computing. We propose two parallel GPU algorithms, a linear solver and a nonlinear solver, for solving the Poisson-Fermi equation approximated by the standard finite difference method in 3D, to study biological ion channels with crystallized structures from the Protein Data Bank, for example. Numerical methods for both the linear and nonlinear solvers in the parallel algorithms are given in detail to illustrate the salient features of the CUDA (compute unified device architecture) software platform in the implementation. We show that the parallel algorithms on the GPU achieve 22.8x and 16.9x speedups over the sequential algorithms on the CPU (central processing unit) for the linear solver time and total runtime, respectively.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
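The finite-difference discretization mentioned above can be illustrated with a CPU toy: Jacobi iteration for the linear Poisson equation on a 3D grid, in NumPy rather than CUDA. The grid size and point-charge density are made up, and the nonlinear Fermi correction is omitted.

```python
# Illustrative NumPy sketch: standard 7-point finite-difference Poisson
# equation -laplacian(phi) = rho on a 3D grid, zero Dirichlet boundaries,
# solved by Jacobi iteration. A CPU toy, not the paper's CUDA solver.
import numpy as np

n, h = 32, 1.0 / 32
rho = np.zeros((n, n, n))
rho[n // 2, n // 2, n // 2] = 1.0      # a single point charge (illustrative)
phi = np.zeros_like(rho)

for _ in range(500):                   # Jacobi sweeps
    phi[1:-1, 1:-1, 1:-1] = (
        phi[2:, 1:-1, 1:-1] + phi[:-2, 1:-1, 1:-1] +
        phi[1:-1, 2:, 1:-1] + phi[1:-1, :-2, 1:-1] +
        phi[1:-1, 1:-1, 2:] + phi[1:-1, 1:-1, :-2] +
        h * h * rho[1:-1, 1:-1, 1:-1]
    ) / 6.0

print("peak potential:", phi.max())
```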
Quantum Monte Carlo with variable spins: fixed-phase and fixed-node approximations
We study several aspects of the recently introduced fixed-phase spin-orbit diffusion Monte Carlo (FPSODMC) method, in particular its relation to the fixed-node method and its potential use as a general approach for electronic structure calculations. We illustrate constructions of spinor-based wave functions with the full space-spin symmetry without assigning up or down spin labels to particular electrons, effectively "complexifying" even ordinary real-valued wave functions. Interestingly, with a proper choice of the simulation parameters and spin variables, such fixed-phase calculations also enable one to reach the fixed-node limit. The fixed-phase solution admits a straightforward interpretation as the lowest bosonic state in a given effective potential generated by the many-body approximate phase. In addition, the divergences present at real wave function nodes are smoothed out to lower dimensionality, thus decreasing the variation of sampled quantities and making the sampling more straightforward. We illustrate some of these properties on calculations of selected first-row systems that recover the fixed-node results with quantitatively similar levels of the corresponding biases. At the same time, the fixed-phase approach opens new possibilities for more general trial wave functions with further opportunities for increasing accuracy in practical calculations.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Robustifying Independent Component Analysis by Adjusting for Group-Wise Stationary Noise
We introduce coroICA, confounding-robust independent component analysis, a novel ICA algorithm which decomposes linearly mixed multivariate observations into independent components that are corrupted (and rendered dependent) by hidden group-wise stationary confounding. It extends the ordinary ICA model in a theoretically sound and explicit way to incorporate group-wise (or environment-wise) confounding. We show that our general noise model allows one to perform ICA in settings where other noisy ICA procedures fail. Additionally, it can be used for applications with grouped data by adjusting for different stationary noise within each group. We show that the noise model has a natural relation to causality and explain how it can be applied in the context of causal inference. In addition to our theoretical framework, we provide an efficient estimation procedure and prove identifiability of the unmixing matrix under mild assumptions. Finally, we illustrate the performance and robustness of our method on simulated data, provide audible and visual examples, and demonstrate the applicability to real-world scenarios by experiments on publicly available Antarctic ice core data as well as two EEG data sets. We provide a scikit-learn compatible pip-installable Python package coroICA as well as R and Matlab implementations, accompanied by documentation, at this https URL.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=1, quantitative finance=0
Diversification-Based Learning in Computing and Optimization
Diversification-Based Learning (DBL) derives from a collection of principles and methods introduced in the field of metaheuristics that have broad applications in computing and optimization. We show that the DBL framework goes significantly beyond that of the more recent Opposition-based learning (OBL) framework introduced in Tizhoosh (2005), which has become the focus of numerous research initiatives in machine learning and metaheuristic optimization. We unify and extend earlier proposals in metaheuristic search (Glover, 1997, Glover and Laguna, 1997) to give a collection of approaches that are more flexible and comprehensive than OBL for creating intensification and diversification strategies in metaheuristic search. We also describe potential applications of DBL to various subfields of machine learning and optimization.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
A Tutorial on Canonical Correlation Methods
Canonical correlation analysis is a family of multivariate statistical methods for the analysis of paired sets of variables. Since it was first proposed, canonical correlation analysis has, for instance, been extended to extract relations between two sets of variables when the sample size is insufficient in relation to the data dimensionality, when the relations are non-linear, and when the dimensionality is too large for human interpretation. This tutorial explains the theory of canonical correlation analysis, including its regularised, kernel, and sparse variants. Additionally, the deep and Bayesian CCA extensions are briefly reviewed. Together with the numerical examples, this overview provides a coherent compendium on the applicability of the variants of canonical correlation analysis. By bringing together techniques for solving the optimisation problems, evaluating the statistical significance and generalisability of the canonical correlation model, and interpreting the relations, we hope that this article can serve as a hands-on tool for applying canonical correlation methods in data analysis.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
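A small runnable example of plain linear CCA, the starting point of the tutorial above, using scikit-learn on synthetic paired data that share one latent signal:

```python
# Linear CCA on two synthetic "views" driven by a common latent variable.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
z = rng.normal(size=(500, 1))                    # shared latent variable
X = np.hstack([z + 0.5 * rng.normal(size=(500, 1)) for _ in range(3)])
Y = np.hstack([z + 0.5 * rng.normal(size=(500, 1)) for _ in range(2)])

cca = CCA(n_components=1).fit(X, Y)
Xc, Yc = cca.transform(X, Y)
print(np.corrcoef(Xc[:, 0], Yc[:, 0])[0, 1])     # first canonical correlation
```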
Towards Neural Co-Processors for the Brain: Combining Decoding and Encoding in Brain-Computer Interfaces
The field of brain-computer interfaces is poised to advance from the traditional goal of controlling prosthetic devices using brain signals to combining neural decoding and encoding within a single neuroprosthetic device. Such a device acts as a "co-processor" for the brain, with applications ranging from inducing Hebbian plasticity for rehabilitation after brain injury to reanimating paralyzed limbs and enhancing memory. We review recent progress in simultaneous decoding and encoding for closed-loop control and plasticity induction. To address the challenge of multi-channel decoding and encoding, we introduce a unifying framework for developing brain co-processors based on artificial neural networks and deep learning. These "neural co-processors" can be used to jointly optimize cost functions with the nervous system to achieve desired behaviors ranging from targeted neuro-rehabilitation to augmentation of brain function.
Labels: cs=0, phy=0, math=0, stat=0, quantitative biology=1, quantitative finance=0
Enhanced conservation properties of Vlasov codes through coupling with conservative fluid models
Many phenomena in collisionless plasma physics require a kinetic description. The evolution of the phase space density can be modeled by means of the Vlasov equation, which has to be solved numerically in most of the relevant cases. One of the problems that often arises in such simulations is the violation of important physical conservation laws. Numerical diffusion in phase space translates into unphysical heating, which can increase the overall energy significantly, depending on the time scale and the plasma regime. In this paper, a general and straightforward way of improving the conservation properties of Vlasov schemes is presented that can potentially be applied to a variety of different codes. The basic idea is to use fluid models with good conservation properties to correct kinetic models. The higher moments that are missing from the fluid models are provided by the kinetic codes, so that the kinetic and fluid codes compensate for each other's weaknesses in a closed feedback loop.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
A note on $p^\lambda$-convex set in a complete Riemannian manifold
In this paper we generalize the notion of $\lambda$-radial contraction in complete Riemannian manifolds and develop the concept of $p^\lambda$-convex functions. We give a counterexample showing that, in general, the $\lambda$-radial contraction of a geodesic is not necessarily a geodesic. We also deduce some relations between geodesic convex sets and $p^\lambda$-convex sets and show that under certain conditions they are equivalent.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Bounded cohomology and virtually free hyperbolically embedded subgroups
Using a probabilistic argument we show that the second bounded cohomology of an acylindrically hyperbolic group $G$ (e.g., a non-elementary hyperbolic or relatively hyperbolic group, non-exceptional mapping class group, ${\rm Out}(F_n)$, \dots) embeds via the natural restriction maps into the inverse limit of the second bounded cohomologies of its virtually free subgroups, and in fact even into the inverse limit of the second bounded cohomologies of its hyperbolically embedded virtually free subgroups. This result is new and non-trivial even in the case where $G$ is a (non-free) hyperbolic group. The corresponding statement fails in general for the third bounded cohomology, even for surface groups.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Transpiling Programmable Computable Functions to Answer Set Programs
Programming Computable Functions (PCF) is a simplified programming language which provides the theoretical basis of modern functional programming languages. Answer set programming (ASP) is a programming paradigm focused on solving search problems. In this paper we provide a translation from PCF to ASP. Using this translation it becomes possible to specify search problems using PCF.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Scattering polarization of the $d$-states of ions and solar magnetic field: Effects of isotropic collisions
Analysis of solar magnetic fields based on observations, together with theoretical interpretation of the scattering polarization, is commonly designated a high-priority area of solar research. The interpretation of the observed polarization poses a serious theoretical challenge to researchers in this field. In fact, realistic interpretations require detailed investigation of the depolarizing role of isotropic collisions with neutral hydrogen. The goal of this paper is to determine new relationships that allow the calculation of any collisional rate of the d-levels of ions simply by determining the values of $n^*$ and $E_p$, without the need to determine the interaction potentials or treat the dynamics of collisions. The determination of $n^*$ and $E_p$ is easy and based on atomic data usually available online. Accurate collisional rates allow reliable diagnostics of solar magnetic fields. In this work we applied our collisional FORTRAN code to a large number of cases involving complex and simple ions. The results were then injected into a genetic programming code developed in the C language in order to infer original relationships that should be of great help in solar applications. We discuss the accuracy of our collisional rates in the cases of polarized complex atoms and atoms with hyperfine structure. The relationships are expressed on the tensorial basis, and we explain how to include their contributions in the master equation giving the variation of the density matrix elements. As a test, we compared the results obtained through the general relationships provided in this work with those obtained directly by running our collision code. These comparisons show an average error of about 10%.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Persistence Codebooks for Topological Data Analysis
Topological data analysis, such as persistent homology, has shown beneficial properties for machine learning in many tasks. Topological representations, such as the persistence diagram (PD), however, have a complex structure (a multiset of intervals) which makes them difficult to combine with typical machine learning workflows. We present novel compact fixed-size vectorial representations of PDs based on clustering and bag-of-words encodings that cope well with the inherent sparsity of PDs. Our novel representations outperform state-of-the-art approaches from topological data analysis and are computationally more efficient.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
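A hedged sketch of the codebook idea: cluster all persistence-diagram points with k-means, then encode each diagram as a histogram over the learned codebook. The random diagrams and the 8-word codebook are stand-ins, and this simplified pipeline omits the paper's refinements.

```python
# Bag-of-words encoding of persistence diagrams via k-means codebook.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Each diagram: a variable-size set of (birth, persistence) points.
diagrams = [rng.random((rng.integers(10, 40), 2)) for _ in range(20)]

codebook = KMeans(n_clusters=8, n_init=10, random_state=0).fit(np.vstack(diagrams))

def codebook_histogram(pd_points):
    words = codebook.predict(pd_points)
    return np.bincount(words, minlength=8) / len(words)

X = np.array([codebook_histogram(d) for d in diagrams])  # fixed-size vectors
print(X.shape)  # (20, 8)
```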
Multivariate Analysis for Computing Maxima in High Dimensions
We study the problem of computing the \textsc{Maxima} of a set of $n$ $d$-dimensional points. For dimensions 2 and 3, there are algorithms to solve the problem with order-oblivious instance-optimal running time. However, in higher dimensions there is still room for improvements. We present an algorithm sensitive to the structural entropy of the input set, which improves the running time, for large classes of instances, on the best solution for \textsc{Maxima} to date for $d \ge 4$.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Generalized multi-Galileons, covariantized new terms, and the no-go theorem for non-singular cosmologies
It has been pointed out that non-singular cosmological solutions in second-order scalar-tensor theories generically suffer from gradient instabilities. We extend this no-go result to second-order gravitational theories with an arbitrary number of interacting scalar fields. Our proof follows directly from the action of generalized multi-Galileons, and thus is different from and complementary to that based on the effective field theory approach. Several new terms for generalized multi-Galileons on a flat background were proposed recently. We find a covariant completion of them and confirm that they do not participate in the no-go argument.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Tonic activation of extrasynaptic NMDA receptors decreases intrinsic excitability and promotes bistability in a model of neuronal activity
NMDA receptors (NMDA-R) typically contribute to excitatory synaptic transmission in the central nervous system. While calcium influx through NMDA-R plays a critical role in synaptic plasticity, indirect experimental evidence also exists demonstrating actions of NMDAR-mediated calcium influx on neuronal excitability through the activation of calcium-activated potassium channels. So far, however, this mechanism has not been studied theoretically. Our theoretical model provides a simple description of neuronal electrical activity that includes the tonic activity of NMDA receptors and a cytosolic calcium compartment. We show that calcium influx through NMDA-R can be directly coupled to activation of calcium-activated potassium channels, providing an overall inhibitory effect on neuronal excitability. Furthermore, the presence of tonic NMDA-R activity promotes bistability in electrical activity by dramatically increasing the stimulus interval over which both a stable steady state and repetitive firing can exist. These results could provide an intrinsic mechanism for the constitution of memory traces in neuronal circuits. They also shed light on the way beta-amyloids can decrease neuronal activity when interfering with NMDA-R in Alzheimer's disease.
Labels: cs=0, phy=0, math=0, stat=0, quantitative biology=1, quantitative finance=0
Using Minimum Path Cover to Boost Dynamic Programming on DAGs: Co-Linear Chaining Extended
Aligning sequencing reads on graph representations of genomes is an important ingredient of pan-genomics. Such approaches typically find a set of local anchors that indicate plausible matches between substrings of a read and subpaths of the graph. These anchor matches are then combined to form a (semi-local) alignment of the complete read on a subpath. Co-linear chaining is an algorithmically rigorous approach to combining the anchors. It is a well-known approach for the case of two sequences as inputs. Here we extend the approach so that one of the inputs can be a directed acyclic graph (DAG), e.g. a splicing graph in transcriptomics or a variant graph in pan-genomics. This extension to DAGs turns out to have a tight connection to the minimum path cover problem, which asks for a minimum-cardinality set of paths that cover all the nodes of a DAG. We study the case when the size $k$ of a minimum path cover is small, which is often the case in practice. First, we propose an algorithm for finding a minimum path cover of a DAG $(V,E)$ in $O(k|E|\log|V|)$ time, improving all known time-bounds when $k$ is small and the DAG is not too dense. Second, we introduce a general technique for extending dynamic programming (DP) algorithms from sequences to DAGs. This is enabled by our minimum path cover algorithm, and works by mimicking the DP algorithm for sequences on each path of the minimum path cover. This technique generally produces algorithms that are slower than their counterparts on sequences only by a factor $k$. It can be applied, for example, to the classical longest increasing subsequence and longest common subsequence problems, extended to labeled DAGs. Finally, we apply this technique to the co-linear chaining problem, and also implement the new co-linear chaining approach. Experiments on splicing graphs show that the new method is efficient in practice as well.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
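For intuition, the classical reduction for the vertex-disjoint variant of minimum path cover on a DAG (min paths = |V| minus a maximum bipartite matching) can be sketched with networkx. Note that the paper's $O(k|E|\log|V|)$ algorithm targets the more general covering problem, so this is background, not the paper's method.

```python
# Vertex-disjoint minimum path cover of a DAG via bipartite matching.
import networkx as nx

edges = [(0, 1), (0, 2), (1, 3), (2, 3), (3, 4)]
nodes = range(5)

B = nx.Graph()
B.add_nodes_from((("out", u) for u in nodes), bipartite=0)
B.add_nodes_from((("in", v) for v in nodes), bipartite=1)
B.add_edges_from(((("out", u), ("in", v)) for u, v in edges))

matching = nx.bipartite.hopcroft_karp_matching(
    B, top_nodes=[("out", u) for u in nodes])
k = len(nodes) - len(matching) // 2   # the dict lists each edge twice
print("minimum (vertex-disjoint) path cover size:", k)   # 2
```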
Visual Integration of Data and Model Space in Ensemble Learning
Ensembles of classifier models typically deliver superior performance and can outperform single classifier models given a dataset and classification task at hand. However, the gain in performance comes together with a loss in comprehensibility, posing a challenge to understanding how each model affects the classification outputs and where the errors come from. We propose a tight visual integration of the data and the model space for exploring and combining classifier models. We introduce a workflow that builds upon this visual integration and enables the effective exploration of classification outputs and models. We then present a use case in which we start with an ensemble automatically selected by a standard ensemble selection algorithm, and show how we can manipulate models and alternative combinations.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Steady States of Rotating Stars and Galaxies
A rotating continuum of particles attracted to each other by gravity may be modeled by the Euler-Poisson system. The existence of solutions is a very classical problem. Here it is proven that a curve of solutions exists, parametrized by the rotation speed, with a fixed mass independent of the speed. The rotation is allowed to vary with the distance to the axis. A special case is when the equation of state is $p=\rho^\gamma,\ 6/5<\gamma<2,\ \gamma\ne4/3$, in contrast to previous variational methods which have required $4/3 < \gamma$. The continuum of particles may alternatively be modeled microscopically by the Vlasov-Poisson system. The kinetic density is a prescribed function. We prove an analogous theorem asserting the existence of a curve of solutions with constant mass. In this model the whole range $(6/5,2)$ is allowed, including $\gamma=4/3$.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Critical properties of the contact process with quenched dilution
We have studied the critical properties of the contact process on a square lattice with quenched site dilution by Monte Carlo simulations. This was achieved by generating the percolating cluster in advance, through the use of an appropriate epidemic model, and then simulating the contact process on top of the percolating cluster. The dynamic critical exponents were calculated by assuming an activated scaling relation and the static exponents by the usual power-law behavior. Our results are in agreement with the prediction that the quenched diluted contact process belongs to the universality class of the random transverse-field Ising model. We have also analyzed the model and determined the phase diagram by the use of a mean-field theory that takes into account the correlation between neighboring sites.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
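A bare-bones Monte Carlo sketch of the contact process with quenched site dilution on a small periodic square lattice; the rates, sizes, and uniform-dilution setup are illustrative, and the paper additionally restricts the dynamics to the percolating cluster and measures activated scaling.

```python
# Contact process with quenched site dilution (toy NumPy simulation).
import numpy as np

rng = np.random.default_rng(1)
L, p_dilute, lam, steps = 32, 0.2, 3.5, 20_000
alive = rng.random((L, L)) > p_dilute        # quenched dilution: open sites
active = alive.copy()                        # start fully infected

for _ in range(steps):
    i, j = rng.integers(L, size=2)
    if not active[i, j]:
        continue
    if rng.random() < 1.0 / (1.0 + lam):     # recovery (rate 1)
        active[i, j] = False
    else:                                    # infection attempt (rate lambda)
        di, dj = ((-1, 0), (1, 0), (0, -1), (0, 1))[rng.integers(4)]
        ni, nj = (i + di) % L, (j + dj) % L
        if alive[ni, nj]:                    # diluted sites never activate
            active[ni, nj] = True

print("surviving density:", active.mean())
```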
A Unified Strouhal-Reynolds Number Relationship for Laminar Vortex Streets Generated by Different Shaped Obstacles
A new Strouhal-Reynolds number relationship, $St=1/(A+B/Re)$, has been recently proposed based on observations of laminar vortex shedding from circular cylinders in a flowing soap film. Since the new $St$-$Re$ relation was derived from a general physical consideration, it raises the possibility that it may be applicable to vortex shedding from bodies other than circular ones. The work presented herein provides experimental evidence that this is the case. Our measurements also show that in the asymptotic limit ($Re\rightarrow\infty$), $St_{\infty}=1/A\simeq0.21$ is constant independent of rod shapes, leaving $B$ the only parameter that is shape dependent.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
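The proposed relation is easy to evaluate numerically. Here $A = 1/0.21$ follows the abstract's asymptote $St_\infty = 1/A \simeq 0.21$, while $B$ is an arbitrary illustrative shape-dependent constant.

```python
# Evaluate St = 1/(A + B/Re) and its high-Re asymptote.
A = 1.0 / 0.21
B = 25.0                      # illustrative, shape-dependent

def strouhal(Re, A=A, B=B):
    return 1.0 / (A + B / Re)

for Re in (50, 100, 500, 1e6):
    print(f"Re={Re:>9}: St={strouhal(Re):.4f}")
# As Re -> infinity, St -> 1/A ~= 0.21, independent of B (i.e., of shape).
```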
On Data-Dependent Random Features for Improved Generalization in Supervised Learning
The randomized-feature approach has been successfully employed in large-scale kernel approximation and supervised learning. The distribution from which the random features are drawn impacts the number of features required to efficiently perform a learning task. Recently, it has been shown that employing data-dependent randomization improves the performance in terms of the required number of random features. In this paper, we are concerned with the randomized-feature approach in supervised learning for good generalizability. We propose the Energy-based Exploration of Random Features (EERF) algorithm, based on a data-dependent score function that explores the set of possible features and exploits the promising regions. We prove that the proposed score function, with high probability, recovers the spectrum of the best fit within the model class. Our empirical results on several benchmark datasets further verify that our method requires a smaller number of random features to achieve a certain generalization error compared to the state of the art, while introducing negligible pre-processing overhead. EERF can be implemented in a few lines of code and requires no additional tuning parameters.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
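For background, a sketch of plain (data-independent) random Fourier features for the Gaussian kernel — the baseline that EERF improves on by scoring and reweighting frequencies in a data-dependent way; the EERF score itself is not reproduced here.

```python
# Plain random Fourier features approximating the Gaussian kernel.
import numpy as np

def random_fourier_features(X, n_features=200, gamma=1.0, seed=0):
    """Approximate k(x, y) = exp(-gamma * ||x - y||^2) via cosine features."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

X = np.random.default_rng(1).normal(size=(5, 3))
Z = random_fourier_features(X)
print(Z @ Z.T)   # approximates the 5x5 Gaussian kernel matrix of X
```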
Anomalous electron spectrum and its relation to peak structure of electron scattering rate in cuprate superconductors
The recent discovery of a direct link between the sharp peak in the electron quasiparticle scattering rate of cuprate superconductors and the well-known peak-dip-hump structure in the electron quasiparticle excitation spectrum calls for an explanation. Within the framework of the kinetic-energy-driven superconducting mechanism, the complicated line shape in the electron quasiparticle excitation spectrum of cuprate superconductors is investigated. It is shown that the interaction between electrons by the exchange of spin excitations generates a notable peak structure in the electron quasiparticle scattering rate around the antinodal and nodal regions. However, this peak structure disappears at the hot spots, so that the striking peak-dip-hump structure develops around the antinodal and nodal regions and vanishes at the hot spots. The theory also confirms that the sharp peak observed in the electron quasiparticle scattering rate is directly responsible for the remarkable peak-dip-hump structure in the electron quasiparticle excitation spectrum of cuprate superconductors.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
New goodness-of-fit diagnostics for conditional discrete response models
This paper proposes new specification tests for conditional models with discrete responses, which are key to applying efficient maximum likelihood methods, obtaining consistent estimates of partial effects, and getting appropriate predictions of the probability of future events. In particular, we test static and dynamic ordered choice model specifications and can cover infinite-support distributions, e.g., for count data. The traditional approach to specification testing of discrete response models is based on the probability integral transform of jittered discrete data, which leads to continuous, uniform, iid series under the true conditional distribution. Standard specification testing techniques for continuous variables can then be applied to the transformed series, but the extra randomness from the jitters affects the power properties of these methods. In this paper we investigate an alternative transformation based only on the original discrete data that avoids any randomization. We analyze the asymptotic properties of goodness-of-fit tests based on this new transformation and explore the finite-sample properties of a bootstrap algorithm to approximate the critical values of test statistics, which are model and parameter dependent. We show analytically and in simulations that our approach dominates the methods based on randomization in terms of power. We apply the new tests to models of the monetary policy conducted by the Federal Reserve.
Labels: cs=0, phy=0, math=1, stat=1, quantitative biology=0, quantitative finance=0
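The "traditional approach" the abstract refers to can be sketched directly: for discrete $X \sim F$ and independent $V \sim U(0,1)$, the randomized PIT $U = F(X-1) + V\,f(X)$ is exactly uniform under the true model; $V$ is the extra jitter the authors avoid. A NumPy/SciPy check on Poisson data:

```python
# Randomized probability integral transform for discrete (Poisson) data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
lam = 3.0
x = rng.poisson(lam, size=10_000)
v = rng.uniform(size=x.size)

u = stats.poisson.cdf(x - 1, lam) + v * stats.poisson.pmf(x, lam)
# Under the correct model, u should look uniform on [0, 1]:
print(stats.kstest(u, "uniform"))
```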
Tailoring Heterovalent Interface Formation with Light
Integrating different semiconductor materials into an epitaxial device structure offers additional degrees of freedom to select for optimal material properties in each layer. However, interfaces between materials with different valences (i.e., III-V, II-VI, and IV semiconductors) can be difficult to form with high quality. Using ZnSe/GaAs as a model system, we explore the use of UV illumination during heterovalent interface growth by molecular beam epitaxy as a way to modify the interface properties. We find that UV illumination alters the mixture of chemical bonds at the interface, permitting the formation of Ga-Se bonds that help to passivate the underlying GaAs layer. Illumination also helps to reduce defects in the ZnSe epilayer. These results suggest that moderate UV illumination during growth may be used as a way to improve the optical properties of both the GaAs and ZnSe layers on either side of the interface.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Looking backward: From Euler to Riemann
We survey the main ideas in the early history of the subjects on which Riemann worked and that led to some of his most important discoveries. The subjects discussed include the theory of functions of a complex variable, elliptic and Abelian integrals, the hypergeometric series, the zeta function, topology, differential geometry, integration, and the notion of space. We shall see that among Riemann's predecessors in all these fields, one name occupies a prominent place: Leonhard Euler. The final version of this paper will appear in the book \emph{From Riemann to differential geometry and relativity} (L. Ji, A. Papadopoulos and S. Yamada, ed.) Berlin: Springer, 2017.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Algorithmic Performance-Accuracy Trade-off in 3D Vision Applications Using HyperMapper
In this paper we investigate an emerging application, 3D scene understanding, likely to be significant in the mobile space in the near future. The goal of this exploration is to reduce execution time while meeting our quality-of-result objectives. In previous work we showed for the first time that it is possible to map this application to power-constrained embedded systems, highlighting that decision choices made at the algorithmic design level have the most impact. As the algorithmic design space is too large to be exhaustively evaluated, we use a previously introduced multi-objective Random Forest Active Learning prediction framework, dubbed HyperMapper, to find good algorithmic designs. We show that HyperMapper generalizes on a recent cutting-edge 3D scene understanding algorithm and on a modern GPU-based computer architecture. HyperMapper automatically beats expert human hand-tuning of the algorithmic parameters for the class of computer vision applications considered in this paper. In addition, using crowd-sourcing via a 3D scene understanding Android app, we show that the Pareto front obtained on an embedded system can be used to accelerate the same application on all 83 crowd-sourced smartphones and tablets, with speedups ranging from 2x to over 12x.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
A comparison theorem for MW-motivic cohomology
We prove that for a finitely generated field over an infinite perfect field k, and for any integer n, the (n,n)-th MW-motivic cohomology group identifies with the n-th Milnor-Witt K-theory group of that field.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Wiener Filtering for Passive Linear Quantum Systems
This paper considers a version of the Wiener filtering problem for the equalization of passive linear quantum systems. We demonstrate that taking into consideration the quantum nature of the signals involved leads to features typically not encountered in classical equalization problems. Most significantly, finding a mean-square optimal quantum equalizing filter amounts to solving a nonconvex constrained optimization problem. We discuss two approaches to solving this problem, both involving a relaxation of the constraint. In both cases, unlike classical equalization, there is a threshold on the variance of the noise below which an improvement of the mean-square error cannot be guaranteed.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Robust Navigation In GNSS Degraded Environment Using Graph Optimization
Robust navigation in urban environments has received a considerable amount of both academic and commercial interest over recent years. This is primarily due to large commercial organizations such as Google and Uber stepping into the autonomous navigation market. Most of this research has shied away from Global Navigation Satellite System (GNSS) based navigation. The aversion to utilizing GNSS data is due to the degraded nature of the data in urban environments (e.g., multipath, poor satellite visibility). This degradation means that traditional GNSS positioning methods (e.g., the extended Kalman filter, particle filters) perform poorly. However, recent advances in robust graph-theoretic sensor fusion methods, primarily applied to Simultaneous Localization and Mapping (SLAM) based robotic applications, can also be applied to GNSS data processing. This paper utilizes one such method, the factor graph, in conjunction with several robust optimization techniques to evaluate their applicability to robust GNSS data processing. The goals of this study are two-fold. First, for GNSS applications, we experimentally evaluate the effectiveness of robust optimization techniques within a graph-theoretic estimation framework. Second, by releasing the software developed and the data sets used for this study, we introduce a new open-source front-end to the Georgia Tech Smoothing and Mapping (GTSAM) library for the purpose of integrating GNSS pseudorange observations.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
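The core idea — robust loss functions downweight multipath-corrupted measurements — can be illustrated with SciPy's Huber-loss least squares on a toy 2D trilateration problem. This is an illustration only, not the paper's GTSAM-based factor-graph pipeline.

```python
# Robust (Huber) vs. plain least-squares positioning with one range outlier.
import numpy as np
from scipy.optimize import least_squares

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
truth = np.array([3.0, 4.0])
ranges = np.linalg.norm(anchors - truth, axis=1)
ranges[0] += 15.0                     # one gross multipath-like outlier

def residuals(x):
    return np.linalg.norm(anchors - x, axis=1) - ranges

plain = least_squares(residuals, x0=[5.0, 5.0])
robust = least_squares(residuals, x0=[5.0, 5.0], loss="huber", f_scale=1.0)
print("L2    estimate:", plain.x)     # pulled toward the outlier
print("Huber estimate:", robust.x)    # close to (3, 4)
```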
On reproduction of On the regularization of Wasserstein GANs
This report has several purposes. First, it investigates the reproducibility of the submitted paper On the regularization of Wasserstein GANs (2018). Second, among the experiments performed in the submitted paper, five aspects were emphasized and reproduced: learning speed, stability, robustness against hyperparameters, estimation of the Wasserstein distance, and various sampling methods. Finally, we identify which parts of the contribution can be reproduced, and at what cost in terms of resources. All source code for the reproduction is open to the public.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Density Estimation with Contaminated Data: Minimax Rates and Theory of Adaptation
This paper studies density estimation under pointwise loss in the setting of contamination model. The goal is to estimate $f(x_0)$ at some $x_0\in\mathbb{R}$ with i.i.d. observations, $$ X_1,\dots,X_n\sim (1-\epsilon)f+\epsilon g, $$ where $g$ stands for a contamination distribution. In the context of multiple testing, this can be interpreted as estimating the null density at a point. We carefully study the effect of contamination on estimation through the following model indices: contamination proportion $\epsilon$, smoothness of target density $\beta_0$, smoothness of contamination density $\beta_1$, and level of contamination $m$ at the point to be estimated, i.e. $g(x_0)\leq m$. It is shown that the minimax rate with respect to the squared error loss is of order $$ [n^{-\frac{2\beta_0}{2\beta_0+1}}]\vee[\epsilon^2(1\wedge m)^2]\vee[n^{-\frac{2\beta_1}{2\beta_1+1}}\epsilon^{\frac{2}{2\beta_1+1}}], $$ which characterizes the exact influence of contamination on the difficulty of the problem. We then establish the minimal cost of adaptation to contamination proportion, to smoothness and to both of the numbers. It is shown that some small price needs to be paid for adaptation in any of the three cases. Variations of Lepski's method are considered to achieve optimal adaptation. The problem is also studied when there is no smoothness assumption on the contamination distribution. This setting that allows for an arbitrary contamination distribution is recognized as Huber's $\epsilon$-contamination model. The minimax rate is shown to be $$ [n^{-\frac{2\beta_0}{2\beta_0+1}}]\vee [\epsilon^{\frac{2\beta_0}{\beta_0+1}}]. $$ The adaptation theory is also different from the smooth contamination case. While adaptation to either contamination proportion or smoothness only costs a logarithmic factor, adaptation to both numbers is proved to be impossible.
Labels: cs=0, phy=0, math=1, stat=1, quantitative biology=0, quantitative finance=0
A uniform bound on the Brauer groups of certain log K3 surfaces
Let U be the complement of a smooth anticanonical divisor in a del Pezzo surface of degree at most 7 over a number field k. We show that there is an effective uniform bound for the size of the Brauer group of U in terms of the degree of k.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Composable Deep Reinforcement Learning for Robotic Manipulation
Model-free deep reinforcement learning has been shown to exhibit good performance in domains ranging from video games to simulated robotic manipulation and locomotion. However, model-free methods are known to perform poorly when the interaction time with the environment is limited, as is the case for most real-world robotic tasks. In this paper, we study how maximum entropy policies trained using soft Q-learning can be applied to real-world robotic manipulation. The application of this method to real-world manipulation is facilitated by two important features of soft Q-learning. First, soft Q-learning can learn multimodal exploration strategies by learning policies represented by expressive energy-based models. Second, we show that policies learned with soft Q-learning can be composed to create new policies, and that the optimality of the resulting policy can be bounded in terms of the divergence between the composed policies. This compositionality provides an especially valuable tool for real-world manipulation, where constructing new policies by composing existing skills can provide a large gain in efficiency over training from scratch. Our experimental evaluation demonstrates that soft Q-learning is substantially more sample efficient than prior model-free deep reinforcement learning methods, and that compositionality can be performed for both simulated and real-world tasks.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
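A toy numerical sketch of the composition idea: with maximum-entropy (soft) policies $\pi_i(a|s) \propto \exp Q_i(s,a)$, a composed policy can use averaged Q-values. The numbers are made up; the paper derives the actual suboptimality bound for such compositions.

```python
# Composing two soft (maximum-entropy) policies by averaging Q-values.
import numpy as np

def soft_policy(q, alpha=1.0):
    z = np.exp((q - q.max()) / alpha)   # softmax over actions
    return z / z.sum()

q_reach = np.array([2.0, 0.5, -1.0])    # Q-values of 3 actions, task A
q_avoid = np.array([-3.0, 0.8, 1.2])    # Q-values of 3 actions, task B

pi_composed = soft_policy((q_reach + q_avoid) / 2.0)
print(pi_composed)  # trades off both objectives rather than either alone
```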
Stimulated Raman Scattering Imposes Fundamental Limits to the Duration and Bandwidth of Temporal Cavity Solitons
Temporal cavity solitons (CS) are optical pulses that can persist in passive resonators, and they play a key role in the generation of coherent microresonator frequency combs. In resonators made of amorphous materials, such as fused silica, they can exhibit a spectral red-shift due to stimulated Raman scattering. Here we show that this Raman-induced self-frequency-shift imposes a fundamental limit on the duration and bandwidth of temporal CSs. Specifically, we theoretically predict that stimulated Raman scattering introduces a previously unidentified Hopf bifurcation that leads to destabilization of CSs at large pump-cavity detunings, limiting the range of detunings over which they can exist. We have confirmed our theoretical predictions by performing extensive experiments in several different synchronously-driven fiber ring resonators, obtaining results in excellent agreement with numerical simulations. Our results could have significant implications for the future design of Kerr frequency comb systems based on amorphous microresonators.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Investigation of Using VAE for i-Vector Speaker Verification
A new system for i-vector speaker recognition based on the variational autoencoder (VAE) is investigated. The VAE is a promising approach for developing accurate deep nonlinear generative models of complex data. Experiments show that the VAE provides speaker embeddings and can be effectively trained in an unsupervised manner. An LLR estimate for the VAE is developed. Experiments on NIST SRE 2010 data demonstrate its correctness. Additionally, we show that the performance of the VAE-based system in the i-vector space is close to that of the diagonal PLDA. Several interesting results are also observed in the experiments with $\beta$-VAE. In particular, we found that for $\beta\ll 1$, the VAE can be trained to capture the features of complex input data distributions in an effective way, which is hard to obtain in the standard VAE ($\beta=1$).
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
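A minimal PyTorch sketch of the VAE objective underlying such systems: a Gaussian encoder with the reparameterization trick and the loss reconstruction + beta * KL (beta = 1 recovers the standard VAE). The dimensions are illustrative, not tuned to i-vectors.

```python
# Minimal (beta-)VAE: Gaussian encoder, reparameterization trick, ELBO loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, d_in=100, d_z=20):
        super().__init__()
        self.enc = nn.Linear(d_in, 2 * d_z)   # outputs [mu, log_var]
        self.dec = nn.Linear(d_z, d_in)

    def forward(self, x):
        mu, log_var = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * log_var)
        return self.dec(z), mu, log_var

def elbo_loss(x, x_hat, mu, log_var, beta=1.0):
    rec = F.mse_loss(x_hat, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp())
    return rec + beta * kl

model = VAE()
x = torch.randn(8, 100)
loss = elbo_loss(x, *model(x))
loss.backward()
```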
A Unifying Framework for Convergence Analysis of Approximate Newton Methods
Many machine learning models are reformulated as optimization problems, so it is important to solve large-scale optimization problems in big data applications. Recently, subsampled Newton methods have attracted much attention owing to their efficiency at each iteration: they rectify the ordinary Newton method's weakness of a high per-iteration cost while retaining its fast convergence rate. Other efficient stochastic second-order methods have also been proposed. However, the convergence properties of these methods are still not well understood, and there are several important gaps between the current convergence theory and the performance in real applications. In this paper, we aim to fill these gaps. We propose a unifying framework to analyze the local convergence properties of second-order methods. Based on this framework, our theoretical analysis matches the performance in real applications.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
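One subsampled Newton step is easy to sketch: full gradient, Hessian formed from a random subsample. The toy below uses regularized logistic regression in NumPy; the sizes and sampling scheme are illustrative assumptions, not a method from the paper.

```python
# Subsampled Newton for regularized logistic regression (toy sketch).
import numpy as np

rng = np.random.default_rng(0)
n, d, m, reg = 5000, 20, 200, 1e-3
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = (rng.random(n) < 1 / (1 + np.exp(-X @ w_true))).astype(float)

w = np.zeros(d)
for _ in range(10):
    p = 1 / (1 + np.exp(-X @ w))
    grad = X.T @ (p - y) / n + reg * w                 # full gradient
    idx = rng.choice(n, size=m, replace=False)         # Hessian subsample
    S, ps = X[idx], p[idx]
    H = S.T @ (S * (ps * (1 - ps))[:, None]) / m + reg * np.eye(d)
    w -= np.linalg.solve(H, grad)                      # Newton-like step

p = 1 / (1 + np.exp(-X @ w))
print("final gradient norm:", np.linalg.norm(X.T @ (p - y) / n + reg * w))
```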
A Newman property for BLD-mappings
We define a Newman property for BLD-mappings and study its connections to the porosity of the branch set in the setting of generalized manifolds equipped with complete path metrics.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Learning to Use Learners' Advice
In this paper, we study a variant of the framework of online learning using expert advice with limited/bandit feedback. We consider each expert as a learning entity, seeking to more accurately reflect certain real-world applications. In our setting, the feedback at any time $t$ is limited in the sense that it is only available to the expert $i^t$ that has been selected by the central algorithm (forecaster), \emph{i.e.}, only the expert $i^t$ receives feedback from the environment and gets to learn at time $t$. We consider a generic black-box approach whereby the forecaster does not control or know the learning dynamics of the experts apart from knowing the following no-regret learning property: the average regret of any expert $j$ vanishes at a rate of at least $O(t_j^{\regretRate-1})$ with $t_j$ learning steps, where $\regretRate \in [0, 1]$ is a parameter. In the spirit of competing against the best action in hindsight in the multi-armed bandits problem, our goal here is to be competitive w.r.t. the cumulative losses the algorithm could receive by following the policy of always selecting one expert. We prove the following hardness result: without any coordination between the forecaster and the experts, it is impossible to design a forecaster achieving no-regret guarantees. In order to circumvent this hardness result, we consider a practical assumption allowing the forecaster to "guide" the learning process of the experts by filtering/blocking some of the feedbacks observed by them from the environment, \emph{i.e.}, not allowing the selected expert $i^t$ to learn at time $t$ for some time steps. Then, we design a novel no-regret learning algorithm \algo for this problem setting by carefully guiding the feedbacks observed by experts. We prove that \algo achieves the worst-case expected cumulative regret of $O(\Time^\frac{1}{2 - \regretRate})$ after $\Time$ time steps.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Fast and Accurate Time Series Classification with WEASEL
Time series (TS) occur in many scientific and commercial applications, ranging from earth surveillance to industry automation to smart grids. An important type of TS analysis is classification, which can, for instance, improve energy load forecasting in smart grids by detecting the types of electronic devices based on their energy consumption profiles recorded by automatic sensors. Such sensor-driven applications are very often characterized by (a) very long TS and (b) very large TS datasets needing classification. However, current methods for time series classification (TSC) cannot cope with such data volumes at acceptable accuracy; they are either scalable but offer only inferior classification quality, or they achieve state-of-the-art classification quality but cannot scale to large data volumes. In this paper, we present WEASEL (Word ExtrAction for time SEries cLassification), a novel TSC method which is both scalable and accurate. Like other state-of-the-art TSC methods, WEASEL transforms time series into feature vectors using a sliding-window approach; these vectors are then analyzed by a machine learning classifier. The novelty of WEASEL lies in its specific method for deriving features, resulting in a much smaller yet much more discriminative feature set. On the popular UCR benchmark of 85 TS datasets, WEASEL is more accurate than the best current non-ensemble algorithms at orders-of-magnitude lower classification and training times, and it is almost as accurate as ensemble classifiers, whose computational complexity makes them inapplicable even for mid-size datasets. The outstanding robustness of WEASEL is also confirmed by experiments on two real smart grid datasets, where it achieves, out of the box, almost the same accuracy as highly tuned, domain-specific methods.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
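A heavily simplified sketch in the spirit of WEASEL's pipeline (sliding windows, discrete words, linear classifier). Real WEASEL derives words from Fourier coefficients (SFA) with feature selection; here windows are just mean-binned, for illustration only.

```python
# Toy bag-of-symbolic-words time series classifier (not the WEASEL method).
import numpy as np
from sklearn.linear_model import LogisticRegression

def bag_of_words(ts, window=16, n_bins=4):
    edges = np.quantile(ts, np.linspace(0, 1, n_bins + 1)[1:-1])
    counts = {}
    for start in range(len(ts) - window + 1):
        w = ts[start:start + window]
        segs = w.reshape(4, -1).mean(axis=1)   # 4 mean-binned segments
        word = tuple(np.digitize(segs, edges)) # discrete "word"
        counts[word] = counts.get(word, 0) + 1
    return counts

rng = np.random.default_rng(0)
freqs = rng.choice([5.0, 20.0], size=60)
X_raw = [np.sin(np.linspace(0, f, 128)) + 0.1 * rng.normal(size=128)
         for f in freqs]
y = (freqs == 20.0).astype(int)

bags = [bag_of_words(ts) for ts in X_raw]
vocab = sorted({w for b in bags for w in b})
X = np.array([[b.get(w, 0) for w in vocab] for b in bags])

clf = LogisticRegression(max_iter=1000).fit(X[:40], y[:40])
print("test accuracy:", clf.score(X[40:], y[40:]))
```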
Deep Neural Network Architectures for Modulation Classification
In this work, we investigate the value of employing deep learning for the task of wireless signal modulation recognition. Recently in [1], a framework has been introduced by generating a dataset using GNU radio that mimics the imperfections in a real wireless channel, and uses 10 different modulation types. Further, a convolutional neural network (CNN) architecture was developed and shown to deliver performance that exceeds that of expert-based approaches. Here, we follow the framework of [1] and find deep neural network architectures that deliver higher accuracy than the state of the art. We tested the architecture of [1] and found it to achieve an accuracy of approximately 75% of correctly recognizing the modulation type. We first tune the CNN architecture of [1] and find a design with four convolutional layers and two dense layers that gives an accuracy of approximately 83.8% at high SNR. We then develop architectures based on the recently introduced ideas of Residual Networks (ResNet [2]) and Densely Connected Networks (DenseNet [3]) to achieve high SNR accuracies of approximately 83.5% and 86.6%, respectively. Finally, we introduce a Convolutional Long Short-term Deep Neural Network (CLDNN [4]) to achieve an accuracy of approximately 88.5% at high SNR.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
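A compact PyTorch sketch of a CNN over raw I/Q samples, the kind of baseline the abstract starts from. The input shape (2 channels of I and Q by 128 samples) and 10 classes mirror the GNU-radio dataset described in [1]; the filter counts are illustrative, not the tuned architecture.

```python
# Baseline CNN for modulation classification on raw I/Q frames.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv1d(2, 64, kernel_size=7, padding=3), nn.ReLU(),
    nn.Conv1d(64, 64, kernel_size=7, padding=3), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
    nn.Linear(64, 128), nn.ReLU(),
    nn.Linear(128, 10),                # 10 modulation types
)

iq = torch.randn(32, 2, 128)           # a batch of raw I/Q frames
print(model(iq).shape)                 # torch.Size([32, 10])
```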
Stack Overflow Considered Harmful? The Impact of Copy&Paste on Android Application Security
Online programming discussion platforms such as Stack Overflow serve as a rich source of information for software developers. The available information includes vibrant discussions and oftentimes ready-to-use code snippets. Anecdotes report that software developers copy and paste code snippets from those information sources for convenience reasons. Such behavior results in a constant flow of community-provided code snippets into production software. To date, the impact of this behaviour on code security has been unknown. We answer this highly important question by quantifying the proliferation of security-related code snippets from Stack Overflow in Android applications available on Google Play. Access to the rich source of information available on Stack Overflow, including ready-to-use code snippets, provides huge benefits for software developers. However, when it comes to code security there are some caveats to bear in mind: due to the complex nature of code security, it is very difficult to provide ready-to-use and secure solutions for every problem. Hence, integrating a security-related code snippet from Stack Overflow into production software requires caution and expertise. Unsurprisingly, we observed insecure code snippets being copied into Android applications that millions of users install from Google Play every day. To quantitatively evaluate the extent of this observation, we scanned Stack Overflow for code snippets and evaluated their security score using a stochastic gradient descent classifier. To identify code reuse in Android applications, we applied state-of-the-art static analysis. Our results are alarming: 15.4% of the 1.3 million Android applications we analyzed contained security-related code snippets from Stack Overflow, and of these, 97.9% contain at least one insecure code snippet.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
United Nations Digital Blue Helmets as a Starting Point for Cyber Peacekeeping
Prior works, such as the Tallinn Manual on the international law applicable to cyber warfare, focus on the circumstances of cyber warfare. Many organizations are considering how to conduct cyber warfare, but few have discussed methods to reduce, or even prevent, cyber conflict. A recent series of publications started developing the framework of Cyber Peacekeeping (CPK) and its legal requirements. These works assessed the current state of organizations such as ITU IMPACT, NATO CCDCOE and the Shanghai Cooperation Organization, and found that they did not satisfy the requirements to effectively host CPK activities. An assessment of organizations currently working in areas related to CPK found that the United Nations (UN) has mandates and organizational structures that appear to somewhat overlap the needs of CPK. However, the UN's current approach to peacekeeping cannot be directly mapped to cyberspace. In this research we analyze the development of traditional peacekeeping in the United Nations and current initiatives in cyberspace. Specifically, we compare the proposed CPK framework with the recent UN initiative named the 'Digital Blue Helmets' as well as with other UN projects that help to predict and mitigate conflicts. Our goal is to find practical recommendations for the implementation of the CPK framework in the United Nations, and to examine how responsibilities defined in the CPK framework overlap with those of the 'Digital Blue Helmets' and the Global Pulse program.
1
0
0
0
0
0
Motives of derived equivalent K3 surfaces
We observe that derived equivalent K3 surfaces have isomorphic Chow motives.
0
0
1
0
0
0
Robust Dual View Deep Agent
Motivated by recent advances in machine learning using deep reinforcement learning, this paper proposes a modified architecture that produces more robust agents and speeds up the training process. Our architecture is based on the Asynchronous Advantage Actor-Critic (A3C) algorithm, where the total input dimensionality is halved by dividing the input into two independent streams. We use ViZDoom, a 3D world platform based on the classical first-person shooter video game Doom, as a test case. The experiments show that, in comparison to single-input agents, the proposed architecture matches their playing performance while exhibiting more robust behavior and reducing the number of training parameters by almost 30%.
0
0
0
1
0
0
Microplasma generation by slow microwave in an electromagnetically induced transparency-like metasurface
Microplasma generation using microwaves in an electromagnetically induced transparency (EIT)-like metasurface composed of two types of radiatively coupled cut-wire resonators with slightly different resonance frequencies is investigated. Microplasma is generated in either of the gaps of the cut-wire resonators as a result of strong enhancement of the local electric field associated with resonance and slow microwave effect. The threshold microwave power for plasma ignition is found to reach a minimum at the EIT-like transmission peak frequency, where the group index is maximized. A pump-probe measurement of the metasurface reveals that the transmission properties can be significantly varied by varying the properties of the generated microplasma near the EIT-like transmission peak frequency and the resonance frequency. The electron density of the microplasma is roughly estimated to be of order $1\times 10^{10}\,\mathrm{cm}^{-3}$ for a pump power of $15.8\,\mathrm{W}$ by comparing the measured transmission spectrum for the probe wave with the numerically calculated spectrum. In the calculation, we assumed that the plasma is uniformly generated in the resonator gap, that the electron temperature is $2\,\mathrm{eV}$, and that the elastic scattering cross section is $20 \times 10^{-16}\,\mathrm{cm}^2$.
0
1
0
0
0
0
K-theory of group Banach algebras and Banach property RD
We investigate Banach algebras of convolution operators on the $L^p$ spaces of a locally compact group, and their K-theory. We show that for a discrete group, the corresponding K-theory groups depend continuously on $p$ in an inductive sense. Via a Banach version of property RD, we show that for a large class of groups, the K-theory groups of the Banach algebras are independent of $p$.
0
0
1
0
0
0
A Biologically Plausible Supervised Learning Method for Spiking Neural Networks Using the Symmetric STDP Rule
Spiking neural networks (SNNs) possess energy-efficient potential due to event-based computation. However, supervised training of SNNs remains a challenge as spike activities are non-differentiable. Previous SNN training methods can basically be categorized into two classes: backpropagation-like training methods and plasticity-based learning methods. The former depend on energy-inefficient real-valued computation and non-local transmission, as also required in artificial neural networks (ANNs), while the latter are either considered biologically implausible or exhibit poor performance. Hence, biologically plausible (bio-plausible), high-performance supervised learning (SL) methods for SNNs remain deficient. In this paper, we propose a novel bio-plausible SNN model for SL based on the symmetric spike-timing dependent plasticity (sym-STDP) rule found in neuroscience. By combining the sym-STDP rule with bio-plausible synaptic scaling and the intrinsic plasticity of a dynamic threshold, our SNN model implemented SL well and achieved good performance on the benchmark recognition task (MNIST). To reveal the underlying mechanism of our SL model, we visualized both layer-based activities and synaptic weights using the t-distributed stochastic neighbor embedding (t-SNE) method after training and found that they were well clustered, demonstrating excellent classification ability. As the learning rules are bio-plausible and based purely on local spike events, our model could easily be applied to neuromorphic hardware for online training and may be helpful for understanding SL information processing at the synaptic level in biological neural systems.
0
0
0
0
1
0
Learning from various labeling strategies for suicide-related messages on social media: An experimental study
Suicide is an important but often misunderstood problem, one that researchers are now seeking to better understand through social media. Due in large part to the fuzzy nature of what constitutes suicidal risk, most supervised approaches for learning to automatically detect suicide-related activity in social media require a great deal of human labor to train. However, humans themselves have diverse or conflicting views on what constitutes suicidal thoughts, so obtaining reliable gold-standard labels is fundamentally challenging and, we hypothesize, depends largely on what is asked of the annotators and what slice of the data they label. We conducted multiple rounds of data labeling and collected annotations from crowdsourcing workers and domain experts. We aggregated the resulting labels in various ways to train a series of supervised models (see the aggregation sketch below). Our preliminary evaluations show that using unanimously agreed labels from multiple annotators helps achieve robust machine models.
1
0
0
0
0
0
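The unanimous-agreement aggregation mentioned above can be made concrete with a small helper; the data layout of (item, label) pairs is hypothetical.

```python
# Sketch of the label-aggregation idea: keep only items whose annotators
# unanimously agree, and train on those. Data layout is hypothetical.
from collections import defaultdict

def unanimous_labels(annotations):
    """annotations: iterable of (item_id, label) pairs from many annotators.
    Returns {item_id: label} restricted to unanimously labeled items."""
    by_item = defaultdict(set)
    for item_id, label in annotations:
        by_item[item_id].add(label)
    return {i: ls.pop() for i, ls in by_item.items() if len(ls) == 1}

gold = unanimous_labels([(1, 'risk'), (1, 'risk'), (2, 'risk'), (2, 'no_risk')])
print(gold)  # {1: 'risk'} -- item 2 is dropped due to disagreement
```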
Modeling Information Flow Through Deep Neural Networks
This paper proposes a principled information-theoretic analysis of classification for deep neural network structures, e.g. convolutional neural networks (CNNs). The output of convolutional filters is modeled as a random variable Y conditioned on the object class C and network filter bank F. The conditional entropy (CENT) H(Y|C,F) is shown in theory and experiments to be a highly compact and class-informative code that can be computed from the filter outputs throughout an existing CNN and used to obtain higher classification accuracy than the original CNN itself (see the entropy sketch below). Experiments demonstrate the effectiveness of CENT feature analysis in two separate CNN classification contexts. 1) In the classification of neurodegeneration due to Alzheimer's disease (AD) and natural aging from 3D magnetic resonance image (MRI) volumes, 3 CENT features result in an AUC of 94.6% for whole-brain AD classification, the highest accuracy reported on the public OASIS dataset used and 12% higher than the softmax output of the original CNN trained for the task. 2) In the context of visual object classification from 2D photographs, transfer learning based on a small set of CENT features identified throughout an existing CNN leads to AUC values comparable to the 1000-feature softmax output of the original network when classifying previously unseen object categories. The general information-theoretic analysis explains various recent CNN design successes, e.g. densely connected CNN architectures, and provides insights for future research directions in deep learning.
1
0
0
1
0
0
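A minimal sketch of the CENT idea for one filter: estimate the conditional entropy of the filter's activations Y given the class C with class-conditional histograms. The binning and the histogram estimator are assumptions for illustration; the paper's H(Y|C,F) additionally conditions on the filter bank F, which is fixed here by considering a single filter.

```python
# Histogram estimate of H(Y|C) for one filter's activations (illustrative).
import numpy as np

def conditional_entropy(activations, classes, bins=32):
    """activations: (N,) filter outputs; classes: (N,) integer labels."""
    edges = np.histogram_bin_edges(activations, bins=bins)
    h, total = 0.0, len(classes)
    for c in np.unique(classes):
        y_c = activations[classes == c]
        p_c = len(y_c) / total                      # class prior P(C=c)
        p_y, _ = np.histogram(y_c, bins=edges)
        p_y = p_y / p_y.sum()
        p_y = p_y[p_y > 0]
        h += p_c * (-(p_y * np.log2(p_y)).sum())    # P(c) * H(Y | C=c)
    return h

rng = np.random.default_rng(0)
y = np.concatenate([rng.normal(0, 1, 500), rng.normal(3, 1, 500)])
c = np.concatenate([np.zeros(500, int), np.ones(500, int)])
print(conditional_entropy(y, c))
```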
Scalable Twin Neural Networks for Classification of Unbalanced Data
Twin Support Vector Machines (TWSVMs) have emerged as an efficient alternative to Support Vector Machines (SVMs) for learning from imbalanced datasets. The TWSVM learns two non-parallel classifying hyperplanes by solving a pair of smaller-sized problems. However, it is unsuitable for large datasets, as it involves matrix operations. In this paper, we discuss a Twin Neural Network (Twin NN) architecture for learning from large unbalanced datasets. The Twin NN also learns an optimal feature map, allowing for better discrimination between classes. We also present an extension of this network architecture for multiclass datasets. Results presented in the paper demonstrate that the Twin NN generalizes well and scales well on large unbalanced datasets.
1
0
0
0
0
0
Learning Compact Recurrent Neural Networks with Block-Term Tensor Decomposition
Recurrent Neural Networks (RNNs) are powerful sequence modeling tools. However, when dealing with high-dimensional inputs, the training of RNNs becomes computationally expensive due to the large number of model parameters. This hinders RNNs from solving many important computer vision tasks, such as action recognition in videos and image captioning. To overcome this problem, we propose a compact and flexible structure, namely Block-Term tensor decomposition, which greatly reduces the parameters of RNNs and improves their training efficiency. Compared with alternative low-rank approximations such as tensor-train RNN (TT-RNN), our method, Block-Term RNN (BT-RNN), is not only more concise (when using the same rank), but also able to attain a better approximation to the original RNNs with far fewer parameters. On three challenging tasks, including action recognition in videos, image captioning and image generation, BT-RNN outperforms TT-RNN and the standard RNN in terms of both prediction accuracy and convergence rate. Specifically, BT-LSTM utilizes 17,388 times fewer parameters than the standard LSTM to achieve an accuracy improvement of over 15.6% in the action recognition task on the UCF11 dataset.
1
0
0
1
0
0
Deep Learning applied to Road Traffic Speed forecasting
In this paper, we propose deep learning architectures (FNN, CNN and LSTM) to forecast a regression model for time-dependent data. These algorithms are designed to handle Floating Car Data (FCD) historic speeds to predict road traffic data. For this, we aggregate the speeds into the network inputs in an innovative way. We compare the RMSE thus obtained with the results of a simpler physical model, and show that the latter achieves better RMSE accuracy. We also propose a new indicator, which evaluates an algorithm's improvement when compared to a benchmark prediction. We conclude by questioning the interest of using deep learning methods for this specific regression task.
1
0
0
1
0
0
Wait For It: Identifying "On-Hold" Self-Admitted Technical Debt
Self-admitted technical debt refers to situations where a software developer knows that their current implementation is not optimal and indicates this using a source code comment. In this work, we hypothesize that it is possible to develop automated techniques to understand a subset of these comments in more detail, and to propose tool support that can help developers manage self-admitted technical debt more effectively. Based on a qualitative study of 335 comments indicating self-admitted technical debt, we first identify one particular class of debt amenable to automated management: "on-hold" self-admitted technical debt, i.e., debt which contains a condition to indicate that a developer is waiting for a certain event or an updated functionality having been implemented elsewhere. We then design and evaluate an automated classifier which can automatically identify these "on-hold" instances with a precision of 0.81 as well as detect the specific conditions that developers are waiting for. Our work presents a first step towards automated tool support that is able to indicate when certain instances of self-admitted technical debt are ready to be addressed.
1
0
0
0
0
0
Real time observation of granular rock analogue material deformation and failure using nonlinear laser interferometry
A better understanding and anticipation of natural processes such as landsliding or seismic fault activity requires detailed theoretical and experimental analysis of rock mechanics and geomaterial dynamics. In recent decades, considerable progress has been made towards understanding deformation and fracture processes in laboratory experiments on granular rock materials, such as the well-known shear-banding experiment. One of the reasons for this progress is the continuous improvement in instrumental observation techniques. However, the lack of real-time methods does not allow the detection of indicators of the upcoming fracture process, and the phenomenon therefore cannot be anticipated. Here, we have performed uniaxial compression experiments to analyse the response of a granular rock material sample to different shocks. We use a novel interferometric laser sensor based on the nonlinear self-mixing interferometry technique to observe the deformations of the sample in real time and assess its usefulness as a diagnostic tool for the analysis of geomaterial dynamics. Due to the high spatial and temporal resolution of this approach, we observe both vibration processes in response to dynamic loading and the onset of failure. The latter is preceded by a continuous variation of the vibration period of the material. After several shocks, the material response is no longer reversible and we detect a progressive accumulation of irreversible deformation leading to the fracture process. We demonstrate that material failure is anticipated by the critical slowing down of the surface vibrational motion, which may therefore be envisioned as an early warning signal or predictor of the macroscopic failure of the sample. The nonlinear self-mixing interferometry technique is readily extensible to fault propagation measurements. As such, it opens a new window of observation for the study of geomaterial deformation and failure.
0
1
0
0
0
0
Virtual link and knot invariants from non-abelian Yang-Baxter 2-cocycle pairs
For a given $(X,S,\beta)$, where $S,\beta\colon X\times X\to X\times X$ are set-theoretical solutions of the Yang-Baxter equation with a compatibility condition, we define an invariant for virtual (or classical) knots/links using non-commutative 2-cocycle pairs $(f,g)$ that generalizes the one defined in [FG2]. We also define a group $U_{nc}^{fg}=U_{nc}^{fg}(X,S,\beta)$ and functions $\pi_f, \pi_g\colon X\times X\to U_{nc}^{fg}(X)$ governing all 2-cocycles in $X$. We exhibit examples of computations achieved using GAP.
0
0
1
0
0
0
An extensible cluster-graph taxonomy for open set sound scene analysis
We present a new extensible and divisible taxonomy for open set sound scene analysis. This new model allows complex scene analysis with tangible descriptors and perception labels. Its novel structure is a cluster graph such that each cluster (or subset) can stand alone for targeted analyses such as office sound event detection, whilst maintaining integrity over the whole graph (superset) of labels. The key design benefit is its extensibility, as new labels are needed during new data capture. Furthermore, datasets which use the same taxonomy are easily augmented, saving future data collection effort. We balance the details needed for complex scene analysis against avoiding 'the taxonomy of everything' with our framework, ensuring no duplication in the superset of labels, and demonstrate this with DCASE challenge classifications.
1
0
0
0
0
0
CREATE: Multimodal Dataset for Unsupervised Learning, Generative Modeling and Prediction of Sensory Data from a Mobile Robot in Indoor Environments
The CREATE database is composed of 14 hours of multimodal recordings from a mobile robotic platform based on the iRobot Create. The various sensors cover vision, audition, motors and proprioception. The dataset has been designed in the context of a mobile robot that can learn multimodal representations of its environment, thanks to its ability to navigate the environment. This ability can also be used to learn the dependencies and relationships between the different modalities of the robot (e.g. vision, audition), as they reflect both the external environment and the internal state of the robot. The provided multimodal dataset is expected to have multiple usages, such as multimodal unsupervised object learning, multimodal prediction and egomotion/causality detection.
1
0
0
0
0
0
Deep learning enhanced mobile-phone microscopy
Mobile-phones have facilitated the creation of field-portable, cost-effective imaging and sensing technologies that approach laboratory-grade instrument performance. However, the optical imaging interfaces of mobile-phones are not designed for microscopy and produce spatial and spectral distortions in imaging microscopic specimens. Here, we report on the use of deep learning to correct such distortions introduced by mobile-phone-based microscopes, facilitating the production of high-resolution, denoised and colour-corrected images, matching the performance of benchtop microscopes with high-end objective lenses, also extending their limited depth-of-field. After training a convolutional neural network, we successfully imaged various samples, including blood smears, histopathology tissue sections, and parasites, where the recorded images were highly compressed to ease storage and transmission for telemedicine applications. This method is applicable to other low-cost, aberrated imaging systems, and could offer alternatives for costly and bulky microscopes, while also providing a framework for standardization of optical images for clinical and biomedical applications.
1
1
0
0
0
0
On convergence of the sample correlation matrices in high-dimensional data
In this paper, we consider an estimation problem concerning the matrix of correlation coefficients in the context of high-dimensional data settings. In particular, we revisit some results in Li and Rosalsky [Li, D. and Rosalsky, A. (2006). Some strong limit theorems for the largest entries of sample correlation matrices, The Annals of Applied Probability, 16, 1, 423-447]. Four of the main theorems of Li and Rosalsky (2006) are established in their full generality and we substantially simplify some proofs of the quoted paper. Further, we generalize a theorem which is useful in deriving the existence of the pth moment as well as in studying the convergence rates in the law of large numbers.
0
0
1
1
0
0
Minimal Controllability of Conjunctive Boolean Networks is NP-Complete
Given a conjunctive Boolean network (CBN) with $n$ state-variables, we consider the problem of finding a minimal set of state-variables to directly affect with an input so that the resulting conjunctive Boolean control network (CBCN) is controllable. We give a necessary and sufficient condition for controllability of a CBCN; an $O(n^2)$-time algorithm for testing controllability; and prove that nonetheless the minimal controllability problem for CBNs is NP-hard.
1
0
1
0
0
0
Consensus report on 25 years of searches for damped Ly$α$ galaxies in emission: Confirming their metallicity-luminosity relation at $z \gtrsim 2$
Starting from a summary of detection statistics from our recent X-shooter campaign, we review the major surveys, both space- and ground-based, for emission counterparts of high-redshift damped Ly$\alpha$ absorbers (DLAs) carried out since the first detection 25 years ago. We show that the detection rates of all surveys are precisely reproduced by a simple model in which the metallicity and luminosity of the galaxy associated with the DLA follow a relation of the form ${\rm M_{UV}} = -5 \times \left(\,[{\rm M/H}] + 0.3\, \right) - 20.8$, and the DLA cross-section follows a relation of the form $\sigma_{DLA} \propto L^{0.8}$. Specifically, our spectroscopic campaign consists of 11 DLAs preselected on the basis of their SiII $\lambda1526$ equivalent width to have a metallicity higher than [Si/H] > -1. The targets were observed with the X-shooter spectrograph at the Very Large Telescope to search for emission lines around the quasars. We observe a high detection rate of 64% (7/11), significantly higher than the typical $\sim$10% for random, HI-selected DLA samples. We use the aforementioned model to simulate the results of our survey together with a range of previous surveys: spectral stacking, direct imaging (using the `double DLA' technique), long-slit spectroscopy, and integral field spectroscopy. Based on our model results, we are able to reconcile all of these results. Some tension is observed between model and data when looking at predictions of Ly$\alpha$ emission for individual targets. However, the object-to-object variations are most likely a result of the significant scatter in the underlying scaling relations as well as uncertainties in the amount of dust, which affects the emission.
0
1
0
0
0
0
D-optimal designs for complex Ornstein-Uhlenbeck processes
Complex Ornstein-Uhlenbeck (OU) processes have various applications in statistical modelling. They play a role, e.g., in the description of the motion of a charged test particle in a constant magnetic field or in the study of rotating waves in time-dependent reaction-diffusion systems; Kolmogorov also used such a process to model the so-called Chandler wobble, a small deviation in the Earth's axis of rotation. In these applications, parameter estimation and model fitting are based on discrete observations of the underlying stochastic process; however, the accuracy of the results strongly depends on the observation points. This paper studies the properties of D-optimal designs for estimating the parameters of a complex OU process with a trend. We show that, in contrast with the case of the classical real OU process, a D-optimal design exists not only for the trend parameter, but also for joint estimation of the covariance parameters; moreover, these optimal designs are equidistant.
0
0
1
1
0
0
On Stein's Identity and Near-Optimal Estimation in High-dimensional Index Models
We consider estimating the parametric components of semi-parametric multiple-index models in a high-dimensional and non-Gaussian setting. Such models form a rich class of non-linear models with applications to signal processing, machine learning and statistics. Our estimators leverage score-function-based first- and second-order Stein identities and do not require the covariates to satisfy the Gaussian or elliptical symmetry assumptions common in the literature. Moreover, to handle score functions and responses that are heavy-tailed, our estimators are constructed via careful thresholding of their empirical counterparts. We show that our estimator achieves a near-optimal statistical rate of convergence in several settings. We supplement our theoretical results with simulation experiments that confirm the theory.
0
0
1
1
0
0
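As background to the abstract above, the classical first-order Stein identity in the Gaussian special case can be stated as follows; the paper's estimators rely on score-function generalizations that drop the Gaussian assumption.

```latex
% Classical Stein's lemma (Gaussian case): for X ~ N(mu, sigma^2) and
% differentiable g with E|g'(X)| finite,
\[
  \mathbb{E}\bigl[g(X)\,(X-\mu)\bigr] = \sigma^{2}\,\mathbb{E}\bigl[g'(X)\bigr].
\]
% In a single-index model y = f(beta^T x) + eps with x ~ N(0, I_d) and
% independent zero-mean noise, the multivariate version yields
\[
  \mathbb{E}[\,y\,x\,] = c\,\beta, \qquad c = \mathbb{E}\bigl[f'(\beta^{\top}x)\bigr],
\]
% so the first moment E[y x] recovers beta up to scale; the paper replaces
% (X - mu)/sigma^2 with general score functions and thresholds the
% empirical moments to handle heavy tails.
```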
Self-adjointness and spectral properties of Dirac operators with magnetic links
We define Dirac operators on $\mathbb{S}^3$ (and $\mathbb{R}^3$) with magnetic fields supported on smooth, oriented links and prove self-adjointness of certain (natural) extensions. We then analyze their spectral properties and show, among other things, that these operators have discrete spectrum. Certain examples, such as circles in $\mathbb{S}^3$, are investigated in detail and we compute the dimension of the zero-energy eigenspace.
0
0
1
0
0
0
Latent tree models
Latent tree models are graphical models defined on trees, in which only a subset of variables is observed. They were first discussed by Judea Pearl as tree-decomposable distributions, generalising star-decomposable distributions such as the latent class model. Latent tree models, or their submodels, are widely used in phylogenetic analysis, network tomography, computer vision, causal modeling, and data clustering. They also contain other well-known classes of models such as hidden Markov models, the Brownian motion tree model, the Ising model on a tree, and many popular models used in phylogenetics. This article offers a concise introduction to the theory of latent tree models. We emphasise the role of tree metrics in the structural description of this model class, in designing learning algorithms, and in understanding the fundamental limits of what can be learned and when.
0
0
1
1
0
0
Program Synthesis from Visual Specification
Program synthesis is the process of automatically translating a specification into computer code. Traditional synthesis settings require a formal, precise specification. Motivated by computer education applications where a student learns to code simple turtle-style drawing programs, we study a novel synthesis setting where only a noisy user-intention drawing is specified. This allows students to sketch their intended output, optionally together with their own incomplete program, to automatically produce a completed program. We formulate this synthesis problem as search in the space of programs, with the score of a state being the Hausdorff distance between the program output and the user drawing. We compare several search algorithms on a corpus consisting of real user drawings and the corresponding programs, and demonstrate that our algorithms can synthesize programs optimally satisfying the specification.
1
0
0
0
0
0
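The scoring function described in the abstract above, the Hausdorff distance between the program output and the user drawing, can be computed directly with SciPy when both are represented as point sets; the point sets below are toy data.

```python
# Symmetric Hausdorff distance between two planar point sets (toy data).
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two (N, 2) point arrays."""
    return max(directed_hausdorff(a, b)[0], directed_hausdorff(b, a)[0])

program_output = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0]])
user_drawing = np.array([[0.1, 0.0], [1.0, 0.1], [0.9, 1.1]])
print(hausdorff(program_output, user_drawing))  # lower score = better match
```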
Towards a Knowledge Graph based Speech Interface
Applications which use human speech as an input require a speech interface with high recognition accuracy. The words or phrases in the recognised text are annotated with a machine-understandable meaning and linked to knowledge graphs for further processing by the target application. These semantic annotations of recognised words can be represented as subject-predicate-object triples which collectively form a graph often referred to as a knowledge graph. This type of knowledge representation makes it possible to use speech interfaces with any spoken-input application, since the information is represented in a logical, semantic form, and retrieval and storage can be performed using standard web query languages. In this work, we develop a methodology for linking speech input to knowledge graphs and study the impact of recognition errors on the overall process. We show that for a corpus with a lower WER, the annotation and linking of entities to the DBpedia knowledge graph improve considerably. DBpedia Spotlight, a tool to interlink text documents with linked open data, is used to link the speech recognition output to the DBpedia knowledge graph. Such a knowledge-based speech recognition interface is useful for applications such as question answering or spoken dialog systems.
1
0
0
0
0
0
An asymptotic equipartition property for measures on model spaces
Let $G$ be a sofic group, and let $\Sigma = (\sigma_n)_{n\geq 1}$ be a sofic approximation to it. For a probability-preserving $G$-system, a variant of the sofic entropy relative to $\Sigma$ has recently been defined in terms of sequences of measures on its model spaces that `converge' to the system in a certain sense. Here we prove that, in order to study this notion, one may restrict attention to those sequences that have the asymptotic equipartition property. This may be seen as a relative of the Shannon--McMillan theorem in the sofic setting. We also give some first applications of this result, including a new formula for the sofic entropy of a $(G\times H)$-system obtained by co-induction from a $G$-system, where $H$ is any other infinite sofic group.
1
0
1
0
0
0
Dynamic Objects Segmentation for Visual Localization in Urban Environments
Visual localization and mapping is a crucial capability to address many challenges in mobile robotics. It constitutes a robust, accurate and cost-effective approach for local and global pose estimation within prior maps. Yet, in highly dynamic environments, like crowded city streets, problems arise as major parts of the image can be covered by dynamic objects. Consequently, visual odometry pipelines often diverge and the localization systems malfunction as detected features are not consistent with the precomputed 3D model. In this work, we present an approach to automatically detect dynamic object instances to improve the robustness of vision-based localization and mapping in crowded environments. By training a convolutional neural network model with a combination of synthetic and real-world data, dynamic object instance masks are learned in a semi-supervised way. The real-world data can be collected with a standard camera and requires minimal further post-processing. Our experiments show that a wide range of dynamic objects can be reliably detected using the presented method. Promising performance is demonstrated on our own and also publicly available datasets, which also shows the generalization capabilities of this approach.
1
0
0
0
0
0
Gradient Masking Causes CLEVER to Overestimate Adversarial Perturbation Size
A key problem in research on adversarial examples is that vulnerability to adversarial examples is usually measured by running attack algorithms. Because the attack algorithms are not optimal, the attack algorithms are prone to overestimating the size of perturbation needed to fool the target model. In other words, the attack-based methodology provides an upper-bound on the size of a perturbation that will fool the model, but security guarantees require a lower bound. CLEVER is a proposed scoring method to estimate a lower bound. Unfortunately, an estimate of a bound is not a bound. In this report, we show that gradient masking, a common problem that causes attack methodologies to provide only a very loose upper bound, causes CLEVER to overestimate the size of perturbation needed to fool the model. In other words, CLEVER does not resolve the key problem with the attack-based methodology, because it fails to provide a lower bound.
0
0
0
1
0
0
Gaussian process regression for forest attribute estimation from airborne laser scanning data
While the analysis of airborne laser scanning (ALS) data often provides reliable estimates for certain forest stand attributes -- such as total volume or basal area -- there is still room for improvement, especially in estimating species-specific attributes. Moreover, while information on the estimate uncertainty would be useful in various economic and environmental analyses on forests, a computationally feasible framework for uncertainty quantifying in ALS is still missing. In this article, the species-specific stand attribute estimation and uncertainty quantification (UQ) is approached using Gaussian process regression (GPR), which is a nonlinear and nonparametric machine learning method. Multiple species-specific stand attributes are estimated simultaneously: tree height, stem diameter, stem number, basal area, and stem volume. The cross-validation results show that GPR yields on average an improvement of 4.6\% in estimate RMSE over a state-of-the-art k-nearest neighbors (kNN) implementation, negligible bias and well performing UQ (credible intervals), while being computationally fast. The performance advantage over kNN and the feasibility of credible intervals persists even when smaller training sets are used.
0
0
0
1
0
0
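A minimal sketch of GPR-based estimation with uncertainty quantification in the spirit of the abstract above, using scikit-learn in place of the authors' implementation; the synthetic features and target stand in for ALS metrics and a stand attribute.

```python
# GPR with predictive uncertainty (illustrative; synthetic ALS-like data).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)
X = rng.uniform(0, 30, size=(200, 3))                        # e.g. ALS height percentiles
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 2, 200)    # e.g. stem volume

gpr = GaussianProcessRegressor(kernel=RBF(length_scale=10.0) + WhiteKernel(),
                               normalize_y=True).fit(X, y)
mean, std = gpr.predict(X[:5], return_std=True)
print(mean)
print(1.96 * std)   # half-widths of approximate 95% intervals
```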
Quantum capacitance of double-layer graphene
We study the ground-state properties of a double-layer graphene system with the Coulomb interlayer electron-electron interaction modeled within the random phase approximation. We first obtain an expression for the quantum capacitance of a two-layer system. In addition, we calculate the many-body exchange-correlation energy and quantum capacitance of the hybrid double-layer graphene system at zero temperature. We show an enhancement of the thermodynamic density of states of the majority-density layer owing to an increasing interlayer interaction between the two layers near the Dirac point. The quantum capacitance near the neutrality point behaves like the square root of the total density, $\alpha \sqrt{n}$, where the coefficient $\alpha$ decreases with increasing charge density imbalance between the two layers. Furthermore, we show that the quantum capacitance changes linearly with the gate voltage. Our results can be verified by current experiments.
0
1
0
0
0
0
DeepTriangle: A Deep Learning Approach to Loss Reserving
We propose a novel approach to loss reserving based on deep neural networks. The approach allows for joint modeling of paid losses and claims outstanding, and the incorporation of heterogeneous inputs. We validate the models on loss reserving data across lines of business, and show that they attain or exceed the predictive accuracy of existing stochastic methods. The models require minimal feature engineering and expert input, and can be automated to produce forecasts at a high frequency.
0
0
0
1
0
1
Distributed Time Synchronization for Networks with Random Delays and Measurement Noise
In this paper a new distributed asynchronous algorithm is proposed for time synchronization in networks with random communication delays, measurement noise and communication dropouts. Three different types of the drift correction algorithm are introduced, based on different kinds of local time increments. Under nonrestrictive conditions concerning network properties, it is proved that all the algorithm types provide convergence in the mean square sense and with probability one (w.p.1) of the corrected drifts of all the nodes to the same value (consensus). An estimate of the convergence rate of these algorithms is derived. For offset correction, a new algorithm is proposed containing a compensation parameter coping with the influence of random delays and special terms taking care of the influence of both linearly increasing time and drift correction. It is proved that the corrected offsets of all the nodes converge in the mean square sense and w.p.1. An efficient offset correction algorithm based on consensus on local compensation parameters is also proposed. It is shown that the overall time synchronization algorithm can also be implemented as a flooding algorithm with one reference node. It is proved that it is possible to achieve bounded error between local corrected clocks in the mean square sense and w.p.1. Simulation results provide an additional practical insight into the algorithm properties and show its advantage over the existing methods.
1
0
0
0
0
0
Using High-Rising Cities to Visualize Performance in Real-Time
For developers concerned with a performance drop or improvement in their software, a profiler allows a developer to quickly search for and identify bottlenecks and leaks that consume much execution time. Non-real-time profilers analyze the history of already-executed stack traces, while a real-time profiler outputs the results concurrently with the execution of the software, so users can see the results instantaneously. However, a real-time profiler risks providing overly large and complex outputs, which are difficult for developers to analyze quickly. In this paper, we visualize the performance data from a real-time profiler. We visualize program execution as a three-dimensional (3D) city, representing the structure of the program as artifacts in a city (i.e., classes and packages expressed as buildings and districts) and program executions as the fluctuating height of artifacts. Through two case studies and using a prototype of our proposed visualization, we demonstrate how our visualization can easily identify performance issues such as a memory leak and compare performance changes between versions of a program. A demonstration of the interactive features of our prototype is available at this https URL.
1
0
0
0
0
0
Controller Synthesis for Discrete-time Hybrid Polynomial Systems via Occupation Measures
We present a novel controller synthesis approach for discrete-time hybrid polynomial systems, a class of systems that can model a wide variety of interactions between robots and their environment. The approach is rooted in recently developed techniques that use occupation measures to formulate the controller synthesis problem as an infinite-dimensional linear program. The relaxation of the linear program as a finite-dimensional semidefinite program can be solved to generate a control law. The approach has several advantages including that the formulation is convex, that the formulation and the extracted controllers are simple, and that the computational complexity is polynomial in the state and control input dimensions. We illustrate our approach on some robotics examples.
1
0
0
0
0
0
Fast Stochastic Variance Reduced Gradient Method with Momentum Acceleration for Machine Learning
Recently, research on accelerated stochastic gradient descent methods (e.g., SVRG) has made exciting progress (e.g., linear convergence for strongly convex problems). However, the best-known methods (e.g., Katyusha) require at least two auxiliary variables and two momentum parameters. In this paper, we propose a fast stochastic variance-reduced gradient (FSVRG) method, in which we design a novel update rule with Nesterov's momentum and incorporate the technique of growing epoch size. FSVRG has only one auxiliary variable and one momentum weight, and thus it is much simpler and has much lower per-iteration complexity (see the sketch below). We prove that FSVRG achieves linear convergence for strongly convex problems and the optimal $\mathcal{O}(1/T^2)$ convergence rate for non-strongly convex problems, where $T$ is the number of outer iterations. We also extend FSVRG to directly solve problems with non-smooth component functions, such as SVM. Finally, we empirically study the performance of FSVRG in solving various machine learning problems such as logistic regression, ridge regression, Lasso and SVM. Our results show that FSVRG outperforms state-of-the-art stochastic methods, including Katyusha.
1
0
1
1
0
0
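The ingredients named above, a variance-reduced gradient plus a single momentum variable, can be sketched generically as follows. This is a plain SVRG loop with a heavy-ball momentum term for illustration only, not the authors' exact FSVRG update rule.

```python
# Generic SVRG-with-momentum sketch (illustrative; NOT the exact FSVRG rule).
import numpy as np

def svrg_momentum(grad, x0, n, lr=0.01, beta=0.9, epochs=10, m=50):
    """grad(x, i): gradient of the i-th component function at x."""
    x, v = x0.copy(), np.zeros_like(x0)
    for _ in range(epochs):
        x_snap = x.copy()
        full_grad = np.mean([grad(x_snap, i) for i in range(n)], axis=0)
        for _ in range(m):
            i = np.random.randint(n)
            g = grad(x, i) - grad(x_snap, i) + full_grad   # variance-reduced gradient
            v = beta * v - lr * g                           # single momentum variable
            x = x + v
    return x

# Toy least-squares problem: f_i(x) = 0.5 * (a_i @ x - b_i)^2
A, b = np.random.randn(100, 5), np.random.randn(100)
g = lambda x, i: (A[i] @ x - b[i]) * A[i]
print(svrg_momentum(g, np.zeros(5), n=100))
```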
A vertex-weighted-Least-Squares gradient reconstruction
Gradient reconstruction is a key process for the spatial accuracy and robustness of the finite volume method, especially in industrial aerodynamic applications in which grid quality significantly affects reconstruction methods. A novel gradient reconstruction method for cell-centered finite volume schemes is introduced. This method is composed of two successive steps. First, a vertex-based weighted-least-squares procedure is applied to calculate vertex gradients, and then the cell-centered gradients are calculated by an arithmetic averaging procedure (see the sketch below). By using these two procedures, extended stencils are incorporated into the calculations, and the accuracy of gradient reconstruction is improved by the weighting procedure. In the given test cases, the proposed method shows improvement in both accuracy and convergence. Furthermore, the method can be extended to the calculation of viscous fluxes.
0
1
0
0
0
0
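The vertex step above amounts to a small weighted least-squares problem at each vertex: fit a gradient $g$ minimizing $\sum_i w_i\,\bigl(g\cdot(x_i-x_v)-(\phi_i-\phi_v)\bigr)^2$ over the surrounding cell centres. A NumPy sketch, with inverse-distance weights assumed for illustration:

```python
# Weighted least-squares gradient at a vertex from neighboring cell centres.
import numpy as np

def vertex_gradient(x_v, phi_v, x_cells, phi_cells):
    d = x_cells - x_v                       # (N, dim) offsets to cell centres
    w = 1.0 / np.linalg.norm(d, axis=1)     # inverse-distance weights (assumed)
    A = d * w[:, None]                      # weighted design matrix
    rhs = (phi_cells - phi_v) * w           # weighted value differences
    g, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return g

x_v = np.array([0.0, 0.0])
x_cells = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.2], [0.3, -1.0]])
phi = lambda p: 2.0 * p[..., 0] + 3.0 * p[..., 1]   # linear field, exact test
print(vertex_gradient(x_v, phi(x_v), x_cells, phi(x_cells)))  # ~ [2, 3]
```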
The reverse mathematics of theorems of Jordan and Lebesgue
The Jordan decomposition theorem states that every function $f \colon [0,1] \to \mathbb{R}$ of bounded variation can be written as the difference of two non-decreasing functions. Combining this fact with a result of Lebesgue, every function of bounded variation is differentiable almost everywhere in the sense of Lebesgue measure. We analyze the strength of these theorems in the setting of reverse mathematics. Over $\mathsf{RCA}_0$, a stronger version of Jordan's result where all functions are continuous is equivalent to $\mathsf{ACA}_0$, while the version stated is equivalent to $\mathsf{WKL}_0$. The result that every function on $[0,1]$ of bounded variation is almost everywhere differentiable is equivalent to $\mathsf{WWKL}_0$. To state this equivalence in a meaningful way, we develop a theory of Martin-Löf randomness over $\mathsf{RCA}_0$.
0
0
1
0
0
0
Semiflat Orbifold Projections
We compute the semiflat positive cone $K_0^{+SF}(A_\theta^\sigma)$ of the $K_0$-group of the irrational rotation orbifold $A_\theta^\sigma$ under the noncommutative Fourier transform $\sigma$ and show that it is determined by classes of positive trace and the vanishing of two topological invariants. The semiflat orbifold projections are 3-dimensional and come in three basic topological genera: $(2,0,0)$, $(1,1,2)$, $(0,0,2)$. (A projection is called semiflat when it has the form $h + \sigma(h)$ where $h$ is a flip-invariant projection such that $h\sigma(h)=0$.) Among other things, we also show that every number in $(0,1) \cap (2\mathbb Z + 2\mathbb Z\theta)$ is the trace of a semiflat projection in $A_\theta$. The noncommutative Fourier transform is the order 4 automorphism $\sigma: V \to U \to V^{-1}$ (and the flip is $\sigma^2$: $U \to U^{-1},\ V \to V^{-1}$), where $U,V$ are the canonical unitary generators of the rotation algebra $A_\theta$ satisfying $VU = e^{2\pi i\theta} UV$.
0
0
1
0
0
0
Magnetization process of the S = 1/2 two-leg organic spin-ladder compound BIP-BNO
We have measured the magnetization of the organic compound BIP-BNO (3,5'-bis(N-tert-butylaminoxyl)-3',5-dibromobiphenyl) up to 76 T, where the magnetization is saturated. The S = 1/2 antiferromagnetic Heisenberg two-leg spin-ladder model accounts for the obtained experimental magnetization curve, as clarified using the quantum Monte Carlo method. The exchange constants on the rung and the side rail of the ladder are estimated to be J(rung)/kB = 65.7 K and J(leg)/kB = 14.1 K, respectively, placing the compound deep in the strong-coupling region: J(rung)/J(leg) > 1.
0
1
0
0
0
0
Deep Over-sampling Framework for Classifying Imbalanced Data
Class imbalance is a challenging issue in practical classification problems for deep learning models as well as traditional models. Traditionally successful countermeasures such as synthetic over-sampling have had limited success with complex, structured data handled by deep learning models. In this paper, we propose Deep Over-sampling (DOS), a framework for extending the synthetic over-sampling method to exploit the deep feature space acquired by a convolutional neural network (CNN). Its key feature is an explicit, supervised representation learning, for which the training data presents each raw input sample with a synthetic embedding target in the deep feature space, which is sampled from the linear subspace of in-class neighbors. We implement an iterative process of training the CNN and updating the targets, which induces smaller in-class variance among the embeddings, to increase the discriminative power of the deep representation. We present an empirical study using public benchmarks, which shows that the DOS framework not only counteracts class imbalance better than the existing method, but also improves the performance of the CNN in the standard, balanced settings.
1
0
0
1
0
0
Failsafe Mechanism Design of Multicopters Based on Supervisory Control Theory
In order to handle undesirable failures of a multicopter which occur in either the pre-flight process or the in-flight process, a failsafe mechanism design method based on supervisory control theory is proposed for the semi-autonomous control mode. Failsafe mechanism is a control logic that guides what subsequent actions the multicopter should take, by taking account of real-time information from guidance, attitude control, diagnosis, and other low-level subsystems. In order to design a failsafe mechanism for multicopters, safety issues of multicopters are introduced. Then, user requirements including functional requirements and safety requirements are textually described, where function requirements determine a general multicopter plant, and safety requirements cover the failsafe measures dealing with the presented safety issues. In order to model the user requirements by discrete-event systems, several multicopter modes and events are defined. On this basis, the multicopter plant and control specifications are modeled by automata. Then, a supervisor is synthesized by monolithic supervisory control theory. In addition, we present three examples to demonstrate the potential blocking phenomenon due to inappropriate design of control specifications. Also, we discuss the meaning of correctness and the properties of the obtained supervisor. This makes the failsafe mechanism convincingly correct and effective. Finally, based on the obtained supervisory controller generated by TCT software, an implementation method suitable for multicopters is presented, in which the supervisory controller is transformed into decision-making codes.
1
0
0
0
0
0
Inverse Design of Single- and Multi-Rotor Horizontal Axis Wind Turbine Blades using Computational Fluid Dynamics
A method for inverse design of horizontal axis wind turbines (HAWTs) is presented in this paper. The direct solver for aerodynamic analysis solves the Reynolds Averaged Navier Stokes (RANS) equations, where the effect of the turbine rotor is modeled as momentum sources using the actuator disk model (ADM); this approach is referred to as RANS/ADM. The inverse problem is posed as follows: for a given selection of airfoils, the objective is to find the blade geometry (described as blade twist and chord distributions) which realizes the desired turbine aerodynamic performance at the design point; the desired performance is prescribed as angle of attack ($\alpha$) and axial induction factor ($a$) distributions along the blade. An iterative approach is used. An initial estimate of blade geometry is used with the direct solver (RANS/ADM) to obtain $\alpha$ and $a$. The differences between the calculated and desired values of $\alpha$ and $a$ are computed and a new estimate for the blade geometry (chord and twist) is obtained via nonlinear least squares regression using the Trust-Region-Reflective (TRF) method. This procedure is continued until the difference between the calculated and the desired values is within acceptable tolerance. The method is demonstrated for conventional, single-rotor HAWTs and then extended to multi-rotor, specifically dual-rotor wind turbines. The TRF method is also compared with the multi-dimensional Newton iteration method and found to provide better convergence when constraints are imposed in blade design, although faster convergence is obtained with the Newton method for unconstrained optimization.
0
1
0
0
0
0
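The geometry-update step in the abstract above can be sketched with SciPy's Trust-Region-Reflective solver; the residual function below is a toy stand-in for the RANS/ADM evaluation of $\alpha$ and $a$, and the bounds and station counts are assumptions.

```python
# TRF-based update of blade geometry (illustrative; toy residual model).
import numpy as np
from scipy.optimize import least_squares

def residuals(geometry, alpha_target, a_target):
    # Placeholder for: run RANS/ADM with this twist/chord distribution and
    # return [alpha(geometry) - alpha_target, a(geometry) - a_target].
    twist, chord = geometry[:3], geometry[3:]
    return np.concatenate([0.1 * twist - alpha_target,
                           0.2 * chord - a_target])

x0 = np.ones(6)                          # initial twist (3 stations) + chord (3)
sol = least_squares(residuals, x0, method='trf',
                    bounds=(0.0, 10.0),  # bound constraints keep geometry physical
                    args=(np.array([0.3, 0.4, 0.5]), np.array([0.3, 0.3, 0.3])))
print(sol.x)
```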
The Double Galaxy Cluster Abell 2465 III. X-ray and Weak-lensing Observations
We report Chandra X-ray observations and optical weak-lensing measurements from Subaru/Suprime-Cam images of the double galaxy cluster Abell 2465 (z=0.245). The X-ray brightness data are fit to a beta-model to obtain the radial gas density profiles of the northeast (NE) and southwest (SW) sub-components, which are seen to differ in structure. We determine core radii, central temperatures, the gas masses within $r_{500c}$, and the total masses for the broader NE and sharper SW components assuming hydrostatic equilibrium. The central entropy of the NE clump is about two times higher than the SW. Along with its structural properties, this suggests that it has undergone merging on its own. The weak-lensing analysis gives virial masses for each substructure, which compare well with earlier dynamical results. The derived outer mass contours of the SW sub-component from weak lensing are more irregular and extended than those of the NE. Although there is a weak enhancement and small offsets between X-ray gas and mass centers from weak lensing, the lack of large amounts of gas between the two sub-clusters indicates that Abell 2465 is in a pre-merger state. A dynamical model that is consistent with the observed cluster data, based on the FLASH program and the radial infall model, is constructed, where the subclusters currently separated by ~1.2Mpc are approaching each other at ~2000km/s and will meet in ~0.4Gyr.
0
1
0
0
0
0
Superdensity Operators for Spacetime Quantum Mechanics
We introduce superdensity operators as a tool for analyzing quantum information in spacetime. Superdensity operators encode spacetime correlation functions in an operator framework, and support a natural generalization of Hilbert space techniques and Dirac's transformation theory as traditionally applied to standard density operators. Superdensity operators can be measured experimentally, but accessing their full content requires novel procedures. We demonstrate these statements on several examples. The superdensity formalism suggests useful definitions of spacetime entropies and spacetime quantum channels. For example, we show that the von Neumann entropy of a superdensity operator is related to a quantum generalization of the Kolmogorov-Sinai entropy, and compute this for a many-body system. We also suggest experimental protocols for measuring spacetime entropies.
0
1
0
0
0
0
Monte Carlo study of magnetic nanoparticles adsorbed on halloysite $Al_2Si_2O_5(OH)_4$ nanotubes
We study the properties of magnetic nanoparticles adsorbed on the halloysite surface. To this end, a distinct magnetic Hamiltonian with a random distribution of spins on a cylindrical surface was solved using a nonequilibrium Monte Carlo method. The parameters for our simulations (anisotropy constant, nanoparticle size distribution, saturated magnetization and geometrical parameters of the halloysite template) were taken from recent experiments. We calculate the hysteresis loops and the temperature dependence of the zero-field-cooled (ZFC) susceptibility, whose maximum determines the blocking temperature. It is shown that the dipole-dipole interaction between nanoparticles moderately increases the blocking temperature and weakly increases the coercive force. The obtained hysteresis loops (e.g., the value of the coercive force) for Ni nanoparticles are in reasonable agreement with the experimental data. We also discuss the sensitivity of the hysteresis loops and ZFC susceptibilities to changes in the anisotropy and dipole-dipole interaction, as well as the 3d-shell occupation of the metallic nanoparticles; in particular, we predict a larger coercive force for Fe than for Ni nanoparticles.
0
1
0
0
0
0
Inference in high-dimensional linear regression models
We introduce an asymptotically unbiased estimator for the full high-dimensional parameter vector in linear regression models where the number of variables exceeds the number of available observations. The estimator is accompanied by a closed-form expression for the covariance matrix of the estimates that is free of tuning parameters. This enables the construction of confidence intervals that are valid uniformly over the parameter vector. Estimates are obtained by using a scaled Moore-Penrose pseudoinverse as an approximate inverse of the singular empirical covariance matrix of the regressors. The approximation induces a bias, which is then corrected for using the lasso. Regularization of the pseudoinverse is shown to yield narrower confidence intervals under a suitable choice of the regularization parameter. The methods are illustrated in Monte Carlo experiments and in an empirical example where gross domestic product is explained by a large number of macroeconomic and financial indicators.
0
0
1
1
0
0
Counterfactual Learning for Machine Translation: Degeneracies and Solutions
Counterfactual learning is a natural scenario to improve web-based machine translation services by offline learning from feedback logged during user interactions. In order to avoid the risk of showing inferior translations to users, in such scenarios mostly exploration-free deterministic logging policies are in place. We analyze possible degeneracies of inverse and reweighted propensity scoring estimators, in stochastic and deterministic settings, and relate them to recently proposed techniques for counterfactual learning under deterministic logging.
1
0
0
1
0
0
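For reference, the inverse propensity scoring (IPS) estimator analyzed in the abstract above reweights logged feedback by the ratio of the target policy's probability to the logging propensity; under deterministic logging all propensities equal one, which is the source of the degeneracies the paper studies. A toy sketch:

```python
# Inverse propensity scoring (IPS) estimate of a target policy's reward.
import numpy as np

def ips_estimate(rewards, target_probs, logging_probs):
    """Counterfactual estimate of the target policy's expected reward."""
    return np.mean(rewards * target_probs / logging_probs)

rewards = np.array([0.8, 0.2, 0.5])        # logged user feedback delta_i
logging_probs = np.array([1.0, 1.0, 1.0])  # deterministic logging: all mass on one output
target_probs = np.array([0.9, 0.1, 0.6])   # pi(y_i | x_i) under the new model
print(ips_estimate(rewards, target_probs, logging_probs))
```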
Conversion of Mersenne Twister to double-precision floating-point numbers
The 32-bit Mersenne Twister generator MT19937 is a widely used random number generator. To generate numbers with more than 32 bits in bit length, and particularly when converting into 53-bit double-precision floating-point numbers in $[0,1)$ in the IEEE 754 format, the typical implementation concatenates two successive 32-bit integers and divides them by a power of $2$. In this case, the 32-bit MT19937 is optimized in terms of its equidistribution properties (the so-called dimension of equidistribution with $v$-bit accuracy) under the assumption that one will mainly be using 32-bit output values, and hence the concatenation sometimes degrades the dimension of equidistribution compared with the simple use of 32-bit outputs. In this paper, we analyze such phenomena by investigating hidden $\mathbb{F}_2$-linear relations among the bits of high-dimensional outputs. Accordingly, we report that MT19937 with a specific lag set fails several statistical tests, such as the overlapping collision test, matrix rank test, and Hamming independence test.
1
0
0
1
0
0
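The concatenating conversion discussed above is, in the reference MT19937 implementation, the genrand_res53 routine: take the top 27 bits of one 32-bit output and the top 26 bits of the next, then scale by $2^{-53}$ to obtain a double in $[0,1)$.

```python
# Standard 53-bit double conversion (genrand_res53 from the MT19937
# reference code), applied to two successive 32-bit outputs.
def to_double53(u32_a, u32_b):
    a = u32_a >> 5            # keep the 27 high bits
    b = u32_b >> 6            # keep the 26 high bits
    return (a * 67108864.0 + b) * (1.0 / 9007199254740992.0)  # (a*2^26 + b) / 2^53

print(to_double53(0xFFFFFFFF, 0xFFFFFFFF))  # just below 1.0
```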
BaHaMAS: A Bash Handler to Monitor and Administrate Simulations
Numerical QCD is often extremely resource-demanding, and it is not rare to run hundreds of simulations at the same time. Each of these can last for days or even months, and it typically requires a job-script file as well as an input file with the physical parameters for the application to be run. Moreover, some monitoring operations (i.e. copying, moving, deleting or modifying files, resuming crashed jobs, etc.) are often required to guarantee that the final statistics are correctly accumulated. Handling simulations manually is probably the most error-prone approach, and it is extremely uncomfortable and inefficient! BaHaMAS was developed and successfully used in recent years as a tool to automatically monitor and administrate simulations.
0
1
0
0
0
0
Computational Study of Halide Perovskite-Derived A$_2$BX$_6$ Inorganic Compounds: Chemical Trends in Electronic Structure and Structural Stability
The electronic structure and energetic stability of A$_2$BX$_6$ halide compounds with the cubic and tetragonal variants of the perovskite-derived K$_2$PtCl$_6$ prototype structure are investigated computationally within the frameworks of density-functional theory (DFT) and hybrid (HSE06) functionals. The HSE06 calculations are undertaken for seven known A$_2$BX$_6$ compounds with A = K, Rb and Cs, and B = Sn, Pd, Pt, Te, and X = I. Trends in band gaps and energetic stability are identified and explored further by employing DFT calculations over a larger range of chemistries, characterized by A = K, Rb, Cs, B = Si, Ge, Sn, Pb, Ni, Pd, Pt, Se and Te and X = Cl, Br, I. For the systems investigated in this work, the band gap increases from iodide to bromide to chloride. Further, variation of the A-site cation influences the band gap as well as the preferred degree of tetragonal distortion. Smaller A-site cations such as K and Rb favor tetragonal structural distortions, resulting in a slightly larger band gap. For variations of the B site within the (Ni, Pd, Pt) group and the (Se, Te) group, the band gap increases with increasing cation size. However, no chemical trend in the band gap with respect to cation size was found for the (Si, Sn, Ge, Pb) group. The findings in this work provide guidelines for the design of halide A$_2$BX$_6$ compounds for potential photovoltaic applications.
0
1
0
0
0
0