Dataset schema: title (string, length 7-239), abstract (string, length 7-2.76k), and six binary topic labels (int64, 0 or 1): cs, phy, math, stat, quantitative biology, quantitative finance.
A Decision Tree Based Approach Towards Adaptive Profiling of Distributed Applications
The adoption of the distributed paradigm has allowed applications to increase their scalability, robustness and fault tolerance, but it has also complicated their structure, leading to an exponential growth of the applications' configuration space and increased difficulty in predicting their performance. In this work, we describe a novel, automated profiling methodology that makes no assumptions on application structure. Our approach utilizes oblique Decision Trees in order to recursively partition an application's configuration space into disjoint regions, choose a set of representative samples from each subregion according to a defined policy, and return a model for the entire space as a composition of linear models over each subregion. An extensive evaluation over real-life applications and synthetic performance functions showcases that our scheme outperforms other state-of-the-art profiling methodologies. It particularly excels at reflecting abnormalities and discontinuities of the performance function, allowing the user to influence the sampling policy based on the modeling accuracy and the space coverage.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
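A minimal sketch of the piecewise modeling idea this abstract describes, under two loud assumptions: scikit-learn's axis-aligned regression trees stand in for the paper's oblique Decision Trees, and the synthetic performance function, sample counts, and `predict` helper are invented for illustration, not taken from the paper.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(500, 2))               # sampled configurations
y = np.where(X[:, 0] > 0.5, 3.0 * X[:, 1], -X[:, 1])   # discontinuous performance surface

# Recursively partition the configuration space into disjoint leaf regions.
tree = DecisionTreeRegressor(max_leaf_nodes=8, random_state=0).fit(X, y)
leaf_of = tree.apply(X)                                 # leaf id for every sample

# Fit one linear model per subregion; the full model is their composition.
models = {leaf: LinearRegression().fit(X[leaf_of == leaf], y[leaf_of == leaf])
          for leaf in np.unique(leaf_of)}

def predict(conf):
    leaf = tree.apply(conf.reshape(1, -1))[0]           # route to its subregion
    return models[leaf].predict(conf.reshape(1, -1))[0]

print(predict(np.array([0.9, 0.5])))                    # close to 3 * 0.5 = 1.5
```

The per-leaf linear fit is what lets the composed model track the discontinuity at x0 = 0.5 that a single global linear model would smear out.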
A Tractable Approach to Dynamic Network Dimensioning Based on the Best-cell Configuration
Spatial distributions of other cell interference (OCIF) and interference to own-cell power ratio (IOPR) with reference to the distance between a mobile and its serving base station (BS) are modeled for the down-link reception of cellular systems based on the best-cell configuration instead of the nearest-cell configuration. This enables a more realistic evaluation of two competing objectives in network dimensioning: coverage and rate capacity. More outcomes useful for dynamic network dimensioning are also derived, including maximum BS transmission power per cell size and the cell density required for an adequate coverage of a given traffic density.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Parametric uncertainty in complex environmental models: a cheap emulation approach for models with high-dimensional output
In order to understand the underlying processes governing environmental and physical phenomena, and to predict future outcomes, a complex computer model is frequently required to simulate these dynamics. However, there is inevitably uncertainty related to the exact parametric form or the values of such parameters to be used when developing these simulators, with \emph{ranges} of plausible values prevalent in the literature. Systematic errors introduced by failing to account for these uncertainties can have a large effect on resulting estimates of unknown quantities of interest. Due to the complexity of these types of models, it is often infeasible to perform the large number of training runs usually required for full statistical emulators of the environmental processes. We therefore present a method for accounting for uncertainties in complex environmental simulators without the need for very large numbers of training runs and illustrate the method through an application to the Met Office's atmospheric transport model NAME. We conclude that there are two principal parameters that are linked with variability in NAME outputs, namely the free tropospheric turbulence parameter and particle release height. Our results suggest the former should be significantly larger than is currently implemented as a default in NAME, whilst changes in the latter most likely stem from inconsistencies between the model-specified ground height at the observation locations and the true height at these locations. Estimated discrepancies from independent data are consistent with the discrepancy between modelled and true ground height.
labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Prediction and Control with Temporal Segment Models
We introduce a method for learning the dynamics of complex nonlinear systems based on deep generative models over temporal segments of states and actions. Unlike dynamics models that operate over individual discrete timesteps, we learn the distribution over future state trajectories conditioned on past state, past action, and planned future action trajectories, as well as a latent prior over action trajectories. Our approach is based on convolutional autoregressive models and variational autoencoders. It makes stable and accurate predictions over long horizons for complex, stochastic systems, effectively expressing uncertainty and modeling the effects of collisions, sensory noise, and action delays. The learned dynamics model and action prior can be used for end-to-end, fully differentiable trajectory optimization and model-based policy optimization, which we use to evaluate the performance and sample-efficiency of our method.
labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Artificial Intelligence and Statistics
Artificial intelligence (AI) is intrinsically data-driven. It calls for the application of statistical concepts through human-machine collaboration during generation of data, development of algorithms, and evaluation of results. This paper discusses how such human-machine collaboration can be approached through the statistical concepts of population, question of interest, representativeness of training data, and scrutiny of results (PQRS). The PQRS workflow provides a conceptual framework for integrating statistical ideas with human input into AI products and research. These ideas include experimental design principles of randomization and local control as well as the principle of stability to gain reproducibility and interpretability of algorithms and data results. We discuss the use of these principles in the contexts of self-driving cars, automated medical diagnoses, and examples from the authors' collaborative research.
labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Shortcut Sequence Tagging
Deep stacked RNNs are usually hard to train. Adding shortcut connections across different layers is a common way to ease the training of stacked networks. However, extra shortcuts make the recurrent step more complicated. To simplify the stacked architecture, we propose a framework called the shortcut block, which is a marriage of the gating mechanism and shortcuts, while discarding the self-connected part of the LSTM cell. We present extensive empirical experiments showing that this design makes training easy and improves generalization. We propose various shortcut block topologies and compositions to explore their effectiveness. Based on this architecture, we obtain a 6% relative improvement over the state-of-the-art on the CCGbank supertagging dataset. We also obtain comparable results on the POS tagging task.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Distortions of the Cosmic Microwave Background through cooling lines during the epoch of Reionization
By using N-body hydrodynamical cosmological simulations in which the chemistry of major metals and molecules is consistently solved for, we study the interaction of metallic fine-structure lines with the CMB. Our analysis shows that the collisionally induced emission in the OI 145 $\mu$m and CII 158 $\mu$m lines during reionization introduces a distortion of the CMB spectrum at low frequencies ($\nu < 300$ GHz) with amplitudes up to $\Delta I_{\nu}/B_{\nu}(T_{\rm CMB})\sim 10^{-8}$-$10^{-7}$, i.e., at the $\sim 0.1$ percent level of FIRAS upper limits. Shorter wavelength fine-structure transitions (OI 63 $\mu$m, FeII 26 $\mu$m, and SiII 35 $\mu$m) typically sample the reionization epoch at higher observing frequencies ($\nu > 400$ GHz). This corresponds to the Wien tail of the CMB spectrum and the distortion level induced by those lines may be as high as $\Delta I_{\nu}/B_{\nu}(T_{\rm CMB})\sim 10^{-4}$. The angular anisotropy produced by these lines should be more relevant at higher frequencies: while practically negligible at $\nu=145$ GHz, signatures from CII 158 $\mu$m and OI 145 $\mu$m should amount to 1%-5% of the anisotropy power measured at $l \sim 5000$ and $\nu=220$ GHz by the ACT and SPT collaborations (after assuming $\Delta \nu_{\rm obs}/\nu_{\rm obs}\simeq 0.005$ for the line observations). Our simulations show that anisotropy maps from different lines (e.g., OI 145 $\mu$m and CII 158 $\mu$m) at the same redshift show a very high degree ($>0.8$) of spatial correlation, allowing for the use of observations at different frequencies to unveil the same snapshot of the reionization epoch. Finally, our simulations demonstrate that line-emission anisotropies extracted in narrow frequency/redshift shells are practically uncorrelated in frequency space, thus enabling standard methods for removal of foregrounds that vary smoothly in frequency, just as in HI 21 cm studies.
labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Improving brain computer interface performance by data augmentation with conditional Deep Convolutional Generative Adversarial Networks
One of the big restrictions in the brain-computer interface field is the very limited number of training samples; it is difficult to build a reliable and usable system with such limited data. Inspired by generative adversarial networks, we propose a conditional Deep Convolutional Generative Adversarial Network (cDCGAN) method to automatically generate artificial EEG signals for data augmentation, in order to improve the performance of convolutional neural networks in the brain-computer interface field and overcome the small-training-set problem. We evaluate the proposed cDCGAN method on a BCI competition motor imagery dataset. The results show that the artificial EEG data generated from Gaussian noise capture the features of the raw EEG data and achieve no less than the classification accuracy of the raw EEG data on the test set. Moreover, augmenting the same model with the generated artificial data effectively improves classification accuracy when training data are limited.
labels: cs=0, phy=0, math=0, stat=1, quantitative biology=1, quantitative finance=0
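For the cDCGAN abstract above, a minimal sketch of how label conditioning enters a generator. The assumptions are loud: a fully connected stack stands in for the paper's convolutional architecture, and the signal length (256), latent size (100), and class count (4) are invented placeholders, not the paper's settings.

```python
import torch
import torch.nn as nn

N_CLASSES, LATENT, SIG_LEN = 4, 100, 256   # invented sizes for illustration

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.label_emb = nn.Embedding(N_CLASSES, N_CLASSES)   # condition vector
        self.net = nn.Sequential(
            nn.Linear(LATENT + N_CLASSES, 128), nn.ReLU(),
            nn.Linear(128, SIG_LEN), nn.Tanh(),               # fake EEG segment
        )

    def forward(self, z, labels):
        # Conditioning: concatenate the class embedding with the noise vector,
        # so each motor-imagery class gets its own region of the input space.
        return self.net(torch.cat([z, self.label_emb(labels)], dim=1))

g = Generator()
fake = g(torch.randn(8, LATENT), torch.randint(0, N_CLASSES, (8,)))
print(fake.shape)   # torch.Size([8, 256]): a batch of synthetic signals
```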
From Language to Programs: Bridging Reinforcement Learning and Maximum Marginal Likelihood
Our goal is to learn a semantic parser that maps natural language utterances into executable programs when only indirect supervision is available: examples are labeled with the correct execution result, but not the program itself. Consequently, we must search the space of programs for those that output the correct result, while not being misled by spurious programs: incorrect programs that coincidentally output the correct result. We connect two common learning paradigms, reinforcement learning (RL) and maximum marginal likelihood (MML), and then present a new learning algorithm that combines the strengths of both. The new algorithm guards against spurious programs by combining the systematic search traditionally employed in MML with the randomized exploration of RL, and by updating parameters such that probability is spread more evenly across consistent programs. We apply our learning algorithm to a new neural semantic parser and show significant gains over existing state-of-the-art results on a recent context-dependent semantic parsing task.
labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
GeneGAN: Learning Object Transfiguration and Attribute Subspace from Unpaired Data
Object Transfiguration replaces an object in an image with another object from a second image. For example, it can perform tasks like "putting exactly those eyeglasses from image A on the nose of the person in image B". The use of exemplar images allows more precise specification of desired modifications and improves the diversity of conditional image generation. However, previous methods that rely on feature space operations require paired data and/or appearance models for training or for disentangling objects from the background. In this work, we propose a model that can learn object transfiguration from two unpaired sets of images: one set containing images that "have" that kind of object, and the other set being the opposite, with the mild constraint that the objects be located approximately at the same place. For example, the training data can be one set of reference face images that have eyeglasses and another set of images that do not, both spatially aligned by face landmarks. Despite the weak 0/1 labels, our model can learn an "eyeglasses" subspace that contains multiple representatives of different types of glasses. Consequently, we can perform fine-grained control of generated images, such as swapping the glasses in two images by swapping the projected components in the "eyeglasses" subspace, to create novel images of people wearing eyeglasses. Overall, our deterministic generative model learns disentangled attribute subspaces from weakly labeled data by adversarial training. Experiments on the CelebA and Multi-PIE datasets validate the effectiveness of the proposed model on real-world data, generating images with specified eyeglasses, smiles, hair styles, lighting conditions, etc. The code is available online.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Flow-free Video Object Segmentation
Segmenting a foreground object from a video is a challenging task because of large object deformations, occlusions, and background clutter. In this paper, we propose a frame-by-frame but computationally efficient approach to video object segmentation that clusters visually similar generic object segments throughout the video. Our algorithm segments the various object instances appearing in the video and then performs clustering to group visually similar segments into one cluster. Since the object that needs to be segmented appears in most parts of the video, we can retrieve the foreground segments from the cluster with the maximum number of segments, thus filtering out noisy segments that do not represent any object. We then apply a track-and-fill approach to localize the objects in the frames where the object segmentation framework fails to segment any object. Our algorithm performs comparably to recent automatic methods for video object segmentation when benchmarked on the DAVIS dataset, while being computationally much faster.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Finding Modes by Probabilistic Hypergraphs Shifting
In this paper, we develop a novel paradigm, namely hypergraph shift, to find robust graph modes by a probabilistic voting strategy; the resulting modes are semantically sound in addition to satisfying the self-cohesiveness requirement of graph modes. Unlike existing techniques that seek graph modes by shifting vertices based on pair-wise edges (i.e., edges with $2$ ends), our paradigm is based on shifting high-order edges (hyperedges) to deliver graph modes. Specifically, we convert the problem of seeking graph modes into the problem of seeking maximizers of a novel objective function, with the aim of generating good graph modes by shifting edges in hypergraphs. As a result, the generated graph modes, based on dense subhypergraphs, may more accurately capture object semantics in addition to meeting the self-cohesiveness requirement. We also formally prove that our technique always converges. Extensive empirical studies on synthetic and real-world data sets are conducted for clustering and graph matching; they demonstrate that our techniques significantly outperform the existing ones.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Bounding the size of an almost-equidistant set in Euclidean space
A set of points in d-dimensional Euclidean space is almost equidistant if among any three points of the set, some two are at distance 1. We show that an almost-equidistant set in $\mathbb{R}^d$ has cardinality $O(d^{4/3})$.
labels: cs=1, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Equipping weak equivalences with algebraic structure
We investigate the extent to which the weak equivalences in a model category can be equipped with algebraic structure. We prove, for instance, that there exists a monad T such that a morphism of topological spaces admits a T-algebra structure if and only if it is a weak homotopy equivalence. Likewise for quasi-isomorphisms and many other examples. The basic trick is to consider injectivity in arrow categories. Using algebraic injectivity and cone injectivity, we obtain general results about the extent to which the weak equivalences in a combinatorial model category can be equipped with algebraic structure.
labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
An Experimental Study of the Treewidth of Real-World Graph Data (Extended Version)
Treewidth is a parameter that measures how tree-like a relational instance is, and whether it can reasonably be decomposed into a tree. Many computation tasks are known to be tractable on databases of small treewidth, but computing the treewidth of a given instance is intractable. This article is the first large-scale experimental study of treewidth and tree decompositions of real-world database instances (25 datasets from 8 different domains, with sizes ranging from a few thousand to a few million vertices). The goal is to determine which data, if any, can benefit from the wealth of algorithms for databases of small treewidth. For each dataset, we obtain upper and lower bound estimates of its treewidth and study the properties of its tree decompositions. We show in particular that, even when treewidth is high, using partial tree decompositions can result in data structures that can assist algorithms.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
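The treewidth study above measures upper and lower bounds on real instances; a small sketch of the upper-bound side using networkx's min-degree heuristic (a real library function), with the Petersen graph as a toy stand-in for a database instance.

```python
import networkx as nx
from networkx.algorithms.approximation import treewidth_min_degree

G = nx.petersen_graph()                         # toy stand-in for a database graph
width, decomposition = treewidth_min_degree(G)  # upper bound + decomposition tree
print("treewidth upper bound:", width)
print("number of bags:", decomposition.number_of_nodes())
```

Each node of the returned decomposition is a "bag" (a frozenset of original vertices); the reported width is the largest bag size minus one, an upper bound on the true treewidth.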
A yield-cost tradeoff governs Escherichia coli's decision between fermentation and respiration in carbon-limited growth
Many microbial systems are known to actively reshape their proteomes in response to changes in growth conditions induced e.g. by nutritional stress or antibiotics. Part of the re-allocation accounts for the fact that, as the growth rate is limited by targeting specific metabolic activities, cells simply respond by fine-tuning their proteome to invest more resources into the limiting activity (i.e., by synthesizing more proteins devoted to it). However, this is often accompanied by an overall re-organization of metabolism, aimed at improving the growth yield under limitation by re-wiring resources through different pathways. While both effects impact proteome composition, the latter underlies a more complex systemic response to stress. By focusing on E. coli's `acetate switch', we use mathematical modeling and a re-analysis of empirical data to show that the transition from a predominantly fermentative to a predominantly respirative metabolism in carbon-limited growth results from the trade-off between maximizing the growth yield and minimizing its costs in terms of the required proteome share. In particular, E. coli's metabolic phenotypes appear to be Pareto-optimal for these objective functions over a broad range of dilutions.
labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Towards a Unified Taxonomy of Biclustering Methods
As unsupervised machine learning and data mining techniques, biclustering and its multimodal extensions are becoming popular tools for analysing object-attribute data in different domains. Unlike conventional clustering techniques, biclustering searches for homogeneous groups of objects while keeping their common description, e.g., in the binary setting, their shared attributes. In bioinformatics, biclustering is used to find genes that are active in a subset of situations and are thus candidates for biomarkers. However, the authors of the biclustering techniques that are popular in gene expression analysis may overlook existing methods. For instance, the BiMax algorithm is aimed at finding biclusters that have been well known for decades as formal concepts. Moreover, even when bioinformaticians classify biclustering methods according to reasonable domain-driven criteria, the classification taxonomies may differ from survey to survey and may be incomplete. So, in this paper we propose to use concept lattices as a tool for taxonomy building (in the biclustering domain) and attribute exploration as a means for cross-domain taxonomy completion.
labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Global center stable manifold for the defocusing energy critical wave equation with potential
In this paper we consider the defocusing energy critical wave equation with a trapping potential in dimension $3$. We prove that the set of initial data for which solutions scatter to an unstable excited state $(\phi, 0)$ forms a finite co-dimensional path connected $C^1$ manifold in the energy space. This manifold is a global and unique center-stable manifold associated with $(\phi,0)$. It is constructed in a first step locally around any solution scattering to $\phi$, which might be very far away from $\phi$ in the $\dot{H}^1\times L^2(\mathbb{R}^3)$ norm. In a second crucial step, a no-return property is proved for any solution which starts near, but not on, the local manifolds. This ensures that the local manifolds form a global one. Scattering to an unstable steady state is therefore a non-generic behavior, in a strong topological sense in the energy space. This extends our previous result [18] to the nonradial case. The new ingredients here are (i) the application of the reversed Strichartz estimate from [3] to construct a local center stable manifold near any solution that scatters to $(\phi, 0)$; this is needed since the endpoint of the standard Strichartz estimates fails nonradially; and (ii) the nonradial channel of energy estimate introduced by Duyckaerts-Kenig-Merle [14], which is used to show that solutions that start near, but not on, the local manifolds associated with $\phi$ emit some amount of energy into the far field in excess of the energy of the steady state $\phi$.
labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Real-time convolutional networks for sonar image classification in low-power embedded systems
Deep Neural Networks have impressive classification performance, but this comes at the expense of significant computational resources at inference time. Autonomous Underwater Vehicles use low-power embedded systems for sonar image perception and cannot execute large neural networks in real-time. We propose the aggressive use of max-pooling, and we demonstrate it with a Fire-based module and a new Tiny module that includes max-pooling in each module. By stacking them we build networks that achieve the same accuracy as bigger ones, while reducing the number of parameters and considerably increasing computational performance. Our networks can classify a 96x96 sonar image with 98.8-99.7% accuracy in only 41 to 61 milliseconds on a Raspberry Pi 2, which corresponds to speedups of 28.6x-19.7x.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Mathematics in Caging of Robotics
Caging an object using robots such as a multifingered hand is a crucial problem in the robotics field. However, while there have been many rigorous studies on methods for caging an object with particular robots, the question of what constitutes caging for general geometric objects and robots has not been well described mathematically. In this article, we investigate the caging problem more mathematically and describe it in terms of recursion of simple Euclidean moves. Using this description, we show that caging has a degree of difficulty which is closely related to a combinatorial problem and to wire puzzles. This implies that, from a practical viewpoint, the difficulty plays an important role in capturing an object by caging.
labels: cs=1, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Time Series Prediction for Graphs in Kernel and Dissimilarity Spaces
Graph models are relevant in many fields, such as distributed computing, intelligent tutoring systems or social network analysis. In many cases, such models need to take changes in the graph structure into account, i.e. a varying number of nodes or edges. Predicting such changes within graphs can be expected to yield important insight with respect to the underlying dynamics, e.g. with respect to user behaviour. However, predictive techniques in the past have almost exclusively focused on single edges or nodes. In this contribution, we attempt to predict the future state of a graph as a whole. We propose to phrase time series prediction as a regression problem and apply dissimilarity- or kernel-based regression techniques, such as 1-nearest neighbor, kernel regression and Gaussian process regression, which can be applied to graphs via graph kernels. The output of the regression is a point embedded in a pseudo-Euclidean space, which can be analyzed using subsequent dissimilarity- or kernel-based processing methods. We discuss strategies to speed up Gaussian process regression from cubic to linear time and evaluate our approach on two well-established theoretical models of graph evolution as well as two real data sets from the domain of intelligent tutoring systems. We find that simple regression methods, such as kernel regression, are sufficient to capture the dynamics in the theoretical models, but that Gaussian process regression significantly improves the prediction error for real-world data.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
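A sketch of the kernel-based regression step from the graph time-series abstract above, assuming a precomputed kernel matrix between graph snapshots. The RBF kernel over random vectors is only a positive semi-definite stand-in for a genuine graph kernel, and the 5-dimensional targets mimic "next graph" points in the embedding space; neither comes from the paper.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(1)
Z = rng.normal(size=(20, 4))       # stand-in feature vectors, one per graph snapshot
K_train = rbf_kernel(Z)            # PSD stand-in for a graph kernel matrix
y = rng.normal(size=(20, 5))       # targets: the next graph, embedded as a point

# Kernel ridge regression works directly from the precomputed kernel,
# so any graph kernel can be plugged in without explicit feature vectors.
model = KernelRidge(kernel="precomputed", alpha=1e-2).fit(K_train, y)
K_test = rbf_kernel(Z[:3], Z)      # kernel between 3 test graphs and training graphs
print(model.predict(K_test).shape) # (3, 5): points in the (pseudo-)Euclidean space
```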
Geometric comparison of phylogenetic trees with different leaf sets
The metric space of phylogenetic trees defined by Billera, Holmes, and Vogtmann, which we refer to as BHV space, provides a natural geometric setting for describing collections of trees on the same set of taxa. However, it is sometimes necessary to analyze collections of trees on non-identical taxa sets (i.e., with different numbers of leaves), and in this context it is not evident how to apply BHV space. Davidson et al. recently approached this problem by describing a combinatorial algorithm extending tree topologies to regions in higher dimensional tree spaces, so that one can quickly compute which topologies contain a given tree as partial data. In this paper, we refine and adapt their algorithm to work for metric trees to give a full characterization of the subspace of extensions of a subtree. We describe how to apply our algorithm to define and search a space of possible supertrees and, for a collection of tree fragments with different leaf sets, to measure their compatibility.
labels: cs=0, phy=0, math=0, stat=0, quantitative biology=1, quantitative finance=0
Automatic Detection of Cyberbullying in Social Media Text
While social media offer great communication opportunities, they also increase the vulnerability of young people to threatening situations online. Recent studies report that cyberbullying constitutes a growing problem among youngsters. Successful prevention depends on the adequate detection of potentially harmful messages and the information overload on the Web requires intelligent systems to identify potential risks automatically. The focus of this paper is on automatic cyberbullying detection in social media text by modelling posts written by bullies, victims, and bystanders of online bullying. We describe the collection and fine-grained annotation of a training corpus for English and Dutch and perform a series of binary classification experiments to determine the feasibility of automatic cyberbullying detection. We make use of linear support vector machines exploiting a rich feature set and investigate which information sources contribute the most for this particular task. Experiments on a holdout test set reveal promising results for the detection of cyberbullying-related posts. After optimisation of the hyperparameters, the classifier yields an F1-score of 64% and 61% for English and Dutch respectively, and considerably outperforms baseline systems based on keywords and word unigrams.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
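The cyberbullying paper above trains linear support vector machines on lexical features; a minimal sketch of that classifier family follows, with invented toy posts and labels in place of the paper's annotated English/Dutch corpus, and TF-IDF word n-grams as one plausible feature set (the paper's full feature set is richer).

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Invented toy data: 0 = harmless, 1 = bullying-related.
posts = ["you are my best friend", "great game yesterday",
         "nobody likes you, just leave", "you are so stupid and ugly"]
labels = [0, 0, 1, 1]

# Word uni- and bigram TF-IDF features feeding a linear SVM.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
clf.fit(posts, labels)
print(clf.predict(["everyone hates you"]))
```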
Holographic Entanglement Entropy in Cyclic Cosmology
We discuss a cyclic cosmology in which the visible universe, or introverse, is all that is accessible to an observer while the extroverse represents the total spacetime originating from the time when the dark energy began to dominate. It is argued that entanglement entropy of the introverse is the more appropriate quantity to render infinitely cyclic, rather than the entropy of the total universe. Since vanishing entanglement entropy implies disconnected spacetimes, at the turnaround when the introverse entropy is zero the disconnected extroverse can be jettisoned with impunity.
labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Adversarial Training for Disease Prediction from Electronic Health Records with Missing Data
Electronic health records (EHRs) have contributed to the computerization of patient records and can thus be used not only for efficient and systematic medical services, but also for research on biomedical data science. However, there are many missing values in EHRs when provided in matrix form, which is an important issue in many biomedical EHR applications. In this paper, we propose a two-stage framework that includes missing data imputation and disease prediction to address the missing data problem in EHRs. We compared the disease prediction performance of generative adversarial networks (GANs) and conventional learning algorithms in combination with missing data prediction methods. As a result, we obtained a level of accuracy of 0.9777, sensitivity of 0.9521, specificity of 0.9925, area under the receiver operating characteristic curve (AUC-ROC) of 0.9889, and F-score of 0.9688 with a stacked autoencoder as the missing data prediction method and an auxiliary classifier GAN (AC-GAN) as the disease prediction method. The comparison results show that a combination of a stacked autoencoder and an AC-GAN significantly outperforms other existing approaches. Our results suggest that the proposed framework is more robust for disease prediction from EHRs with missing data.
labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Hydrodynamic stability in the presence of a stochastic forcing: a case study in convection
We investigate the stability of a statistically stationary conductive state for Rayleigh-Bénard convection between stress-free plates that arises due to a bulk stochastic internal heating. This setup may be seen as a generalization to a stochastic setting of the seminal 1916 study of Lord Rayleigh. Our results indicate that stochastic forcing at small magnitude has a stabilizing effect, while strong stochastic forcing has a destabilizing effect. The methodology put forth in this article, which combines rigorous analysis with careful computation, also provides an approach to hydrodynamic stability for a variety of systems subject to a large scale stochastic forcing.
labels: cs=0, phy=1, math=1, stat=0, quantitative biology=0, quantitative finance=0
Asset Price Bubbles: An Option-based Indicator
We construct a statistical indicator for the detection of short-term asset price bubbles based on the information content of bid and ask market quotes for plain vanilla put and call options. Our construction makes use of the martingale theory of asset price bubbles and the fact that such scenarios where the price for an asset exceeds its fundamental value can in principle be detected by analysis of the asymptotic behavior of the implied volatility surface. For extrapolating this implied volatility, we choose the SABR model, mainly because of its decent fit to real option market quotes for a broad range of maturities and its ease of calibration. As main theoretical result, we show that under lognormal SABR dynamics, we can compute a simple yet powerful closed-form martingale defect indicator by solving an ill-posed inverse calibration problem. In order to cope with the ill-posedness and to quantify the uncertainty which is inherent to such an indicator, we adopt a Bayesian statistical parameter estimation perspective. We probe the resulting posterior densities with a combination of optimization and adaptive Markov chain Monte Carlo methods, thus providing a full-blown uncertainty estimation of all the underlying parameters and the martingale defect indicator. Finally, we provide real-market tests of the proposed option-based indicator with focus on tech stocks due to increasing concerns about a tech bubble 2.0.
labels: cs=0, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=1
Interleaving and Gromov-Hausdorff distance
One of the central notions to emerge from the study of persistent homology is that of interleaving distance. It has found recent applications in symplectic and contact geometry, sheaf theory, computational geometry, and phylogenetics. Here we present a general study of this topic. We define interleaving of functors with common codomain as solutions to an extension problem. In order to define interleaving distance in this setting we are led to categorical generalizations of Hausdorff distance, Gromov-Hausdorff distance, and the space of metric spaces. We obtain comparisons with previous notions of interleaving via the study of future equivalences. As an application we recover a definition of shift equivalences of discrete dynamical systems.
labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Attainable Knowledge
The article investigates an evidence-based semantics for epistemic logics in which pieces of evidence are interpreted as equivalence relations on the epistemic worlds. It is shown that the properties of knowledge obtained from potentially infinitely many pieces of evidence are described by modal logic S5. At the same time, the properties of knowledge obtained from only a finite number of pieces of evidence are described by modal logic S4. The main technical result is a sound and complete bi-modal logical system that describes properties of these two modalities and their interplay.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Linear complementarity problems on extended second order cones
In this paper, we study the linear complementarity problems on extended second order cones. We convert a linear complementarity problem on an extended second order cone into a mixed complementarity problem on the non-negative orthant. We state necessary and sufficient conditions for a point to be a solution of the converted problem. We also present solution strategies for this problem, such as the Newton method and Levenberg-Marquardt algorithm. Finally, we present some numerical examples.
labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
On the Bernstein-Von Mises Theorem for High Dimensional Nonlinear Bayesian Inverse Problems
We prove a Bernstein-von Mises theorem for a general class of high dimensional nonlinear Bayesian inverse problems in the vanishing noise limit. We propose a sufficient condition on the growth rate of the number of unknown parameters under which the posterior distribution is asymptotically normal. This growth condition is expressed explicitly in terms of the model dimension, the degree of ill-posedness of the inverse problem and the noise parameter. The theoretical results are applied to a Bayesian estimation of the medium parameter in an elliptic problem.
labels: cs=0, phy=0, math=1, stat=1, quantitative biology=0, quantitative finance=0
Dynamic Word Embeddings for Evolving Semantic Discovery
Word evolution refers to the changing meanings and associations of words throughout time, as a byproduct of human language evolution. By studying word evolution, we can infer social trends and language constructs over different periods of human history. However, traditional techniques such as word representation learning do not adequately capture the evolving language structure and vocabulary. In this paper, we develop a dynamic statistical model to learn time-aware word vector representation. We propose a model that simultaneously learns time-aware embeddings and solves the resulting "alignment problem". This model is trained on a crawled NYTimes dataset. Additionally, we develop multiple intuitive evaluation strategies of temporal word embeddings. Our qualitative and quantitative tests indicate that our method not only reliably captures this evolution over time, but also consistently outperforms state-of-the-art temporal embedding approaches on both semantic accuracy and alignment quality.
labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
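The dynamic word-embedding abstract above mentions an "alignment problem" between embeddings trained on different time slices. Below is a sketch of the standard post-hoc baseline, orthogonal Procrustes alignment; the paper instead learns aligned embeddings jointly, and the vocabulary size, dimension, and synthetic rotation here are invented purely for the demonstration.

```python
import numpy as np
from scipy.linalg import orthogonal_procrustes

rng = np.random.default_rng(0)
emb_t0 = rng.normal(size=(1000, 50))              # vocab x dim, time slice t
true_rot, _ = np.linalg.qr(rng.normal(size=(50, 50)))
emb_t1 = emb_t0 @ true_rot                        # same semantics, rotated axes

# Find the rotation R minimizing ||emb_t1 @ R - emb_t0||_F,
# so the two slices become directly comparable.
R, _ = orthogonal_procrustes(emb_t1, emb_t0)
aligned = emb_t1 @ R
print(np.allclose(aligned, emb_t0, atol=1e-8))    # True: rotation recovered
```

Once slices share a coordinate system, a word's drift over time can be read off as the distance between its vectors in consecutive slices.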
Constructing Words with High Distinct Square Densities
Fraenkel and Simpson showed that the number of distinct squares in a word of length n is bounded from above by 2n, since at most two distinct squares have their rightmost, or last, occurrence beginning at each position. Improvements by Ilie to $2n-\Theta(\log n)$ and by Deza et al. to 11n/6 rely on the combinatorics of FS-double-squares, positions at which the maximum number (two) of last occurrences of squares begin. In this paper, we first study how to maximize runs of FS-double-squares in the prefix of a word. We show that, for a given positive integer m, the minimum length of a word beginning with m FS-double-squares of equal length is 7m+3. We construct such a word and analyze its distinct-square-sequence as well as its distinct-square-density. We then generalize our construction. We also construct words with high distinct-square-densities that approach 5/6.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
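A small brute-force helper for the object counted in the distinct-squares abstract above: it enumerates the distinct square factors ww of a word, which is handy for checking constructed words against the 2n bound and the 5/6 density target. The cubic-time enumeration and the example word are our own validation aid, not the paper's construction.

```python
def distinct_squares(word: str) -> int:
    """Count distinct factors of the form ww (squares) in `word`. O(n^3)."""
    squares = set()
    n = len(word)
    for i in range(n):
        for half in range(1, (n - i) // 2 + 1):
            if word[i:i + half] == word[i + half:i + 2 * half]:
                squares.add(word[i:i + 2 * half])
    return len(squares)

w = "aabaabaa"
print(distinct_squares(w), "distinct squares in", w)  # 3: aa, aabaab, abaaba
```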
Annihilators of Koszul Homologies and Almost Complete Intersections
In this article, we propound a question on the annihilator of the Koszul homologies of a system of parameters of an almost complete intersection $R$. The question can be stated in terms of the acyclicity of certain (finite) residual approximation complexes whose $0$-th homologies are the residue field of $R$. We show that our question has an affirmative answer for certain almost complete intersection rings with small multiplicities, as well as for the first Koszul homology of any almost complete intersection. The statement about the first Koszul homology is shown to be equivalent to the Monomial Conjecture and thus follows from its validity.
labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Analyzing Knowledge Transfer in Deep Q-Networks for Autonomously Handling Multiple Intersections
We analyze how the knowledge to autonomously handle one type of intersection, represented as a Deep Q-Network, translates to other types of intersections (tasks). We view intersection handling as a deep reinforcement learning problem, which approximates the state-action Q function as a deep neural network. Using a traffic simulator, we show that directly copying a network trained for one type of intersection to another type of intersection decreases the success rate. We also show that when a network is pre-trained on Task A and then fine-tuned on Task B, the resulting network not only performs better on Task B than a network exclusively trained on Task A, but also retains knowledge of Task A. Finally, we examine a lifelong learning setting, where we train a single network on five different types of intersections sequentially, and show that the resulting network exhibits catastrophic forgetting of knowledge from previous tasks. This result suggests a need for a long-term memory component to preserve knowledge.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
From modelling of systems with constraints to generalized geometry and back to numerics
In this note we describe how some objects from generalized geometry appear in the qualitative analysis and numerical simulation of mechanical systems. In particular, we discuss double vector bundles and Dirac structures. It turns out that these objects can be naturally associated to systems with constraints; we recall the mathematical construction in the context of so-called implicit Lagrangian systems. We explain how they can be used to produce new numerical methods, which we call Dirac integrators. On the test example of a simple pendulum in a gravity field, we compare the Dirac integrators with classical explicit and implicit methods, paying special attention to the conservation of constraints. Then, on the more advanced example of the Ziegler column, we show that the choice of numerical method can indeed affect the conclusions of a qualitative analysis of the dynamics of mechanical systems. We also explain why we think that Dirac integrators are appropriate for this kind of system, by describing the relation with the notions of geometric degree of non-conservativity and kinematic structural stability.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
A Deeper Look at Experience Replay
Experience replay is widely used in various deep reinforcement learning (RL) algorithms; in this paper we rethink its utility. It introduces a new hyper-parameter, the memory buffer size, which needs careful tuning. Unfortunately, the importance of this hyper-parameter has long been underestimated in the community. In this paper we conduct a systematic empirical study of experience replay under various function representations. We show that a large replay buffer can significantly hurt performance. Moreover, we propose a simple O(1) method to remedy the negative influence of a large replay buffer. We demonstrate its utility in both a simple grid world and challenging domains like Atari games.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
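For the experience-replay abstract above, a sketch of the plain data structure whose size hyper-parameter the paper studies: a fixed-capacity buffer with O(1) insertion and uniform sampling. The abstract does not spell out the paper's specific O(1) remedy, so this shows only the generic buffer, with an invented transition format.

```python
import random

class ReplayBuffer:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.storage = []
        self.pos = 0                              # next slot to overwrite

    def add(self, transition):                    # transition = (s, a, r, s', done)
        if len(self.storage) < self.capacity:
            self.storage.append(transition)
        else:
            self.storage[self.pos] = transition   # overwrite oldest entry, O(1)
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size: int):
        return random.sample(self.storage, batch_size)  # uniform minibatch

buf = ReplayBuffer(capacity=3)
for t in range(5):
    buf.add((t, 0, 1.0, t + 1, False))
print(buf.sample(2))                              # only the 3 most recent survive
```

The capacity argument is exactly the buffer-size hyper-parameter whose effect the paper measures.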
Dirac operators with $W^{1,\infty}$-potential under codimension one collapse
We study the behavior of the spectrum of the Dirac operator together with a symmetric $W^{1, \infty}$-potential on spin manifolds under a collapse of codimension one with bounded sectional curvature and diameter. If there is an induced spin structure on the limit space $N$ then there are convergent eigenvalues which converge to the spectrum of a first order differential operator $D$ on $N$ together with a symmetric $W^{1,\infty}$-potential. If $N$ is orientable and the dimension of the limit space is even then $D$ is the Dirac operator $D^N$ on $N$ and if the dimension of the limit space is odd, then $D = D^N \oplus -D^N$.
labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Tunable Superconducting Qubits with Flux-Independent Coherence
We have studied the impact of low-frequency magnetic flux noise upon superconducting transmon qubits with various levels of tunability. We find that qubits with weaker tunability exhibit dephasing that is less sensitive to flux noise. This insight was used to fabricate qubits where dephasing due to flux noise was suppressed below other dephasing sources, leading to flux-independent dephasing times T2* ~ 15 us over a tunable range of ~340 MHz. Such tunable qubits have the potential to create high-fidelity, fault-tolerant qubit gates and fundamentally improve scalability for a quantum processor.
labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Kulish-Sklyanin type models: integrability and reductions
We start with a Riemann-Hilbert problem (RHP) related to a BD.I-type symmetric spaces $SO(2r+1)/S(O(2r-2s +1)\otimes O(2s))$, $s\geq 1$. We consider two Riemann-Hilbert problems: the first formulated on the real axis $\mathbb{R}$ in the complex $\lambda$-plane; the second one is formulated on $\mathbb{R} \oplus i\mathbb{R}$. The first RHP for $s=1$ allows one to solve the Kulish-Sklyanin (KS) model; the second RHP is relevant for a new type of KS model. An important example for nontrivial deep reductions of KS model is given. Its effect on the scattering matrix is formulated. In particular we obtain new 2-component NLS equations. Finally, using the Wronskian relations we demonstrate that the inverse scattering method for KS models may be understood as a generalized Fourier transforms. Thus we have a tool to derive all their fundamental properties, including the hierarchy of equations and the hierarchy of their Hamiltonian structures.
labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Takiff algebras with polynomial rings of symmetric invariants
Extending results of Rais-Tauvel, Macedo-Savage, and Arakawa-Premet, we prove that under mild restrictions on the Lie algebra $\mathfrak q$ having the polynomial ring of symmetric invariants, the m-th Takiff algebra of $\mathfrak q$, $\mathfrak q\langle m\rangle$, also has a polynomial ring of symmetric invariants.
labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Can We Prove Time Protection?
Timing channels are a significant and growing security threat in computer systems, with no established solution. We have recently argued that the OS must provide time protection, in analogy to the established memory protection, to protect applications from information leakage through timing channels. Based on a recently-proposed implementation of time protection in the seL4 microkernel, we investigate how such an implementation could be formally proved to prevent timing channels. We postulate that this should be possible by reasoning about a highly abstracted representation of the shared hardware resources that cause timing channels.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
On some Graphs with a Unique Perfect Matching
We show that deciding whether a given graph $G$ of size $m$ has a unique perfect matching as well as finding that matching, if it exists, can be done in time $O(m)$ if $G$ is either a cograph, or a split graph, or an interval graph, or claw-free. Furthermore, we provide a constructive characterization of the claw-free graphs with a unique perfect matching.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
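A generic (decidedly not O(m)) cross-check for the property studied above: a graph has a second perfect matching iff deleting some edge of a first perfect matching still leaves a perfect matching. The networkx-based brute force below is our own validation aid for small instances, not the paper's linear-time algorithms for cographs, split, interval, or claw-free graphs.

```python
import networkx as nx

def unique_perfect_matching(G):
    """Return the unique perfect matching if exactly one exists, else None."""
    M = nx.max_weight_matching(G, maxcardinality=True)
    if 2 * len(M) != G.number_of_nodes():
        return None                                # no perfect matching at all
    for u, v in M:
        H = G.copy()
        H.remove_edge(u, v)                        # force this matched edge out
        M2 = nx.max_weight_matching(H, maxcardinality=True)
        if 2 * len(M2) == G.number_of_nodes():
            return None                            # a second matching avoids (u, v)
    return M

# The path on 4 vertices has exactly one perfect matching: its two end edges.
print(unique_perfect_matching(nx.path_graph(4)))
```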
Trade-Offs in Stochastic Event-Triggered Control
This paper studies the optimal output-feedback control of a linear time-invariant system where a stochastic event-based scheduler triggers the communication between the sensor and the controller. The primary goal of the use of this type of scheduling strategy is to provide significant reductions in the usage of the sensor-to-controller communication and, in turn, improve energy expenditure in the network. In this paper, we aim to design an admissible control policy, which is a function of the observed output, to minimize a quadratic cost function while employing a stochastic event-triggered scheduler that preserves the Gaussian property of the plant state and the estimation error. For the infinite horizon case, we present analytical expressions that quantify the trade-off between the communication cost and control performance of such event-triggered control systems. This trade-off is confirmed quantitatively via numerical examples.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Third Harmonic THz Generation from Graphene in a Parallel-Plate Waveguide
Graphene as a zero-bandgap two-dimensional semiconductor with a linear electron band dispersion near the Dirac points has the potential to exhibit very interesting nonlinear optical properties. In particular, third harmonic generation of terahertz radiation should occur due to the nonlinear relationship between the crystal momentum and the current density. In this work, we investigate the terahertz nonlinear response of graphene inside a parallel-plate waveguide. We optimize the plate separation and Fermi energy of the graphene to maximize third harmonic generation, by maximizing the nonlinear interaction while minimizing the loss and phase mismatch. The results obtained show an increase by more than a factor of 100 in the power efficiency relative to a normal-incidence configuration for a 2 terahertz incident field.
labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Taming the Signal-to-Noise Problem in Lattice QCD by Phase Reweighting
Path integrals describing quantum many-body systems can be calculated with Monte Carlo sampling techniques, but average quantities are often subject to signal-to-noise ratios that degrade exponentially with time. A phase-reweighting technique inspired by recent observations of random walk statistics in correlation functions is proposed that allows energy levels to be extracted from late-time correlation functions with time-independent signal-to-noise ratios. Phase reweighting effectively includes dynamical refinement of source magnitudes but introduces a bias associated with the phase. This bias can be removed by performing an extrapolation, but at the expense of re-introducing a signal-to-noise problem. Lattice Quantum Chromodynamics calculations of the $\rho$ and nucleon masses and of the $\Xi\Xi$ binding energy show consistency between standard results obtained using earlier-time correlation functions and phase-reweighted results using late-time correlation functions inaccessible to standard statistical analysis methods.
labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Description of radiation damage in diamond sensors using an effective defect model
The BCML system is a beam monitoring device in the CMS experiment at the LHC. Poly-crystalline diamond sensors are used as detectors. Here, high particle rates occur from the colliding beams scattering particles outside the beam pipe. These particles cause defects, which act as traps for the ionization, thus reducing the charge collection efficiency (CCE). However, the loss in CCE was much more severe than expected. The reason why the CCE is so much worse in real experiments than in laboratory experiments is related to the rate of incident particles. At high particle rates the trapping rate of the ionization is so high compared with the detrapping rate that space charge builds up. This space charge locally reduces the internal electric field, which in turn increases the trapping rate and hence reduces the CCE even further. In order to connect these macroscopic measurements with the microscopic defects acting as traps for the ionization charge, the TCAD simulation program SILVACO was used. Two effective acceptor and donor levels were needed to fit the data. Using this effective defect model, the highly non-linear rate-dependent diamond polarization and the resulting signal loss could be simulated as a function of the particle rate environment.
labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Design and Analysis of Time-Invariant SC-LDPC Convolutional Codes With Small Constraint Length
In this paper, we deal with time-invariant spatially coupled low-density parity-check convolutional codes (SC-LDPC-CCs). Classic design approaches usually start from quasi-cyclic low-density parity-check (QC-LDPC) block codes and exploit suitable unwrapping procedures to obtain SC-LDPC-CCs. We show that the direct design of the SC-LDPC-CCs syndrome former matrix or, equivalently, the symbolic parity-check matrix, leads to codes with smaller syndrome former constraint lengths with respect to the best solutions available in the literature. We provide theoretical lower bounds on the syndrome former constraint length for the most relevant families of SC-LDPC-CCs, under constraints on the minimum length of cycles in their Tanner graphs. We also propose new code design techniques that approach or achieve such theoretical limits.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Predicting Organic Reaction Outcomes with Weisfeiler-Lehman Network
The prediction of organic reaction outcomes is a fundamental problem in computational chemistry. Since a reaction may involve hundreds of atoms, fully exploring the space of possible transformations is intractable. The current solution utilizes reaction templates to limit the space, but it suffers from coverage and efficiency issues. In this paper, we propose a template-free approach to efficiently explore the space of product molecules by first pinpointing the reaction center -- the set of nodes and edges where graph edits occur. Since only a small number of atoms contribute to the reaction center, we can directly enumerate candidate products. The generated candidates are scored by a Weisfeiler-Lehman Difference Network that models high-order interactions between changes occurring at nodes across the molecule. Our framework outperforms the top-performing template-based approach with a 10\% margin, while running orders of magnitude faster. Finally, we demonstrate that the model accuracy rivals the performance of domain experts.
labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
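The reaction-outcome paper above scores candidates with a Weisfeiler-Lehman Difference Network; a sketch of the underlying WL relabeling idea using networkx's graph-hash implementation (a real library function), on a hand-made toy molecular graph. The paper's network learns WL-style feature differences between reactants and candidate products rather than computing raw hashes.

```python
import networkx as nx

# Toy molecular graph: atoms as nodes, bonds as edges, element as node label.
mol = nx.Graph()
mol.add_edges_from([("C1", "C2"), ("C2", "O"), ("C1", "H1")])
nx.set_node_attributes(mol, {n: n[0] for n in mol}, "element")

# Iterative WL relabeling compresses each node's neighborhood into its label;
# after a few rounds the hash summarizes local structure of the whole graph.
h = nx.weisfeiler_lehman_graph_hash(mol, node_attr="element", iterations=3)
print(h)   # equal hashes <=> WL cannot distinguish two graphs
```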
Legendre curves and singularities of a ruled surface according to rotation minimizing frame
In this paper, Legendre curves on the unit tangent bundle are given using rotation minimizing (RM) vector fields. Ruled surfaces corresponding to these curves are represented. Singularities of these ruled surfaces are also analyzed and classified.
labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Field-free nucleation of antivortices and giant vortices in non-superconducting materials
Giant vortices with higher phase-winding than $2\pi$ are usually energetically unfavorable, but geometric symmetry constraints on a superconductor in a magnetic field are known to stabilize such objects. Here, we show via microscopic calculations that giant vortices can appear in intrinsically non-superconducting materials, even without any applied magnetic field. The enabling mechanism is the proximity effect to a host superconductor where a current flows, and we also demonstrate that antivortices can appear in this setup. Our results open the possibility to study electrically controllable topological defects in unusual environments, which do not have to be exposed to magnetic fields or intrinsically superconducting, but instead display other types of order.
labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Quantum criticality in photorefractive optics: vortices in laser beams and antiferromagnets
We study vortex patterns in a prototype nonlinear optical system: counterpropagating laser beams in a photorefractive crystal, with or without the background photonic lattice. The vortices are effectively planar and described by the winding number and the "flavor" index, stemming from the fact that we have two parallel beams propagating in opposite directions. The problem is amenable to the methods of statistical field theory and generalizes the Berezinsky-Kosterlitz-Thouless transition of the XY model to the "two-flavor" case. In addition to the familiar conductor and insulator phases, we also have the perfect conductor (vortex proliferation in both beams/"flavors") and the frustrated insulator (energy costs of vortex proliferation and vortex annihilation balance each other). In the presence of disorder in the background lattice, a novel phase appears which shows long-range correlations and absence of long-range order, thus being analogous to spin glasses. An important benefit of this approach is that qualitative behavior of patterns can be known without intensive numerical work over large areas of the parameter space. More generally, we would like to draw attention to connections between the (classical) pattern-forming systems in photorefractive optics and the methods of (quantum) condensed matter and field theory: on one hand, we use the field-theoretical methods (renormalization group, replica formalism) to analyze the patterns; on the other hand, the observed phases are analogous to those seen in magnetic systems, and make photorefractive optics a fruitful testing ground for condensed matter systems. As an example, we map our system to a doped $O(3)$ antiferromagnet with $\mathbb{Z}_2$ defects, which has the same structure of the phase diagram.
labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Burn-In Demonstrations for Multi-Modal Imitation Learning
Recent work on imitation learning has generated policies that reproduce expert behavior from multi-modal data. However, past approaches have focused only on recreating a small number of distinct, expert maneuvers, or have relied on supervised learning techniques that produce unstable policies. This work extends InfoGAIL, an algorithm for multi-modal imitation learning, to reproduce behavior over an extended period of time. Our approach involves reformulating the typical imitation learning setting to include "burn-in demonstrations" upon which policies are conditioned at test time. We demonstrate that our approach outperforms standard InfoGAIL in maximizing the mutual information between predicted and unseen style labels in road scene simulations, and we show that our method leads to policies that imitate expert autonomous driving systems over long time horizons.
labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Elliptic Determinantal Processes and Elliptic Dyson Models
We introduce seven families of stochastic systems of interacting particles in one-dimension corresponding to the seven families of irreducible reduced affine root systems. We prove that they are determinantal in the sense that all spatio-temporal correlation functions are given by determinants controlled by a single function called the spatio-temporal correlation kernel. For the four families ${A}_{N-1}$, ${B}_N$, ${C}_N$ and ${D}_N$, we identify the systems of stochastic differential equations solved by these determinantal processes, which will be regarded as the elliptic extensions of the Dyson model. Here we use the notion of martingales in probability theory and the elliptic determinant evaluations of the Macdonald denominators of irreducible reduced affine root systems given by Rosengren and Schlosser.
labels: cs=0, phy=1, math=1, stat=0, quantitative biology=0, quantitative finance=0
Searching for chemical signatures of brown dwarf formation
Recent studies have shown that close-in brown dwarfs in the mass range 35-55 M$_{\rm Jup}$ are almost depleted as companions to stars, suggesting that objects with masses above and below this gap might have different formation mechanisms. We determine the fundamental stellar parameters, as well as individual abundances, for a large sample of stars known to have a substellar companion in the brown dwarf regime. The sample is divided into stars hosting "massive" and "low-mass" brown dwarfs. Following previous works, a threshold of 42.5 M$_{\rm Jup}$ was considered. Our results confirm that stars with brown dwarf companions do not follow the well-established gas-giant planet metallicity correlation seen in main-sequence planet hosts. Stars harbouring "massive" brown dwarfs show metallicity and abundance distributions similar to those of stars without known planets or with low-mass planets. We find a tendency for stars harbouring "less-massive" brown dwarfs to have slightly larger metallicities, [X/Fe] values, and abundances of Sc II, Mn I, and Ni I in comparison with the stars having the massive brown dwarfs. The data suggest, as previously reported, that massive and low-mass brown dwarfs might present differences in period and eccentricity. We find evidence of a non-metallicity-dependent mechanism for the formation of massive brown dwarfs. Our results agree with a scenario in which massive brown dwarfs are formed as stars. At high metallicities, the core-accretion mechanism might become efficient in the formation of low-mass brown dwarfs, while at lower metallicities low-mass brown dwarfs could form by gravitational instability in turbulent protostellar discs.
0
1
0
0
0
0
The Klein Paradox: A New Treatment
The Dirac equation requires a treatment of the step potential that differs fundamentally from the traditional treatment, because the Dirac plane waves, besides momentum and spin, are characterized by a quantum number with the physical meaning of the sign of charge. Since the Hermitian operator corresponding to this quantum number does not commute with the step potential, the time displacement parameter used in the ansatz of the stationary state does not have the physical meaning of energy. Therefore there are no paradoxical values of the energy. A new solution of the Dirac equation with a step potential is obtained. This solution, again, allows for phenomena of the Klein paradox type, but in addition it contains a positron amplitude localized at the threshold point of the step potential.
0
1
0
0
0
0
What are the most important factors that influence the changes in London Real Estate Prices? How to quantify them?
In recent years, the real estate industry has captured government and public attention around the world. The factors influencing real estate prices are diversified and complex. However, due to the limitations and one-sidedness of their respective views, previous studies have not provided a sufficient theoretical basis for the fluctuation of house prices and their influential factors. The purpose of this paper is to build a housing price model that offers a scientific and objective analysis of London's real estate market trends from 1996 to 2016, and to propose some countermeasures to reasonably control house prices. Specifically, the paper analyzes eight factors affecting house prices from two aspects, housing supply and demand, and identifies the factor of vital importance to the increase of housing price per square meter. The problem of a high level of multicollinearity between these factors is solved by using principal component analysis.
0
0
0
1
0
1
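To illustrate the multicollinearity treatment described in the abstract above, here is a minimal, hypothetical Python sketch: eight synthetic, strongly correlated factors stand in for the paper's supply/demand variables, PCA decorrelates them, and a regression is fit on the retained components. The data, feature count, and 95% variance threshold are illustrative assumptions, not the paper's actual setup.

```python
# Hypothetical sketch: PCA to address multicollinearity among correlated
# housing-market factors before regressing on price.  Data are synthetic.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 200
base = rng.normal(size=n)
# Eight strongly correlated synthetic factors (stand-ins for supply/demand variables).
X = np.column_stack([base + 0.1 * rng.normal(size=n) for _ in range(8)])
price = 3.0 * base + rng.normal(scale=0.5, size=n)

pca = PCA(n_components=0.95)      # keep components explaining 95% of the variance
Z = pca.fit_transform(X)          # decorrelated principal components
model = LinearRegression().fit(Z, price)
print(pca.n_components_, round(model.score(Z, price), 3))
```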
A Multi-Wavelength Analysis of Dust and Gas in the SR 24S Transition Disk
We present new Atacama Large Millimeter/submillimeter Array (ALMA) 1.3 mm continuum observations of the SR 24S transition disk with an angular resolution $\lesssim0.18"$ (12 au radius). We perform a multi-wavelength investigation by combining the new data with previous ALMA data at 0.45 mm. The visibilities and images of the continuum emission at the two wavelengths are well characterized by ring-like emission. Visibility modeling finds that the ring-like emission is narrower at longer wavelengths, in good agreement with models of dust trapping in pressure bumps, although there are complex residuals that suggest potentially asymmetric structures. The 0.45 mm emission has a shallower profile inside the central cavity than the 1.3 mm emission. In addition, we find that the $^{13}$CO and C$^{18}$O (J=2-1) emission peaks at the center of the continuum cavity. We do not detect either continuum or gas emission from the northern companion to this system (SR 24N), which is itself a binary system. The upper limit for the dust disk mass of SR 24N is $\lesssim 0.12\,M_{\bigoplus}$, which gives a disk mass ratio in dust between the two components of $M_{\mathrm{dust, SR\,24S}}/M_{\mathrm{dust, SR\,24N}}\gtrsim840$. The current ALMA observations may imply either that planets have already formed in the SR 24N disk, or that dust growth to mm sizes is inhibited there and only warm gas, as seen in ro-vibrational CO emission inside the truncation radii of the binary, is present.
0
1
0
0
0
0
Step bunching with both directions of the current: Vicinal W(110) surfaces versus atomistic scale model
We report for the first time the observation of bunching of monoatomic steps on vicinal W(110) surfaces induced by step-up or step-down currents across the steps. Measurements reveal that the size scaling exponent $\gamma$, connecting the maximal slope of a bunch with its height, differs depending on the current direction. We provide a numerical perspective by using an atomistic scale model with a conserved surface flux to mimic the experimental conditions, and also show for the first time that there is an interval of parameters in which the vicinal surface is unstable against step bunching for both directions of the adatom drift.
0
1
0
0
0
0
Causal Bandits with Propagating Inference
Bandits provide a framework for designing sequential experiments. In each experiment, a learner selects an arm $A \in \mathcal{A}$ and obtains an observation corresponding to $A$. Theoretically, the tight regret lower bound for the general bandit problem is polynomial with respect to the number of arms $|\mathcal{A}|$. This makes bandits incapable of handling an exponentially large number of arms, hence the bandit problem with side-information is often considered to overcome this lower bound. Recently, a bandit framework over a causal graph was introduced, where the structure of the causal graph is available as side-information. A causal graph is a fundamental model that is frequently used with a variety of real problems. In this setting, the arms are identified with interventions on a given causal graph, and the effect of an intervention propagates throughout the causal graph. The task is to find the best intervention that maximizes the expected value on a target node. Existing algorithms for causal bandits overcame the $\Omega(\sqrt{|\mathcal{A}|/T})$ simple-regret lower bound; however, they work only when the interventions $\mathcal{A}$ are localized around a single node (i.e., an intervention propagates only to its neighbors). We propose a novel causal bandit algorithm for an arbitrary set of interventions, which can propagate throughout the causal graph. We also show that it achieves an $O(\sqrt{ \gamma^*\log(|\mathcal{A}|T) / T})$ regret bound, where $\gamma^*$ is determined by the causal graph structure. In particular, if the in-degree of the causal graph is bounded, then $\gamma^* = O(N^2)$, where $N$ is the number of nodes.
0
0
0
1
0
0
The 2D Tree Sliding Window Discrete Fourier Transform
We present a new algorithm for the 2D Sliding Window Discrete Fourier Transform (SWDFT). Our algorithm avoids repeating calculations in overlapping windows by storing them in a tree data structure based on the ideas of the Cooley-Tukey Fast Fourier Transform (FFT). For an $N_0 \times N_1$ array and $n_0 \times n_1$ windows, our algorithm takes $O(N_0 N_1 n_0 n_1)$ operations. We provide a C implementation of our algorithm for the Radix-2 case, compare ours with existing algorithms, and show how our algorithm easily extends to higher dimensions.
1
0
0
1
0
0
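For context on the abstract above, the sketch below implements the naive baseline that a tree-based 2D SWDFT improves upon: one FFT per window position. The tree algorithm itself is not reproduced here; this is only an assumed reference implementation of the transform being computed.

```python
# Naive 2D sliding-window DFT baseline: one FFT per window position, so the
# cost is O(N0*N1*n0*n1*log(n0*n1)) rather than the paper's O(N0*N1*n0*n1).
import numpy as np

def naive_swdft2(x, n0, n1):
    """Return the n0-by-n1 DFT of every window of an N0-by-N1 array."""
    N0, N1 = x.shape
    out = np.empty((N0 - n0 + 1, N1 - n1 + 1, n0, n1), dtype=complex)
    for i in range(N0 - n0 + 1):
        for j in range(N1 - n1 + 1):
            out[i, j] = np.fft.fft2(x[i:i + n0, j:j + n1])
    return out

x = np.random.rand(16, 16)
print(naive_swdft2(x, 4, 4).shape)   # (13, 13, 4, 4)
```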
Towards security defect prediction with AI
In this study, we investigate the limits of the current state-of-the-art AI system for detecting buffer overflows and compare it with current static analysis tools. To do so, we developed a code generator, s-bAbI, capable of producing an arbitrarily large number of code samples of controlled complexity. We found that the static analysis engines we examined have good precision, but poor recall on this dataset, except for a sound static analyzer that has good precision and recall. We found that the state-of-the-art AI system, a memory network modeled after Choi et al. [1], can achieve similar performance to the static analysis engines, but requires an exhaustive amount of training data to do so. Our work points towards future approaches that may solve these problems; namely, using representations of code that can capture appropriate scope information and using deep learning methods that are able to perform arithmetic operations.
0
0
0
1
0
0
Pseudo-Recursal: Solving the Catastrophic Forgetting Problem in Deep Neural Networks
In general, neural networks are not currently capable of learning tasks in a sequential fashion. When a novel, unrelated task is learnt by a neural network, it substantially forgets how to solve previously learnt tasks. One of the original solutions to this problem is pseudo-rehearsal, which involves learning the new task while rehearsing generated items representative of the previous task(s). This is very effective for simple tasks. However, pseudo-rehearsal has not yet been successfully applied to very complex tasks because in these tasks it is difficult to generate representative items. We accomplish pseudo-rehearsal by using a Generative Adversarial Network to generate items so that our deep network can learn to sequentially classify the CIFAR-10, SVHN and MNIST datasets. After training on all tasks, our network loses only 1.67% absolute accuracy on CIFAR-10 and gains 0.24% absolute accuracy on SVHN. Our model's performance is a substantial improvement compared to the current state-of-the-art solution.
0
0
0
1
0
0
Multivariate Hadamard self-similarity: testing fractal connectivity
While scale invariance is commonly observed in each component of real-world multivariate signals, it is also often the case that the inter-component correlation structure is not fractally connected, i.e., its scaling behavior is not determined by that of the individual components. To model this situation in a versatile manner, we introduce a class of multivariate Gaussian stochastic processes called Hadamard fractional Brownian motion (HfBm). Its theoretical study sheds light on the issues raised by the joint requirement of entry-wise scaling and departures from fractal connectivity. An asymptotically normal wavelet-based estimator for its scaling parameter, called the Hurst matrix, is proposed, as well as asymptotically valid confidence intervals. The latter are accompanied by original finite-sample procedures for computing confidence intervals and testing fractal connectivity from a single, finite-size observation. Monte Carlo simulation studies are used to assess the estimation performance as a function of the (finite) sample size, and to quantify the impact of omitting wavelet cross-correlation terms. The simulation studies are shown to validate the use of approximate confidence intervals, together with the significance level and power of the fractal connectivity test. The test performance and properties are further studied as functions of the HfBm parameters.
0
0
1
1
0
0
Efficient barycentric point sampling on meshes
We present an easy-to-implement and efficient analytical inversion algorithm for the unbiased random sampling of a set of points on a triangle mesh whose surface density is specified by barycentric interpolation of non-negative per-vertex weights. The correctness of the inversion algorithm is verified via statistical tests, and we show that it is faster on average than rejection sampling.
1
0
0
0
0
0
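As a point of comparison for the abstract above, here is a minimal sketch of the rejection-sampling baseline on a single triangle: barycentric coordinates are drawn uniformly and accepted with probability proportional to the interpolated per-vertex weight. The paper's analytical inversion replaces exactly this accept/reject loop; the function name and interface are my own.

```python
# Rejection-sampling baseline: a point on one triangle with density given by
# barycentric interpolation of non-negative per-vertex weights w0, w1, w2.
import random

def sample_weighted_triangle(w0, w1, w2):
    """Barycentric coords (a, b, c) with density proportional to a*w0 + b*w1 + c*w2."""
    wmax = max(w0, w1, w2)
    while True:
        u, v = random.random(), random.random()
        if u + v > 1.0:                 # fold into the lower-left half of the square
            u, v = 1.0 - u, 1.0 - v
        a, b, c = 1.0 - u - v, u, v     # uniform barycentric coordinates
        if random.random() * wmax <= a * w0 + b * w1 + c * w2:
            return a, b, c

print(sample_weighted_triangle(1.0, 0.0, 0.0))  # samples cluster near vertex 0
```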
Femtosecond Optical Superregular Breathers
Superregular (SR) breathers are nonlinear wave structures formed by a unique nonlinear superposition of pairs of quasi-Akhmediev breathers. They describe a complete scenario of modulation instability that develops from localized small perturbations, as well as an unusual quasi-annihilation of breather collisions. Here, we demonstrate that femtosecond optical SR breathers in optical fibers exhibit intriguing half-transition and full-suppression states, which are absent in the picosecond regime governed by the standard nonlinear Schrödinger equation. In particular, the full-suppression mode, which is strictly associated with the regime of vanishing growth rate of modulation instability, reveals a crucial non-amplifying nonlinear dynamics of localized small perturbations. We numerically confirm the robustness of such different SR modes excited from ideal and nonideal initial states in both integrable and nonintegrable cases.
0
1
0
0
0
0
Diversity, Topology, and the Risk of Node Re-identification in Labeled Social Graphs
Real network datasets provide significant benefits for understanding phenomena such as information diffusion or network evolution. Yet the privacy risks raised by sharing real graph datasets, even when stripped of user identity information, are significant. When nodes have associated attributes, the privacy risks increase. In this paper we quantitatively study the impact of binary node attributes on node privacy by employing machine-learning-based re-identification attacks and exploring the interplay between graph topology and attribute placement. Our experiments show that the population's diversity on the binary attribute consistently degrades anonymity.
1
0
0
0
0
0
Braid group action and root vectors for the $q$-Onsager algebra
We define two algebra automorphisms $T_0$ and $T_1$ of the $q$-Onsager algebra $B_c$, which provide an analog of G. Lusztig's braid group action for quantum groups. These automorphisms are used to define root vectors which give rise to a PBW basis for $B_c$. We show that the root vectors satisfy $q$-analogs of Onsager's original commutation relations. The paper is much inspired by I. Damiani's construction and investigation of root vectors for the quantized enveloping algebra of $\widehat{\mathfrak{sl}}_2$.
0
0
1
0
0
0
Underapproximation of Reach-Avoid Sets for Discrete-Time Stochastic Systems via Lagrangian Methods
We examine Lagrangian techniques for computing underapproximations of finite-time horizon, stochastic reach-avoid level sets for discrete-time, nonlinear systems. We use the concept of reachability of a target tube from the control literature to define robust reach-avoid sets which are parameterized by the target set, the safe set, and the set from which the disturbance is drawn. We unify two existing Lagrangian approaches to compute these sets and establish that the robust reach-avoid sets admit an optimal control policy that is a Markov policy. Based on these results, we characterize the subset of the disturbance space whose corresponding robust reach-avoid set for the given target and safe set is a guaranteed underapproximation of the stochastic reach-avoid level set of interest. The proposed approach dramatically improves the computational efficiency of obtaining an underapproximation of stochastic reach-avoid level sets when compared to the traditional approaches based on gridding. Our method, while conservative, does not rely on a grid, implying scalability as permitted by the known computational geometry constraints. We demonstrate the method on two examples: a simple two-dimensional integrator, and a space vehicle rendezvous-docking problem.
1
0
1
0
0
0
A Domain-Specific Language and Editor for Parallel Particle Methods
Domain-specific languages (DSLs) are of increasing importance in scientific high-performance computing to reduce development costs, raise the level of abstraction and, thus, ease scientific programming. However, designing and implementing DSLs is not an easy task, as it requires knowledge of the application domain and experience in language engineering and compilers. Consequently, many DSLs follow a weak approach using macros or text generators, which lack many of the features that make a DSL comfortable for programmers. Some of these features (e.g., syntax highlighting, type inference, error reporting, and code completion) are easily provided by language workbenches, which combine language engineering techniques and tools in a common ecosystem. In this paper, we present the Parallel Particle-Mesh Environment (PPME), a DSL and development environment for numerical simulations based on particle methods and hybrid particle-mesh methods. PPME uses the Meta Programming System (MPS), a projectional language workbench. PPME is the successor of the Parallel Particle-Mesh Language (PPML), a Fortran-based DSL that used conventional implementation strategies. We analyze and compare both languages and demonstrate how the programmer's experience can be improved using static analyses and projectional editing. Furthermore, we present an explicit domain model for particle abstractions and the first formal type system for particle methods.
1
0
0
0
0
0
Manifold regularization based on Nyström type subsampling
In this paper, we study Nyström type subsampling for large-scale kernel methods to reduce the computational complexity of big data. We discuss a multi-penalty regularization scheme based on Nyström type subsampling which is motivated by well-studied manifold regularization schemes. We develop a theoretical analysis of the multi-penalty least-squares regularization scheme under a general source condition in the vector-valued function setting; therefore the results can also be applied to multi-task learning problems. We achieve the optimal minimax convergence rates of multi-penalty regularization using the concept of effective dimension for an appropriate subsampling size. We discuss an aggregation approach based on a linear function strategy to combine various Nyström approximants. Finally, we demonstrate the performance of multi-penalty regularization based on Nyström type subsampling on the Caltech-101 data set for multi-class image classification and the NSL-KDD benchmark data set for the intrusion detection problem.
1
0
0
1
0
0
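To make the Nyström idea in the abstract above concrete, the sketch below approximates a full RBF kernel matrix from a small set of subsampled landmark points. Landmark count, kernel, and data are illustrative assumptions; the paper's multi-penalty scheme is not reproduced.

```python
# Nyström approximation sketch: rebuild an n-by-n kernel matrix from m << n
# sampled columns, cutting storage from O(n^2) to O(n*m) and solves from
# O(n^3) to O(n*m^2).
import numpy as np

def rbf(A, B, gamma=1.0):
    # Gaussian (RBF) kernel between row vectors of A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
idx = rng.choice(len(X), size=50, replace=False)   # subsampled landmarks
K_nm = rbf(X, X[idx])
K_mm = rbf(X[idx], X[idx])
K_approx = K_nm @ np.linalg.pinv(K_mm) @ K_nm.T
K_full = rbf(X, X)
print("relative error:", np.linalg.norm(K_approx - K_full) / np.linalg.norm(K_full))
```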
Path integral molecular dynamics with surface hopping for thermal equilibrium sampling of nonadiabatic systems
In this work, a novel ring polymer representation for multi-level quantum systems is proposed for thermal average calculations. The proposed representation keeps the discreteness of the electronic states: besides position and momentum, each bead in the ring polymer is also characterized by a surface index indicating the electronic energy surface. A path integral molecular dynamics with surface hopping (PIMD-SH) scheme is also developed to sample the equilibrium distribution of the ring polymer configuration space. The PIMD-SH sampling method is validated theoretically and by numerical examples.
0
1
1
0
0
0
The interplay of the collisionless nonlinear thin-shell instability with the ion acoustic instability
The nonlinear thin-shell instability (NTSI) may explain some of the turbulent hydrodynamic structures that are observed close to the collision boundary of energetic astrophysical outflows. It develops in nonplanar shells that are bounded on either side by a hydrodynamic shock, provided that the amplitude of the seed oscillations is sufficiently large. The hydrodynamic NTSI has a microscopic counterpart in collisionless plasma. A sinusoidal displacement of a thin shell, which is formed by the collision of two clouds of unmagnetized electrons and protons, grows and saturates on timescales of the order of the inverse proton plasma frequency. Here we increase the wavelength of the seed perturbation by a factor of 4 compared to that in a previous study. As in the case of the hydrodynamic NTSI, the increase in wavelength reduces the growth rate of the microscopic NTSI. The prolonged growth time of the microscopic NTSI allows the waves, which are driven by the competing ion acoustic instability, to grow to a large amplitude before the NTSI saturates, and they disrupt the latter. The ion acoustic instability thus imposes a limit on the largest wavelength that can be destabilized by the NTSI in collisionless plasma. The limit can be overcome by binary collisions. We bring forward evidence for an overstability of the collisionless NTSI.
0
1
0
0
0
0
Multiple Scaled Contaminated Normal Distribution and Its Application in Clustering
The multivariate contaminated normal (MCN) distribution represents a simple heavy-tailed generalization of the multivariate normal (MN) distribution to model elliptically contoured scatter in the presence of mild outliers, referred to as "bad" points. The MCN can also automatically detect bad points. The price of these advantages is two additional parameters, both with specific and useful interpretations: the proportion of good observations and the degree of contamination. However, points may be bad in some dimensions but good in others. The use of an overall proportion of good observations and of an overall degree of contamination is limiting. To overcome this limitation, we propose a multiple scaled contaminated normal (MSCN) distribution with a proportion of good observations and a degree of contamination for each dimension. Once the model is fitted, each observation has a posterior probability of being good with respect to each dimension. Thanks to this probability, we have a method for simultaneous directional robust estimation of the parameters of the MN distribution based on down-weighting, and for the automatic directional detection of bad points by means of maximum a posteriori probabilities. The term "directional" is added to specify that the method works separately for each dimension. Mixtures of MSCN distributions are also proposed as an application of the proposed model to robust clustering. An extension of the EM algorithm is used for parameter estimation based on the maximum likelihood approach. Real and simulated data are used to show the usefulness of our mixture with respect to well-established mixtures of symmetric distributions with heavy tails.
0
0
0
1
0
0
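As a toy illustration of the contamination mechanism in the abstract above, the sketch below evaluates a univariate contaminated normal density and the posterior probability that a point is "good". The parameter values are arbitrary; the MSCN model applies such a contamination separately per dimension, which this one-dimensional sketch does not attempt.

```python
# Univariate contaminated normal: (1-alpha)*N(mu, s^2) + alpha*N(mu, eta*s^2),
# with eta > 1 inflating the variance of the "bad" component.
from scipy.stats import norm

def cn_components(x, mu=0.0, s=1.0, alpha=0.1, eta=9.0):
    good = (1 - alpha) * norm.pdf(x, loc=mu, scale=s)
    bad = alpha * norm.pdf(x, loc=mu, scale=(eta ** 0.5) * s)
    return good, bad

good, bad = cn_components(4.0)
print("density:", good + bad, " P(good | x) =", good / (good + bad))  # small: looks like an outlier
```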
The Galaxy-Halo Connection Over The Last 13.3 Gyrs
We present new determinations of the stellar-to-halo mass relation (SHMR) at $z=0-10$ that match the evolution of the galaxy stellar mass function, the SFR$-M_*$ relation, and the cosmic star formation rate. We utilize a compilation of 40 observational studies from the literature and correct them for potential biases. Using our robust determinations of halo mass assembly and the SHMR, we infer star formation histories, merger rates, and structural properties for average galaxies, combining star-forming and quenched galaxies. Our main findings are: (1) The halo mass $M_{50}$ above which 50% of galaxies are quenched coincides with sSFR/sMAR$\sim1$, where sMAR is the specific halo mass accretion rate. (2) $M_{50}$ increases with redshift, presumably due to cold streams being more efficient at high redshift while virial shocks and AGN feedback become more relevant at lower redshifts. (3) The ratio sSFR/sMAR has a peak value, which occurs around $M_{\rm vir}\sim2\times10^{11}M_{\odot}$. (4) The stellar mass density within 1 kpc, $\Sigma_1$, is a good indicator of the galactic global sSFR. (5) Galaxies are statistically quenched after they reach a maximum in $\Sigma_1$, consistent with theoretical expectations of the gas compaction model; this maximum depends on redshift. (6) In-situ star formation is responsible for most galactic stellar mass growth, especially for lower-mass galaxies. (7) Galaxies grow inside out. The marked change in the slope of the size-mass relation when galaxies became quenched, from $d\log R_{\rm eff}/d\log M_*\sim0.35$ to $\sim2.5$, could be the result of dry minor mergers.
0
1
0
0
0
0
State of the art of Trust and Reputation Systems in E-Commerce Context
This article proposes an in-depth comparative study of the most popular, used, and analyzed Trust and Reputation Systems (TRS) according to the trust and reputation literature, in terms of specific trustworthiness criteria. This survey relies on a selection of trustworthiness criteria that analyze and evaluate the maturity and effectiveness of TRS. These criteria describe the utility, usability, performance, and effectiveness of the TRS. We also provide a summary table comparing the TRS within a detailed and granular selection of trust and reputation aspects.
1
0
0
0
0
0
Placing your Coins on a Shelf
We consider the problem of packing a family of disks "on a shelf", that is, such that each disk touches the $x$-axis from above and such that no two disks overlap. We prove that the problem of minimizing the distance between the leftmost point and the rightmost point of any disk is NP-hard. On the positive side, we show how to approximate this problem within a factor of 4/3 in $O(n \log n)$ time, and provide an $O(n \log n)$-time exact algorithm for a special case, in particular when the ratio between the largest and smallest radius is at most four.
1
0
1
0
0
0
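To make the geometry in the abstract above concrete: two disks of radii $r_i$ and $r_j$ that both touch the x-axis overlap exactly when their centers are closer than $2\sqrt{r_i r_j}$ horizontally. The sketch below uses this fact to place disks greedily left-to-right for a fixed order and brute-forces tiny instances; it is an illustrative toy, not the paper's 4/3-approximation algorithm.

```python
# Toy sketch: span of a shelf packing under greedy left-to-right placement,
# minimized over all orderings of a tiny instance (NP-hard in general).
from itertools import permutations
from math import sqrt

def span(radii):
    # Centers of disks i, j must be at least 2*sqrt(r_i*r_j) apart horizontally.
    xs = []
    for r in radii:
        x = max((xj + 2.0 * sqrt(r * rj) for xj, rj in zip(xs, radii)), default=0.0)
        xs.append(x)
    return max(x + r for x, r in zip(xs, radii)) - min(x - r for x, r in zip(xs, radii))

radii = (1.0, 0.2, 0.6, 0.3)
best = min(permutations(radii), key=span)
print(best, round(span(best), 3))
```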
Randomized Rumor Spreading in Ad Hoc Networks with Buffers
The randomized rumor spreading problem has generated considerable interest in the area of distributed algorithms due to its simplicity, robustness, and wide range of applications. The two most popular communication paradigms used for spreading the rumor are the Push and Pull algorithms. The former protocol allows nodes to send the rumor to a randomly selected neighbor at each step, while the latter is based on sending a request and downloading the rumor from a randomly selected neighbor, provided the neighbor has it. Previous analyses of these protocols assumed that every node could process all such push/pull operations within a single step, which can be unrealistic in practical situations. We therefore propose a new framework for analyzing rumor spreading that accommodates buffers, in which a node can process only one push/pull message or push request at a time. We develop upper and lower bounds for randomized rumor spreading time in the new framework, and compare the results with their analogues in the old framework without buffers.
1
0
0
0
0
0
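For intuition about the abstract above, here is a toy simulation of the classic buffer-free Push protocol on a complete graph; the buffered model studied in the paper would additionally queue incoming messages and process one per step. All details below are illustrative assumptions.

```python
# Round-based Push on the complete graph K_n: every informed node pushes the
# rumor to one uniformly random node per round.
import random

def push_rounds(n):
    informed = {0}
    rounds = 0
    while len(informed) < n:
        informed |= {random.randrange(n) for _ in range(len(informed))}
        rounds += 1
    return rounds

print(sum(push_rounds(1000) for _ in range(20)) / 20)  # roughly log2(n) + ln(n)
```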
A simple and efficient feedback control strategy for wastewater denitrification
Due to severe mathematical modeling and calibration difficulties, open-loop feedforward control is mainly employed today for wastewater denitrification, which is a key ecological issue. In order to improve the resulting poor performance, a new model-free control setting and its corresponding "intelligent" controller are introduced. The pitfall of regulating two output variables via a single input variable is overcome by also introducing an open-loop knowledge-based control deduced from the plant behavior. Several convincing computer simulations are presented and discussed.
1
0
1
0
0
0
The interaction of Airy waves and solitons in the three-wave system
We employ the generic three-wave system, with the $\chi^{(2)}$ interaction between two components of the fundamental-frequency (FF) wave and the second-harmonic (SH) one, to consider collisions of truncated Airy waves (TAWs) and three-wave solitons in a setting which is not available in other nonlinear systems. The advantage is that the single-wave TAWs, carried by either one of the FF components, are not distorted by the nonlinearity and are stable, and three-wave solitons are stable too in the same system. The collision between mutually symmetric TAWs, carried by the different FF components, transforms them into a set of solitons, the number of which decreases with the increase of the total power. The TAW absorbs an incident small-power soliton, and a high-power soliton absorbs the TAW. Between these limits, the collision with an incident soliton converts the TAW into two solitons, with a remnant of the TAW attached to one of them, or leads to the formation of a complex TAW-soliton bound state. At large velocities, the collisions become quasi-elastic.
0
1
0
0
0
0
A penalty criterion for score forecasting in soccer
This note proposes a penalty criterion for assessing correct score forecasting in a soccer match. The penalty is based on hierarchical priorities for such a forecast, i.e., i) exact prediction of the Win, Draw or Loss outcome and ii) the normalized Euclidean distance between the actual and forecast scores. The procedure is illustrated on typical scores, and different alternatives for the penalty components are discussed.
0
0
0
1
0
0
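A hedged sketch of the kind of two-level penalty the note above describes: a fixed cost when the Win/Draw/Loss outcome is missed, plus a normalized Euclidean distance between scorelines. The weight w and the normalizer are my own illustrative choices, not the note's exact values.

```python
# Two-level score-forecast penalty: outcome miss dominates, distance refines.
from math import sqrt

def sign(x):
    return (x > 0) - (x < 0)

def penalty(forecast, actual, w=1.0, norm=5.0):
    fh, fa = forecast
    ah, aa = actual
    outcome_miss = int(sign(fh - fa) != sign(ah - aa))   # Win/Draw/Loss wrong?
    dist = sqrt((fh - ah) ** 2 + (fa - aa) ** 2) / norm  # normalized distance
    return w * outcome_miss + dist

print(penalty((2, 1), (1, 0)))  # correct outcome, wrong score -> distance only
print(penalty((0, 1), (1, 0)))  # wrong outcome -> fixed cost plus distance
```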
A robust RUV-testing procedure via gamma-divergence
Identification of differentially expressed genes (DE-genes) is commonly conducted in modern biomedical research. However, unwanted variation inevitably arises during the data collection process, which can make the detection results heavily biased. It is suggested to remove the unwanted variation while keeping the biological variation to ensure a reliable analysis result. Removing Unwanted Variation (RUV) was recently proposed for this purpose by virtue of negative control genes. On the other hand, outliers frequently appear in modern high-throughput genetic data and can heavily affect the performance of RUV and its downstream analysis. In this work, we propose a robust RUV-testing procedure via $\gamma$-divergence. The advantages of our method are twofold: (1) it does not involve any modeling for the outlier distribution, which is applicable to various situations; (2) it is easy to implement in the sense that its robustness is controlled by a single tuning parameter $\gamma$ of the $\gamma$-divergence, and a data-driven criterion is developed to select $\gamma$. In the Gender Study, our method can successfully remove unwanted variation, and is able to identify more DE-genes than conventional methods.
0
0
0
1
0
0
On the Distribution, Model Selection Properties and Uniqueness of the Lasso Estimator in Low and High Dimensions
We derive expressions for the finite-sample distribution of the Lasso estimator in the context of a linear regression model with normally distributed errors in low as well as in high dimensions by exploiting the structure of the optimization problem defining the estimator. In low dimensions we assume full rank of the regressor matrix and present expressions for the cumulative distribution function as well as the densities of the absolutely continuous parts of the estimator. Additionally, we establish an explicit formula for the correspondence between the Lasso and the least-squares estimator. We derive analogous results for the distribution in less explicit form in high dimensions where we make no assumptions on the regressor matrix at all. In this setting, we also investigate the model selection properties of the Lasso and show that possibly only a subset of models might be selected by the estimator, completely independently of the observed response vector. Finally, we present a condition for uniqueness of the estimator that is necessary as well as sufficient.
0
0
1
1
0
0
Transductive Boltzmann Machines
We present transductive Boltzmann machines (TBMs), which are the first to achieve transductive learning of the Gibbs distribution. While exact learning of the Gibbs distribution is impossible for the family of existing Boltzmann machines due to the combinatorial explosion of the sample space, TBMs overcome the problem by adaptively constructing the minimum required sample space from data to avoid unnecessary generalization. We theoretically provide a bias-variance decomposition of the KL divergence in TBMs to analyze its learnability, and empirically demonstrate that TBMs are superior to fully visible Boltzmann machines and the popularly used restricted Boltzmann machines in terms of efficiency and effectiveness.
0
0
0
1
0
0
Human-Robot Trust Integrated Task Allocation and Symbolic Motion Planning for Heterogeneous Multi-robot Systems
This paper presents a human-robot trust integrated task allocation and motion planning framework for multi-robot systems (MRS) performing a set of tasks concurrently. A set of parallel task specifications is conjoined with the MRS to synthesize a task allocation automaton. Each transition of the task allocation automaton is associated with the human's total trust in the corresponding robots. Here, the human-robot trust model is constructed with a dynamic Bayesian network (DBN) by considering individual robot performance, a safety coefficient, human cognitive workload, and an overall evaluation of the task allocation. Hence, a task allocation path with maximum encoded human-robot trust can be searched based on the current trust value of each robot in the task allocation automaton. Symbolic motion planning (SMP) is implemented for each robot after it obtains its sequence of actions. The task allocation path can be intermittently updated with this DBN-based trust model. The overall strategy is demonstrated by a simulation with 5 robots and 3 parallel subtask automata.
1
0
0
0
0
0
A five-decision testing procedure to infer on a unidimensional parameter
A statistical test can be seen as a procedure to produce a decision based on observed data, where some decisions consist of rejecting a hypothesis (yielding a significant result) and some do not, and where one controls the probability of making a wrong rejection at some pre-specified significance level. Whereas traditional hypothesis testing involves only two possible decisions (to reject or not to reject a null hypothesis), Kaiser's directional two-sided test as well as the more recently introduced testing procedure of Jones and Tukey involve three possible decisions to infer on a unidimensional parameter. The latter procedure assumes that a point null hypothesis is impossible (e.g., that two treatments cannot have exactly the same effect), allowing a gain of statistical power. There are however situations where a point hypothesis is indeed plausible, for example when considering hypotheses derived from Einstein's theories. In this article, we introduce a five-decision rule testing procedure, which combines the advantages of the testing procedures of Kaiser (no assumption that a point hypothesis is impossible) and of Jones and Tukey (higher power), allowing for a non-negligible (typically 20%) reduction of the sample size needed to reach a given statistical power to obtain a significant result, compared to the traditional approach.
0
0
1
1
0
0
PubMed 200k RCT: a Dataset for Sequential Sentence Classification in Medical Abstracts
We present PubMed 200k RCT, a new dataset based on PubMed for sequential sentence classification. The dataset consists of approximately 200,000 abstracts of randomized controlled trials, totaling 2.3 million sentences. Each sentence of each abstract is labeled with its role in the abstract using one of the following classes: background, objective, method, result, or conclusion. The purpose of releasing this dataset is twofold. First, the majority of datasets for sequential short-text classification (i.e., classification of short texts that appear in sequences) are small: we hope that releasing a new large dataset will help develop more accurate algorithms for this task. Second, from an application perspective, researchers need better tools to efficiently skim through the literature. Automatically classifying each sentence in an abstract would help researchers read abstracts more efficiently, especially in fields where abstracts may be long, such as the medical field.
1
0
0
1
0
0
Convergence radius of perturbative Lindblad driven non-equilibrium steady states
We address the problem of analyzing the radius of convergence of the perturbative expansion of non-equilibrium steady states of Lindblad-driven spin chains. A simple formal approach is developed for systematically computing the perturbative expansion of small driven systems. We consider the paradigmatic model of an open $XXZ$ spin-1/2 chain with boundary-supported ultralocal Lindblad dissipators and treat two different perturbative cases: (i) expansion in the system-bath coupling parameter and (ii) expansion in the driving (bias) parameter. In the first case (i) we find that the radius of convergence quickly shrinks with increasing system size, while in the second case (ii) we find that the convergence radius is always larger than $1$, and in particular it approaches $1$ from above as we change the anisotropy from the easy-plane ($XY$) to the easy-axis (Ising) regime.
0
1
0
0
0
0
A 2D metamaterial with auxetic out-of-plane behavior and non-auxetic in-plane behavior
Customarily, in-plane auxeticity and synclastic bending behavior (i.e., out-of-plane auxeticity) are not independent, the latter being a manifestation of the former. Basically, this is a feature of three-dimensional bodies. In contrast, two-dimensional bodies have more freedom to deform than three-dimensional ones. Here, we exploit this peculiarity and propose a two-dimensional honeycomb structure with out-of-plane auxetic behavior opposite to the in-plane one. With a suitable choice of the lattice constitutive parameters, in its continuum description such a structure can achieve the whole range of values for the bending Poisson coefficient, while retaining a membranal Poisson coefficient equal to 1. In particular, this structure can reach the extreme values, $-1$ and $+1$, of the bending Poisson coefficient. Analytical calculations are supported by numerical simulations, showing the accuracy of the continuum formulas in predicting the response of the discrete structure.
0
1
0
0
0
0
Isolated Loops in Quantum Feedback Networks
A scheme making use of an isolated feedback loop was recently proposed for creating an arbitrary bilinear Hamiltonian interaction between two multi-mode Linear Quantum Stochastic Systems (LQSSs). In this work we examine the presence of an isolated feedback loop in a general SLH network, and derive the modified Hamiltonian of the network due to the presence of the loop. In the case of a bipartite network with an isolated loop running through both parts, this results in modified Hamiltonians for each subnetwork, as well as a Hamiltonian interaction between them. As in the LQSS case, by engineering appropriate ports in each subnetwork, we may create desired interactions between them. Examples are provided that illustrate the general theory.
0
0
1
0
0
0
Distance-based Depths for Directional Data
Directional data are constrained to lie on the unit sphere of $\mathbb{R}^q$ for some $q \geq 2$. To address the lack of a natural ordering for such data, depth functions have been defined on spheres. However, the depths available either lack flexibility or are so computationally expensive that they can only be used for very small dimensions $q$. In this work, we improve on this by introducing a class of distance-based depths for directional data. Irrespective of the distance adopted, these depths can easily be computed in high dimensions too. We derive the main structural properties of the proposed depths and study how they depend on the distance used. We discuss the asymptotic and robustness properties of the corresponding deepest points. We show the practical relevance of the proposed depths in two applications, related to (i) spherical location estimation and (ii) supervised classification. For both problems, we show through simulation studies that distance-based depths have strong advantages over their competitors.
0
0
1
1
0
0
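As an illustration of the class described in the abstract above, the sketch below implements one simple, assumed example of a distance-based depth on the sphere, using the geodesic distance; it is not the paper's exact definition, but it shares the key property of being cheap to compute in any dimension.

```python
# Assumed toy instance of a distance-based depth on the unit sphere:
# depth(x) = 1 - (mean geodesic distance from x to the sample) / pi.
import numpy as np

def geodesic(x, y):
    # Great-circle distance between unit vectors.
    return np.arccos(np.clip(x @ y, -1.0, 1.0))

def distance_depth(x, sample):
    # Deep points have small average distance to the sample.
    return 1.0 - np.mean([geodesic(x, y) for y in sample]) / np.pi

rng = np.random.default_rng(2)
raw = rng.normal(size=(200, 3)) + np.array([0.0, 0.0, 4.0])  # concentrated near the north pole
sample = raw / np.linalg.norm(raw, axis=1, keepdims=True)
north, south = np.array([0.0, 0.0, 1.0]), np.array([0.0, 0.0, -1.0])
print(distance_depth(north, sample) > distance_depth(south, sample))  # True
```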
Two-dimensional Schrödinger symmetry and three-dimensional breathers and Kelvin-ripple complexes as quasi-massive-Nambu-Goldstone modes
Bose-Einstein condensates (BECs) confined in a two-dimensional (2D) harmonic trap are known to possess a hidden 2D Schrödinger symmetry, that is, the Schrödinger symmetry modified by a trapping potential. Spontaneous breaking of this symmetry gives rise to a breathing motion of the BEC, whose oscillation frequency is robustly determined by the strength of the harmonic trap. In this paper, we demonstrate that the concept of the 2D Schrödinger symmetry can be applied to predict the nature of three-dimensional (3D) collective modes propagating along a condensate confined in an elongated trap. We find three kinds of collective modes whose existence is robustly ensured by the Schrödinger symmetry, which are physically interpreted as one breather mode and two Kelvin-ripple complex modes, i.e., composite modes in which the vortex core and the condensate surface oscillate interactively. We provide analytical expressions for the dispersion relations (energy-momentum relations) of these modes using the Bogoliubov theory [D. A. Takahashi and M. Nitta, Ann. Phys. 354, 101 (2015)]. Furthermore, we point out that these modes can be interpreted as "quasi-massive-Nambu-Goldstone (NG) modes", that is, they have the properties of both quasi-NG and massive NG modes: quasi-NG modes appear when a symmetry of a part of a Lagrangian, which is not a symmetry of the full Lagrangian, is spontaneously broken, while massive NG modes appear when a modified symmetry is spontaneously broken.
0
1
0
0
0
0
Groupoid of morphisms of groupoids
In this paper we construct two groupoids from morphisms of groupoids, with one from a categorical viewpoint and the other from a geometric viewpoint. We show that for each pair of groupoids, the two kinds of groupoids of morphisms are equivalent. Then we study the automorphism groupoid of a groupoid.
0
0
1
0
0
0
Spin alignment of stars in old open clusters
Stellar clusters form by gravitational collapse of turbulent molecular clouds, with up to several thousand stars per cluster. They are thought to be the birthplace of most stars and therefore play an important role in our understanding of star formation, a fundamental problem in astrophysics. The initial conditions of the molecular cloud establish its dynamical history until the stellar cluster is born. However, the evolution of the cloud's angular momentum during cluster formation is not well understood. Current observations have suggested that turbulence scrambles the angular momentum of the cluster-forming cloud, preventing spin alignment amongst stars within a cluster. Here we use asteroseismology to measure the inclination angles of spin axes in 48 stars from the two old open clusters NGC 6791 and NGC 6819. The stars within each cluster show strong alignment. Three-dimensional hydrodynamical simulations of proto-cluster formation show that at least 50% of the initial proto-cluster kinetic energy has to be rotational in order to obtain strong stellar-spin alignment within a cluster. Our result indicates that the global angular momentum of the cluster-forming clouds was efficiently transferred to each star and that its imprint has survived for several gigayears since the clusters formed.
0
1
0
0
0
0
Global well-posedness of critical surface quasigeostrophic equation on the sphere
In this paper we prove global well-posedness of the critical surface quasi-geostrophic equation on the two-dimensional sphere, building on earlier work of the authors. The proof relies on an improvement of the previously known pointwise inequality for fractional Laplacians, as in the work of Constantin and Vicol for the Euclidean setting.
0
0
1
0
0
0
On the Limitation of Local Intrinsic Dimensionality for Characterizing the Subspaces of Adversarial Examples
Understanding and characterizing the subspaces of adversarial examples aid in studying the robustness of deep neural networks (DNNs) to adversarial perturbations. Very recently, Ma et al. (ICLR 2018) proposed to use local intrinsic dimensionality (LID) in layer-wise hidden representations of DNNs to study adversarial subspaces. It was demonstrated that LID can be used to characterize the adversarial subspaces associated with different attack methods, e.g., the Carlini and Wagner's (C&W) attack and the fast gradient sign attack. In this paper, we use MNIST and CIFAR-10 to conduct two new sets of experiments that are absent in existing LID analysis and report the limitation of LID in characterizing the corresponding adversarial subspaces, which are (i) oblivious attacks and LID analysis using adversarial examples with different confidence levels; and (ii) black-box transfer attacks. For (i), we find that the performance of LID is very sensitive to the confidence parameter deployed by an attack, and the LID learned from ensembles of adversarial examples with varying confidence levels surprisingly gives poor performance. For (ii), we find that when adversarial examples are crafted from another DNN model, LID is ineffective in characterizing their adversarial subspaces. These two findings together suggest the limited capability of LID in characterizing the subspaces of adversarial examples.
0
0
0
1
0
0
A properly embedded holomorphic disc in the ball with finite area and dense boundary curve
In this paper we construct a properly embedded holomorphic disc in the unit ball $\mathbb{B}^2$ of $\mathbb{C}^2$ having a surprising combination of properties: on the one hand, it has finite area and hence is the zero set of a bounded holomorphic function on $\mathbb{B}^2$; on the other hand, its boundary curve is everywhere dense in the sphere $b\mathbb{B}^2$.
0
0
1
0
0
0
Kähler differential algebras for 0-dimensional schemes
Given a 0-dimensional scheme in a projective space $\mathbb{P}^n$ over a field $K$, we study the Kähler differential algebra $\Omega_{R/K}$ of its homogeneous coordinate ring $R$. Using explicit presentations of the modules $\Omega^m_{R/K}$ of Kähler differential $m$-forms, we determine many values of their Hilbert functions explicitly and bound their Hilbert polynomials and regularity indices. Detailed results are obtained for subschemes of $\mathbb{P}^1$, fat point schemes, and subschemes of $\mathbb{P}^2$ supported on a conic.
0
0
1
0
0
0
Benchmarks and reliable DFT results for spin-crossover complexes
DFT is used throughout nanoscience, especially when modeling spin-dependent properties that are important in spintronics. But standard quantum chemical methods (both CCSD(T) and self-consistent semilocal density functional calculations) fail badly for the spin adiabatic energy difference in Fe(II) spin-crossover complexes. We show that all-electron fixed-node diffusion Monte Carlo can be converged at significant computational cost, and that the B3LYP single determinant has sufficiently accurate nodes, providing benchmarks for these systems. We also find that density-corrected DFT, using Hartree-Fock densities (HF-DFT), greatly improves accuracy and reduces dependence on approximations for these calculations. The small gap in the self-consistent DFT calculations for the high-spin state is consistent with this. For the spin adiabatic energy differences in these complexes, HF-DFT is both accurate and reliable, and we make a strong prediction for the Fe-Porphyrin complex. The "parameter dilemma" of needing different amounts of mixing for different properties is eliminated by HF-DFT.
0
1
0
0
0
0
The Road to Success: Assessing the Fate of Linguistic Innovations in Online Communities
We investigate the birth and diffusion of lexical innovations in a large dataset of online social communities. We build on sociolinguistic theories and focus on the relation between the spread of a novel term and the social role of the individuals who use it, uncovering characteristics of innovators and adopters. Finally, we perform a prediction task that allows us to anticipate whether an innovation will successfully spread within a community.
1
0
0
0
0
0