Dataset schema (one record per paper: a title, an abstract, then six 0/1 topic labels):

  title                  string  (length 7 to 239)
  abstract               string  (length 7 to 2.76k)
  cs                     int64   (0 or 1)
  phy                    int64   (0 or 1)
  math                   int64   (0 or 1)
  stat                   int64   (0 or 1)
  quantitative biology   int64   (0 or 1)
  quantitative finance   int64   (0 or 1)
Learning Filter Functions in Regularisers by Minimising Quotients
Learning approaches have recently become very popular in the field of inverse problems. A large variety of methods has been established in recent years, ranging from bi-level learning to high-dimensional machine learning techniques. Most learning approaches, however, only aim at fitting parametrised models to favourable training data whilst ignoring misfit training data completely. In this paper, we follow up on the idea of learning parametrised regularisation functions by quotient minimisation as established in [3]. We extend the model therein to include higher-dimensional filter functions to be learned and allow for fit and misfit training data consisting of multiple functions. We first present results that resemble the behaviour of well-established derivative-based sparse regularisers like total variation or higher-order total variation in one dimension. Our second and main contribution is the introduction of novel families of non-derivative-based regularisers. This is accomplished by learning favourable scales and geometric properties while at the same time avoiding unfavourable ones.
0
0
1
0
0
0
Riemann-Theta Boltzmann Machine
A general Boltzmann machine with continuous visible and discrete integer valued hidden states is introduced. Under mild assumptions about the connection matrices, the probability density function of the visible units can be solved for analytically, yielding a novel parametric density function involving a ratio of Riemann-Theta functions. The conditional expectation of a hidden state for given visible states can also be calculated analytically, yielding a derivative of the logarithmic Riemann-Theta function. The conditional expectation can be used as activation function in a feedforward neural network, thereby increasing the modelling capacity of the network. Both the Boltzmann machine and the derived feedforward neural network can be successfully trained via standard gradient- and non-gradient-based optimization techniques.
1
0
0
1
0
0
A Joint Quantile and Expected Shortfall Regression Framework
We introduce a novel regression framework which simultaneously models the quantile and the Expected Shortfall (ES) of a response variable given a set of covariates. This regression is based on a strictly consistent loss function for the pair quantile and ES, which allows for M- and Z-estimation of the joint regression parameters. We show consistency and asymptotic normality for both estimators under weak regularity conditions. The underlying loss function depends on two specification functions, whose choice affects the properties of the resulting estimators. We find that the Z-estimator is numerically unstable and thus, we rely on M-estimation of the model parameters. Extensive simulations verify the asymptotic properties and analyze the small sample behavior of the M-estimator for different specification functions. This joint regression framework allows for various applications including estimating, forecasting, and backtesting ES, which is particularly relevant in light of the recent introduction of ES into the Basel Accords.
0
0
1
1
0
0
Poincaré Embeddings for Learning Hierarchical Representations
Representation learning has become an invaluable approach for learning from symbolic data such as text and graphs. However, while complex symbolic datasets often exhibit a latent hierarchical structure, state-of-the-art methods typically learn embeddings in Euclidean vector spaces, which do not account for this property. For this purpose, we introduce a new approach for learning hierarchical representations of symbolic data by embedding them into hyperbolic space -- or more precisely into an n-dimensional Poincaré ball. Due to the underlying hyperbolic geometry, this allows us to learn parsimonious representations of symbolic data by simultaneously capturing hierarchy and similarity. We introduce an efficient algorithm to learn the embeddings based on Riemannian optimization and show experimentally that Poincaré embeddings outperform Euclidean embeddings significantly on data with latent hierarchies, both in terms of representation capacity and in terms of generalization ability.
1
0
0
1
0
0
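The Poincaré-ball distance underlying the embeddings in the abstract above has a simple closed form. A minimal sketch (function and variable names are ours, not from the paper):

```python
import math

def poincare_distance(u, v):
    """Hyperbolic distance between points u, v strictly inside the unit ball:
    d(u, v) = arcosh(1 + 2||u - v||^2 / ((1 - ||u||^2)(1 - ||v||^2)))."""
    diff2 = sum((a - b) ** 2 for a, b in zip(u, v))
    nu2 = sum(a * a for a in u)
    nv2 = sum(b * b for b in v)
    return math.acosh(1 + 2 * diff2 / ((1 - nu2) * (1 - nv2)))

# The same Euclidean gap is "longer" near the boundary, which is what lets
# the ball encode hierarchy: general concepts near the centre, specific
# ones near the rim.
near_centre = poincare_distance([0.0, 0.0], [0.1, 0.0])
near_rim = poincare_distance([0.89, 0.0], [0.99, 0.0])
```

The actual method trains these coordinates with Riemannian optimization; this snippet only illustrates the geometry that makes the representation parsimonious.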
Randomly cross-linked polymer models
Polymer models are used to describe chromatin, which can be folded at different spatial scales by binding molecules. By folding, chromatin generates loops of various sizes. We present here a randomly cross-linked (RCL) polymer model, where monomer pairs are connected randomly. We obtain asymptotic formulas for the steady-state variance, encounter probability, the radius of gyration, instantaneous displacement and the mean first encounter time between any two monomers. The analytical results are confirmed by Brownian simulations. Finally, the present results can be used to extract the minimum number of cross-links in a chromatin region from {conformation capture} data.
0
1
0
0
0
0
Multi-Stage Complex Contagions in Random Multiplex Networks
Complex contagion models have been developed to understand a wide range of social phenomena such as adoption of cultural fads, the diffusion of belief, norms, and innovations in social networks, and the rise of collective action to join a riot. Most existing works focus on contagions where individuals' states are represented by binary variables, and propagation takes place over a single isolated network. However, characterization of an individual's standing on a given matter as a binary state might be overly simplistic as most of our opinions, feelings, and perceptions vary over more than two states. Also, most real-world contagions take place over multiple networks (e.g., Twitter and Facebook) or involve multiplex networks where individuals engage in different types of relationships (e.g., acquaintance, co-worker, family, etc.). To this end, this paper studies multi-stage complex contagions that take place over multi-layer or multiplex networks. Under a linear threshold based contagion model, we give analytic results for the probability and expected size of global cascades, i.e., cases where a randomly chosen node can initiate a propagation that eventually reaches a positive fraction of the whole population. Analytic results are also confirmed and supported by an extensive numerical study. In particular, we demonstrate how the dynamics of complex contagions is affected by the extra weight exerted by hyper-active nodes and by the structural properties of the networks involved. Among other things, we reveal an interesting connection between the assortativity of a network and the impact of hyper-active nodes on the cascade size.
1
0
0
0
0
0
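The paper above analyzes a linear-threshold contagion on multiplex networks; the classic binary, single-layer special case is easy to simulate and shows the cascade/no-cascade dichotomy. A toy sketch (names and numbers are ours):

```python
def threshold_cascade(neighbors, thresholds, seeds):
    """Synchronous linear-threshold cascade (binary, single-layer case).
    A node activates once the active fraction of its neighbours reaches
    its threshold; activation is permanent."""
    active = set(seeds)
    changed = True
    while changed:
        changed = False
        for node, nbrs in neighbors.items():
            if node in active or not nbrs:
                continue
            if sum(n in active for n in nbrs) / len(nbrs) >= thresholds[node]:
                active.add(node)
                changed = True
    return active

# On a 4-cycle, thresholds of 1/2 let a single seed cascade globally,
# while thresholds of 3/4 block any spread.
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
spread = threshold_cascade(ring, {n: 0.5 for n in ring}, {0})
blocked = threshold_cascade(ring, {n: 0.75 for n in ring}, {0})
```

The multi-stage, multiplex setting of the paper generalizes exactly this activation rule to more than two states and to several link types.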
Secrecy and Robustness for Active Attack in Secure Network Coding and its Application to Network Quantum Key Distribution
In network coding, we discuss the effect of sequential error injection on information leakage. We show that there is no improvement when the operations in the network are linear. However, when the network contains non-linear operations, we give a counterexample in which error injection increases Eve's obtained information. Furthermore, we discuss the asymptotic rate in a linear network under the secrecy and robustness conditions as well as under the secrecy condition alone. Finally, we apply our results to network quantum key distribution, which clarifies the type of network that enables us to realize secure long distance communication via short distance quantum key distribution.
1
0
0
0
0
0
SLAMBooster: An Application-aware Controller for Approximation in SLAM
Simultaneous Localization and Mapping (SLAM) is the problem of constructing a map of an agent's environment while localizing or tracking the mobile agent's position and orientation within the map. Algorithms for SLAM have high computational requirements, which has hindered their use on embedded devices. Approximation can be used to reduce the time and energy requirements of SLAM implementations as long as the approximations do not prevent the agent from navigating correctly through the environment. Previous studies of approximation in SLAM have assumed that the entire trajectory of the agent is known before the agent starts to move, and they have focused on offline controllers that use features of the trajectory to set approximation knobs at the start of the trajectory. In practice, the trajectory is not usually known ahead of time, and allowing knob settings to change dynamically opens up more opportunities for reducing computation time and energy. We describe SLAMBooster, an application-aware online control system for SLAM that adaptively controls approximation knobs during the motion of the agent. SLAMBooster is based on a control technique called hierarchical proportional control, but our experiments showed that this application-agnostic control led to an unacceptable reduction in the quality of localization. To address this problem, SLAMBooster exploits domain knowledge: it uses features extracted from input frames and from the estimated motion of the agent in its algorithm for controlling approximation. We implemented SLAMBooster in the open-source SLAMBench framework. Our experiments show that SLAMBooster reduces the computation time and energy consumption by about half on average on an embedded platform, while maintaining the accuracy of the localization within reasonable bounds. These improvements make it feasible to deploy SLAM on a wider range of devices.
1
0
0
0
0
0
Modeling sepsis progression using hidden Markov models
Characterizing a patient's progression through stages of sepsis is critical for enabling risk stratification and adaptive, personalized treatment. However, commonly used sepsis diagnostic criteria fail to account for significant underlying heterogeneity, both between patients as well as over time in a single patient. We introduce a hidden Markov model of sepsis progression that explicitly accounts for patient heterogeneity. Benchmarked against two sepsis diagnostic criteria, the model provides a useful tool to uncover a patient's latent sepsis trajectory and to identify high-risk patients in whom more aggressive therapy may be indicated.
0
0
0
1
1
0
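The hidden Markov model in the abstract above rests on the standard forward algorithm for computing the likelihood of an observation sequence. A minimal sketch with made-up parameters (two hypothetical latent "stages" and two symptom levels; none of these numbers come from the paper):

```python
def forward_likelihood(init, trans, emit, obs):
    """Forward algorithm for a discrete HMM: returns P(obs).
    init[s] = P(state s at t=0); trans[s][t] = P(next=t | s);
    emit[s][o] = P(observe o | state s)."""
    alpha = [init[s] * emit[s][obs[0]] for s in range(len(init))]
    for o in obs[1:]:
        alpha = [sum(alpha[s] * trans[s][t] for s in range(len(init))) * emit[t][o]
                 for t in range(len(init))]
    return sum(alpha)

# Illustrative two-state model: state 0 mostly emits symptom level 0,
# state 1 mostly emits symptom level 1.
init = [0.6, 0.4]
trans = [[0.7, 0.3], [0.4, 0.6]]
emit = [[0.9, 0.1], [0.2, 0.8]]
likelihood = forward_likelihood(init, trans, emit, [0, 1, 1])
```

Fitting such a model to patient time series (typically via Baum-Welch) is what lets the latent state sequence serve as an inferred sepsis trajectory.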
Optimized Bacteria are Environmental Prediction Engines
Experimentalists have observed phenotypic variability in isogenic bacteria populations. We explore the hypothesis that in fluctuating environments this variability is tuned to maximize a bacterium's expected log growth rate, potentially aided by epigenetic markers that store information about past environments. We show that, in a complex, memoryful environment, the maximal expected log growth rate is linear in the instantaneous predictive information---the mutual information between a bacterium's epigenetic markers and future environmental states. Hence, under resource constraints, optimal epigenetic markers are causal states---the minimal sufficient statistics for prediction. This is the minimal amount of information about the past needed to predict the future as well as possible. We suggest new theoretical investigations into and new experiments on bacteria phenotypic bet-hedging in fluctuating complex environments.
0
0
0
0
1
0
Photo-realistic Facial Texture Transfer
Style transfer methods have achieved significant success in recent years with the use of convolutional neural networks. However, many of these methods concentrate on artistic style transfer with few constraints on the output image appearance. We address the challenging problem of transferring face texture from a style face image to a content face image in a photorealistic manner without changing the identity of the original content image. Our framework for face texture transfer (FaceTex) augments the prior work of MRF-CNN with a novel facial semantic regularization that incorporates a face prior regularization smoothly suppressing the changes around facial meso-structures (e.g. eyes, nose and mouth) and a facial structure loss function which implicitly preserves the facial structure so that face texture can be transferred without changing the original identity. We demonstrate results on face images and compare our approach with recent state-of-the-art methods. Our results demonstrate superior texture transfer because of the ability to maintain the identity of the original face image.
1
0
0
0
0
0
Segmentation of skin lesions based on fuzzy classification of pixels and histogram thresholding
This paper proposes an innovative method, developed by the authors, for the segmentation of skin lesions in dermoscopy images, based on fuzzy classification of pixels and histogram thresholding.
1
0
0
1
0
0
Deep Reinforcement Learning for Vision-Based Robotic Grasping: A Simulated Comparative Evaluation of Off-Policy Methods
In this paper, we explore deep reinforcement learning algorithms for vision-based robotic grasping. Model-free deep reinforcement learning (RL) has been successfully applied to a range of challenging environments, but the proliferation of algorithms makes it difficult to discern which particular approach would be best suited for a rich, diverse task like grasping. To answer this question, we propose a simulated benchmark for robotic grasping that emphasizes off-policy learning and generalization to unseen objects. Off-policy learning enables utilization of grasping data over a wide variety of objects, and diversity is important to enable the method to generalize to new objects that were not seen during training. We evaluate the benchmark tasks against a variety of Q-function estimation methods, a method previously proposed for robotic grasping with deep neural network models, and a novel approach based on a combination of Monte Carlo return estimation and an off-policy correction. Our results indicate that several simple methods provide a surprisingly strong competitor to popular algorithms such as double Q-learning, and our analysis of stability sheds light on the relative tradeoffs between the algorithms.
1
0
0
1
0
0
Incremental Transductive Learning Approaches to Schistosomiasis Vector Classification
Collecting epidemic disease data for analysis is a labour-intensive, time-consuming and expensive process, resulting in sparse sample data from which prediction models must be developed. To address this sparse data issue, we present novel Incremental Transductive methods to circumvent the data collection process by applying previously acquired data to provide consistent, confidence-based labelling alternatives to field survey research. We investigated various reasoning approaches for semi-supervised machine learning, including Bayesian models for labelling data. The results show that using the proposed methods, we can label instances of data with a class of vector density at a high level of confidence. By applying the Liberal and Strict Training Approaches, we provide a labelling and classification alternative to standalone algorithms. The methods in this paper are components in the process of reducing the proliferation of the Schistosomiasis disease and its effects.
1
0
0
0
0
0
Millimeter-scale layered MoSe2 grown on sapphire and evidence for negative magnetoresistance
The molecular beam epitaxy technique has been used to deposit a single layer and a bilayer of MoSe2 on sapphire. Extensive characterizations including in-situ and ex-situ measurements show that the layered MoSe2 grows in a scalable manner on the substrate and reveals characteristics of a stoichiometric 2H-phase. The layered MoSe2 exhibits polycrystalline features with domains separated by defects and boundaries. Temperature and magnetic field dependent resistivity measurements unveil a carrier hopping character described within a two-dimensional variable range hopping mechanism. Moreover, a negative magnetoresistance was observed, stressing a fascinating feature of the charge transport under the application of a magnetic field in the layered MoSe2 system. This negative magnetoresistance observed at millimeter scale is similar to that observed recently at room temperature in WS2 flakes at a micrometer scale [Zhang et al., Appl. Phys. Lett. 108, 153114 (2016)]. This scalability highlights the fact that the underlying physical mechanism is intrinsic to these two-dimensional materials and occurs at very short scale.
0
1
0
0
0
0
Bootstrapping incremental dialogue systems from minimal data: the generalisation power of dialogue grammars
We investigate an end-to-end method for automatically inducing task-based dialogue systems from small amounts of unannotated dialogue data. It combines an incremental semantic grammar - Dynamic Syntax and Type Theory with Records (DS-TTR) - with Reinforcement Learning (RL), where language generation and dialogue management are a joint decision problem. The systems thus produced are incremental: dialogues are processed word-by-word, shown previously to be essential in supporting natural, spontaneous dialogue. We hypothesised that the rich linguistic knowledge within the grammar should enable a combinatorially large number of dialogue variations to be processed, even when trained on very few dialogues. Our experiments show that our model can process 74% of the Facebook AI bAbI dataset even when trained on only 0.13% of the data (5 dialogues). It can in addition process 65% of bAbI+, a corpus we created by systematically adding incremental dialogue phenomena such as restarts and self-corrections to bAbI. We compare our model with a state-of-the-art retrieval model, MemN2N. We find that, in terms of semantic accuracy, MemN2N shows very poor robustness to the bAbI+ transformations even when trained on the full bAbI dataset.
1
0
0
0
0
0
Learning Deep Latent Spaces for Multi-Label Classification
Multi-label classification is a practical yet challenging task in machine learning related fields, since it requires the prediction of more than one label category for each input instance. We propose a novel deep neural networks (DNN) based model, Canonical Correlated AutoEncoder (C2AE), for solving this task. Aiming at better relating feature and label domain data for improved classification, we uniquely perform joint feature and label embedding by deriving a deep latent space, followed by the introduction of label-correlation sensitive loss function for recovering the predicted label outputs. Our C2AE is achieved by integrating the DNN architectures of canonical correlation analysis and autoencoder, which allows end-to-end learning and prediction with the ability to exploit label dependency. Moreover, our C2AE can be easily extended to address the learning problem with missing labels. Our experiments on multiple datasets with different scales confirm the effectiveness and robustness of our proposed method, which is shown to perform favorably against state-of-the-art methods for multi-label classification.
1
0
0
0
0
0
Encoding Multi-Resolution Brain Networks Using Unsupervised Deep Learning
The main goal of this study is to extract a set of brain networks in multiple time-resolutions to analyze the connectivity patterns among the anatomic regions for a given cognitive task. We suggest a deep architecture which learns the natural groupings of the connectivity patterns of the human brain in multiple time-resolutions. The suggested architecture is tested on the task data set of the Human Connectome Project (HCP), where we extract multi-resolution networks, each of which corresponds to a cognitive task. At the first level of this architecture, we decompose the fMRI signal into multiple sub-bands using wavelet decompositions. At the second level, for each sub-band, we estimate a brain network extracted from short time windows of the fMRI signal. At the third level, we feed the adjacency matrices of each mesh network at each time-resolution into an unsupervised deep learning algorithm, namely, a Stacked Denoising Auto-Encoder (SDAE). The outputs of the SDAE provide a compact connectivity representation for each time window at each sub-band of the fMRI signal. We concatenate the learned representations of all sub-bands at each window and cluster them by a hierarchical algorithm to find the natural groupings among the windows. We observe that each cluster represents a cognitive task with a performance of 93% Rand Index and 71% Adjusted Rand Index. We visualize the mean values and the precisions of the networks at each component of the cluster mixture. The mean brain networks at cluster centers show the variations among cognitive tasks, and the precision of each cluster shows the within-cluster variability of networks across the subjects.
0
0
0
1
0
0
Improper Filter Reduction
Combinatorial filters have been the subject of increasing interest from the robotics community in recent years. This paper considers automatic reduction of combinatorial filters to a given size, even if that reduction necessitates changes to the filter's behavior. We introduce an algorithmic problem called improper filter reduction, in which the input is a combinatorial filter F along with an integer k representing the target size. The output is another combinatorial filter F' with at most k states, such that the difference in behavior between F and F' is minimal. We present two metrics for measuring the distance between pairs of filters, describe dynamic programming algorithms for computing these distances, and show that improper filter reduction is NP-hard under these metrics. We then describe two heuristic algorithms for improper filter reduction, one greedy sequential approach, and one randomized global approach based on prior work on weighted improper graph coloring. We have implemented these algorithms and analyze the results of three sets of experiments.
1
0
0
0
0
0
Collective Dynamics of Self-propelled Semiflexible Filaments
The collective behavior of active semiflexible filaments is studied with a model of tangentially driven self-propelled worm-like chains. The combination of excluded-volume interactions and self-propulsion leads to several distinct dynamic phases as a function of bending rigidity, activity, and aspect ratio of individual filaments. We consider first the case of intermediate filament density. For high-aspect-ratio filaments, we identify a transition with increasing propulsion from a state of free-swimming filaments to a state of spiraled filaments with nearly frozen translational motion. For lower aspect ratios, this gas-of-spirals phase is suppressed with growing density due to filament collisions; instead, filaments form clusters similar to self-propelled rods, as activity increases. Finite bending rigidity strongly affects the dynamics and phase behavior. Flexible filaments form small and transient clusters, while stiffer filaments organize into giant clusters, much like self-propelled rods, but with a reentrant phase behavior from giant to smaller clusters as activity becomes large enough to bend the filaments. For high filament densities, we identify a nearly frozen jamming state at low activities, a nematic laning state at intermediate activities, and an active-turbulence state at high activities. The latter state is characterized by a power-law decay of the energy spectrum as a function of wave number. The resulting phase diagrams encapsulate tunable non-equilibrium steady states that can be used in the organization of living matter.
0
0
0
0
1
0
Two sources of poor coverage of confidence intervals after model selection
We compare the following two sources of poor coverage of post-model-selection confidence intervals: (i) the preliminary data-based model selection sometimes chooses the wrong model, and (ii) the data used to choose the model are re-used for the construction of the confidence interval.
0
0
1
1
0
0
On the ERM Principle with Networked Data
Networked data, in which every training example involves two objects and may share some common objects with others, is used in many machine learning tasks such as learning to rank and link prediction. A challenge of learning from networked examples is that target values are not known for some pairs of objects. In this case, neither the classical i.i.d. assumption nor techniques based on complete U-statistics can be used. Most existing theoretical results of this problem only deal with the classical empirical risk minimization (ERM) principle that always weights every example equally, but this strategy leads to unsatisfactory bounds. We consider general weighted ERM and show new universal risk bounds for this problem. These new bounds naturally define an optimization problem which leads to appropriate weights for networked examples. Though this optimization problem is not convex in general, we devise a new fully polynomial-time approximation scheme (FPTAS) to solve it.
1
0
0
1
0
0
Front interaction induces excitable behavior
Spatially extended systems can support local transient excitations in which just a part of the system is excited. The mechanisms reported so far are local excitability and excitation of a localized structure. Here we introduce an alternative mechanism based on the coexistence of two homogeneous stable states and spatial coupling. We show the existence of a threshold for perturbations of the homogeneous state. Sub-threshold perturbations decay exponentially. Super-threshold perturbations induce the emergence of a long-lived structure formed by two back-to-back fronts that join the two homogeneous states. While in typical excitability the trajectory follows the remnants of a limit cycle, here reinjection is provided by front interaction, such that fronts slowly approach each other until eventually annihilating. This front-mediated mechanism shows that extended systems with no oscillatory regimes can display excitability.
0
1
1
0
0
0
Characterization of optimal carbon nanotubes under stretching and validation of the Cauchy-Born rule
Carbon nanotubes are modeled as point configurations and investigated by minimizing configurational energies including two- and three-body interactions. Optimal configurations are identified with local minima and their fine geometry is fully characterized in terms of lower-dimensional problems. Under moderate tension, we prove the existence of periodic local minimizers, which indeed validates the so-called Cauchy-Born rule in this setting.
0
1
1
0
0
0
The Closer the Better: Similarity of Publication Pairs at Different Co-Citation Levels
We investigate the similarities of pairs of articles which are co-cited at the different co-citation levels of the journal, article, section, paragraph, sentence and bracket. Our results indicate that textual similarity, intellectual overlap (shared references), author overlap (shared authors), proximity in publication time all rise monotonically as the co-citation level gets lower (from journal to bracket). While the main gain in similarity happens when moving from journal to article co-citation, all level changes entail an increase in similarity, especially section to paragraph and paragraph to sentence/bracket levels. We compare results from four journals over the years 2010-2015: Cell, the European Journal of Operational Research, Physics Letters B and Research Policy, with consistent general outcomes and some interesting differences. Our findings motivate the use of granular co-citation information as defined by meaningful units of text, with implications for, among others, the elaboration of maps of science and the retrieval of scholarly literature.
1
0
0
0
0
0
MP2-F12 Basis Set Convergence for the S66 Noncovalent Interactions Benchmark: Transferability of the Complementary Auxiliary Basis Set (CABS)
Complementary auxiliary basis sets for F12 explicitly correlated calculations appear to be more transferable between orbital basis sets than has been generally assumed. We also find that aVnZ-F12 basis sets, originally developed with anionic systems in mind, appear to be superior for noncovalent interactions as well, and propose a suitable CABS sequence for them.
0
1
0
0
0
0
Quantized Minimum Error Entropy Criterion
Compared with traditional learning criteria, such as mean square error (MSE), the minimum error entropy (MEE) criterion is superior in nonlinear and non-Gaussian signal processing and machine learning. The argument of the logarithm in Renyi's entropy estimator, called information potential (IP), is a popular MEE cost in information theoretic learning (ITL). The computational complexity of IP is however quadratic in terms of sample number due to double summation. This creates computational bottlenecks especially for large-scale datasets. To address this problem, in this work we propose an efficient quantization approach to reduce the computational burden of IP, which decreases the complexity from O(N^2) to O(MN) with M << N. The new learning criterion is called the quantized MEE (QMEE). Some basic properties of QMEE are presented. Illustrative examples are provided to verify the excellent performance of QMEE.
1
0
0
1
0
0
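The complexity reduction claimed above can be sketched concretely: the exact information potential sums a kernel over all N^2 sample pairs, while a quantized version snaps samples onto M codewords and sums kernels between the N samples and the M weighted codewords. A toy illustration in the spirit of QMEE (grid quantizer, function names and parameters are ours; the paper's quantizer may differ):

```python
import math

def gaussian_kernel(x, sigma=1.0):
    return math.exp(-x * x / (2 * sigma * sigma))

def information_potential(errors, sigma=1.0):
    """Exact IP estimator: O(N^2) double sum over sample pairs."""
    n = len(errors)
    return sum(gaussian_kernel(a - b, sigma)
               for a in errors for b in errors) / (n * n)

def quantized_information_potential(errors, step=0.2, sigma=1.0):
    """Quantized IP: snap samples to M grid codewords (M << N), then sum
    kernels between the N samples and the M weighted codewords: O(MN)."""
    counts = {}
    for e in errors:
        c = round(e / step) * step
        counts[c] = counts.get(c, 0) + 1
    n = len(errors)
    return sum(cnt * gaussian_kernel(e - c, sigma)
               for e in errors for c, cnt in counts.items()) / (n * n)

errors = [math.sin(0.7 * i) for i in range(40)]
exact = information_potential(errors)
approx = quantized_information_potential(errors)
```

With a smooth kernel, each sample moves by at most step/2 under quantization, so the approximation error is controlled while the double sum shrinks to M terms per sample.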
Traffic Flow Forecasting Using a Spatio-Temporal Bayesian Network Predictor
A novel predictor for traffic flow forecasting, namely a spatio-temporal Bayesian network predictor, is proposed. Unlike existing methods, our approach incorporates all the spatial and temporal information available in a transportation network to carry out traffic flow forecasting for the current site. The Pearson correlation coefficient is adopted to rank the input variables (traffic flows) for prediction, and the best-first strategy is employed to select a subset as the cause nodes of a Bayesian network. Given the derived cause nodes and the corresponding effect node in the spatio-temporal Bayesian network, a Gaussian Mixture Model is applied to describe the statistical relationship between the input and output. Finally, traffic flow forecasting is performed under the criterion of Minimum Mean Square Error (M.M.S.E.). Experimental results with the urban vehicular flow data of Beijing demonstrate the effectiveness of our presented spatio-temporal Bayesian network predictor.
1
0
0
1
0
0
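The first step described above, ranking candidate input flows by Pearson correlation with the flow to be predicted, is simple to sketch. A minimal illustration (the series and names are invented for the example; the paper's best-first subset selection and GMM stage are not shown):

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / math.sqrt(vx * vy)

def rank_cause_candidates(candidates, target):
    """Rank candidate input flows by |correlation| with the target flow;
    a best-first step would then keep a subset as Bayesian-network causes."""
    return sorted(candidates,
                  key=lambda name: -abs(pearson(candidates[name], target)))

target = [1.0, 2.0, 3.0, 4.0, 5.0]
flows = {
    "upstream": [2.0, 4.0, 6.0, 8.0, 10.0],   # perfectly correlated
    "crosstown": [3.0, 1.0, 4.0, 1.0, 5.0],   # weakly correlated
    "downstream": [5.0, 4.0, 3.0, 2.0, 1.0],  # perfectly anti-correlated
}
ranking = rank_cause_candidates(flows, target)
```

Using the absolute value matters: a strongly anti-correlated neighbouring flow is just as informative a cause node as a positively correlated one.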
AlteregoNets: a way to human augmentation
A person-dependent network, called an AlterEgo net, is proposed for development. The networks are created per person. A network receives an object description as input and outputs a simulation of the person's internal representation of the object. The network generates a textual stream resembling the narrative stream of consciousness, depicting the multitudinous thoughts and feelings related to a perceived object. In this way, the object is described not by a 'static' set of its properties, like a dictionary entry, but by the stream of words and word combinations referring to the object. The network simulates a person's dialogue with a representation of the object. It is based on an introduced algorithmic scheme in which perception is modeled by two interacting iterative cycles, reminiscent respectively of the forward and backward propagation executed when training convolutional neural networks. The 'forward' iterations generate a stream representing the 'internal world' of a human. The 'backward' iterations generate a stream representing an internal representation of the object. People perceive the world differently. Tuning AlterEgo nets to a specific person or group of persons will allow simulation of their thoughts and feelings. These nets are thereby potentially a new human augmentation technology for various applications.
1
0
0
0
0
0
Modalities in homotopy type theory
Univalent homotopy type theory (HoTT) may be seen as a language for the category of $\infty$-groupoids. It is being developed as a new foundation for mathematics and as an internal language for (elementary) higher toposes. We develop the theory of factorization systems, reflective subuniverses, and modalities in homotopy type theory, including their construction using a "localization" higher inductive type. This produces in particular the ($n$-connected, $n$-truncated) factorization system as well as internal presentations of subtoposes, through lex modalities. We also develop the semantics of these constructions.
1
0
1
0
0
0
Exponentially Slow Heating in Short and Long-range Interacting Floquet Systems
We analyze the dynamics of periodically-driven (Floquet) Hamiltonians with short- and long-range interactions, finding clear evidence for a thermalization time, $\tau^*$, that increases exponentially with the drive frequency. We observe this behavior, both in systems with short-ranged interactions, where our results are consistent with rigorous bounds, and in systems with long-range interactions, where such bounds do not exist at present. Using a combination of heating and entanglement dynamics, we explicitly extract the effective energy scale controlling the rate of thermalization. Finally, we demonstrate that for times shorter than $\tau^*$, the dynamics of the system is well-approximated by evolution under a time-independent Hamiltonian $D_{\mathrm{eff}}$, for both short- and long-range interacting systems.
0
1
0
0
0
0
High moments of the Estermann function
For $a/q\in\mathbb{Q}$ the Estermann function is defined as $D(s,a/q):=\sum_{n\geq1}d(n)n^{-s}\operatorname{e}(n\frac aq)$ if $\Re(s)>1$ and by meromorphic continuation otherwise. For $q$ prime, we compute the moments of $D(s,a/q)$ at the central point $s=1/2$, when averaging over $1\leq a<q$. As a consequence we deduce the asymptotic for the iterated moment of Dirichlet $L$-functions $\sum_{\chi_1,\dots,\chi_k\mod q}|L(\frac12,\chi_1)|^2\cdots |L(\frac12,\chi_k)|^2|L(\frac12,\chi_1\cdots \chi_k)|^2$, obtaining a power saving error term. Also, we compute the moments of certain functions defined in terms of continued fractions. For example, writing $f_{\pm}(a/q):=\sum_{j=0}^r (\pm1)^jb_j$ where $[0;b_0,\dots,b_r]$ is the continued fraction expansion of $a/q$ we prove that for $k\geq2$ and $q$ primes one has $\sum_{a=1}^{q-1}f_{\pm}(a/q)^k\sim2 \frac{\zeta(k)^2}{\zeta(2k)} q^k$ as $q\to\infty$.
0
0
1
0
0
0
On Game-Theoretic Risk Management (Part Three) - Modeling and Applications
The game-theoretic risk management framework put forth in the precursor reports "Towards a Theory of Games with Payoffs that are Probability-Distributions" (arXiv:1506.07368 [q-fin.EC]) and "Algorithms to Compute Nash-Equilibria in Games with Distributions as Payoffs" (arXiv:1511.08591v1 [q-fin.EC]) is herein concluded by discussing how to integrate the previously developed theory into risk management processes. To this end, we discuss how loss models (primarily but not exclusively non-parametric) can be constructed from data. Furthermore, hints are given on how a meaningful game-theoretic model can be set up, and how it can be used in various stages of the ISO 27000 risk management process. Examples related to advanced persistent threats and social engineering are given. We conclude with a discussion on the meaning and practical use of (mixed) Nash equilibria for risk management.
0
0
1
1
0
0
PKS 1954-388: RadioAstron Detection on 80,000 km Baselines and Multiwavelength Observations
We present results from a multiwavelength study of the blazar PKS 1954-388 at radio, UV, X-ray, and gamma-ray energies. A RadioAstron observation at 1.66 GHz in June 2012 resulted in the detection of interferometric fringes on baselines of 6.2 Earth-diameters. This suggests a source frame brightness temperature of greater than 2x10^12 K, well in excess of both equipartition and inverse Compton limits and implying the existence of Doppler boosting in the core. An 8.4 GHz TANAMI VLBI image, made less than a month after the RadioAstron observations, is consistent with a previously reported superluminal motion for a jet component. Flux density monitoring with the Australia Telescope Compact Array confirms previous evidence for long-term variability that increases with observing frequency. A search for more rapid variability revealed no evidence for significant day-scale flux density variation. The ATCA light-curve reveals a strong radio flare beginning in late 2013 which peaks higher, and earlier, at higher frequencies. Comparison with the Fermi gamma-ray light-curve indicates this followed ~9 months after the start of a prolonged gamma-ray high-state -- a radio lag comparable to that seen in other blazars. The multiwavelength data are combined to derive a Spectral Energy Distribution, which is fitted by a one-zone synchrotron-self-Compton (SSC) model with the addition of external Compton (EC) emission.
0
1
0
0
0
0
Smart Assessment of and Tutoring for Computational Thinking MOOC Assignments using MindReader
One of the major hurdles toward automatic semantic understanding of computer programs is the lack of knowledge about what constitutes functional equivalence of code segments. We postulate that a sound knowledge base can be used to deductively understand code segments in a hierarchical fashion by first de-constructing a code segment and then reconstructing it from elementary knowledge and equivalence rules of elementary code segments. The approach can also be engineered to produce computable programs from conceptual and abstract algorithms as an inverse function. In this paper, we introduce the core idea behind the MindReader online assessment system, which is able to understand a wide variety of elementary algorithms that students learn in their entry-level programming classes in languages such as Java, C++ and Python. The MindReader system is able to assess student assignments and guide students in developing correct and better code in real time without human assistance.
1
0
0
0
0
0
EasyInterface: A toolkit for rapid development of GUIs for research prototype tools
In this paper we describe EasyInterface, an open-source toolkit for rapid development of web-based graphical user interfaces (GUIs). This toolkit addresses the need of researchers to make their research prototype tools available to the community, and to integrate them in a common environment, rapidly and without being familiar with web programming or GUI libraries in general. If a tool can be executed from a command line and its output goes to the standard output, then in a few minutes one can make it accessible via a web interface or within Eclipse. Moreover, the toolkit defines a text-based language that can be used to get more sophisticated GUIs, e.g., syntax highlighting, dialog boxes, user interactions, etc. EasyInterface was originally developed for building a common frontend for tools developed in the Envisage project.
1
0
0
0
0
0
The Cramér-Rao inequality on singular statistical models I
We introduce the notion of the essential tangent bundle of a parametrized measure model and the notion of reduced Fisher metric on a (possibly singular) 2-integrable measure model. Using these notions and a new characterization of $k$-integrable parametrized measure models, we extend the Cramér-Rao inequality to $2$-integrable (possibly singular) statistical models for general $\varphi$-estimations, where $\varphi$ is a $V$-valued feature function and $V$ is a topological vector space. Thus we derive an intrinsic Cramér-Rao inequality in the most general terms of parametric statistics.
0
0
1
1
0
0
Autotune: A Derivative-free Optimization Framework for Hyperparameter Tuning
Machine learning applications often require hyperparameter tuning. The hyperparameters usually drive both the efficiency of the model training process and the resulting model quality. For hyperparameter tuning, machine learning algorithms are complex black-boxes. This creates a class of challenging optimization problems, whose objective functions tend to be nonsmooth, discontinuous, unpredictably varying in computational expense, and include continuous, categorical, and/or integer variables. Further, function evaluations can fail for a variety of reasons including numerical difficulties or hardware failures. Additionally, not all hyperparameter value combinations are compatible, which creates so called hidden constraints. Robust and efficient optimization algorithms are needed for hyperparameter tuning. In this paper we present an automated parallel derivative-free optimization framework called \textbf{Autotune}, which combines a number of specialized sampling and search methods that are very effective in tuning machine learning models despite these challenges. Autotune provides significantly improved models over using default hyperparameter settings with minimal user interaction on real-world applications. Given the inherent expense of training numerous candidate models, we demonstrate the effectiveness of Autotune's search methods and the efficient distributed and parallel paradigms for training and tuning models, and also discuss the resource trade-offs associated with the ability to both distribute the training process and parallelize the tuning process.
0
0
0
1
0
0
Reduced fusion systems over $p$-groups with abelian subgroup of index $p$: III
We finish the classification, begun in two earlier papers, of all simple fusion systems over finite nonabelian $p$-groups with an abelian subgroup of index $p$. In particular, this gives many new examples illustrating the enormous variety of exotic examples that can arise. In addition, we classify all simple fusion systems over infinite nonabelian discrete $p$-toral groups with an abelian subgroup of index $p$. In all of these cases (finite or infinite), we reduce the problem to one of listing all $\mathbb{F}_pG$-modules (for $G$ finite) satisfying certain conditions: a problem which was solved in the earlier paper by Craven, Oliver, and Semeraro using the classification of finite simple groups.
0
0
1
0
0
0
Deep learning for studies of galaxy morphology
Establishing accurate morphological measurements of galaxies in a reasonable amount of time for future big-data surveys such as EUCLID, the Large Synoptic Survey Telescope or the Wide Field Infrared Survey Telescope is a challenge. Because of its high level of abstraction with little human intervention, deep learning appears to be a promising approach. Deep learning is a rapidly growing discipline that models high-level patterns in data as complex multilayered networks. In this work we test the ability of deep convolutional networks to provide parametric properties of Hubble Space Telescope-like galaxies (half-light radii, Sersic indices, total flux, etc.). We simulate a set of galaxies, including the point spread function and realistic noise from the CANDELS survey, and try to recover the main galaxy parameters using deep learning. We compare the results with the ones obtained with the commonly used profile-fitting-based software GALFIT, showing that with our method we obtain results at least as good as the ones obtained with GALFIT but, once trained, about five hundred times faster.
0
1
0
0
0
0
A Diversified Multi-Start Algorithm for Unconstrained Binary Quadratic Problems Leveraging the Graphics Processor Unit
Multi-start algorithms are a common and effective tool for metaheuristic searches. In this paper we amplify multi-start capabilities by employing the parallel processing power of the graphics processor unit (GPU) to quickly generate a diverse starting set of solutions for the Unconstrained Binary Quadratic Optimization Problem, which are evaluated and used to implement screening methods to select solutions for further optimization. This method is implemented as an initial high-quality solution generation phase prior to a secondary steepest ascent search, and a comparison of results to best known approaches on benchmark unconstrained binary quadratic problems demonstrates that GPU-enabled diversified multi-start with screening quickly yields very good results.
1
0
0
0
0
0
Low-rank and Sparse NMF for Joint Endmembers' Number Estimation and Blind Unmixing of Hyperspectral Images
Estimation of the number of endmembers existing in a scene constitutes a critical task in the hyperspectral unmixing process. The accuracy of this estimate plays a crucial role in subsequent unsupervised unmixing steps, i.e., the derivation of the spectral signatures of the endmembers (endmembers' extraction) and the estimation of the abundance fractions of the pixels. A common practice amply followed in the literature is to treat endmembers' number estimation and unmixing independently, as two separate tasks, providing the outcome of the former as input to the latter. In this paper, we go beyond this computationally demanding strategy. More precisely, we set forth a multiple constrained optimization framework, which encapsulates endmembers' number estimation and unsupervised unmixing in a single task. This is attained by suitably formulating the problem via a low-rank and sparse nonnegative matrix factorization rationale, where low-rankness is promoted with the use of a sophisticated $\ell_2/\ell_1$ norm penalty term. An alternating proximal algorithm is then proposed for minimizing the emerging cost function. The results obtained by simulated and real data experiments verify the effectiveness of the proposed approach.
1
0
0
1
0
0
On the Necessity of Structured Codes for Communications over MAC with Feedback
The problem of three-user multiple-access channel (MAC) with noiseless feedback is investigated. A new coding strategy is presented. The coding scheme builds upon the natural extension of the Cover-Leung (CL) scheme; and uses quasi-linear codes. A new single-letter achievable rate region is derived. The new achievable region strictly contains the CL region. This is shown through an example. In this example, the coding scheme achieves optimality in terms of transmission rates. It is shown that any optimality achieving scheme for this example must have a specific algebraic structure. Particularly, the codebooks must be closed under binary addition.
1
0
0
0
0
0
Historical Review of Recurrence Plots
In the last two decades recurrence plots (RPs) have been introduced in many different scientific disciplines, and it has turned out how powerful this method is. After the introduction of approaches for quantifying RPs and the study of relationships between RPs and fundamental properties of dynamical systems, this method attracted even more attention. After 20 years of RPs it is time to summarise this development in a historical context.
0
1
0
0
0
0
The Meta Distribution of the SIR for Cellular Networks with Power Control
The meta distribution of the signal-to-interference ratio (SIR) provides fine-grained information about the performance of individual links in a wireless network. This paper focuses on the analysis of the meta distribution of the SIR for both the cellular network uplink and downlink with fractional power control. For the uplink scenario, an approximation of the interfering user point process with a non-homogeneous Poisson point process is used. The moments of the meta distribution for both scenarios are calculated. Some bounds, the analytical expression, the mean local delay, and the beta approximation of the meta distribution are provided. The results give interesting insights into the effect of the power control in both the uplink and downlink. Detailed simulations show that the approximations made in the analysis are well justified.
1
0
0
0
0
0
Temperature induced transition from p-n to n-n electronic behavior in Ni0.07Zn0.93O/Mg0.21Zn0.79O heterojunction
The transport characteristics across the pulsed laser deposited Ni0.07Zn0.93O/Mg0.21Zn0.79O heterojunction exhibit p-n type semiconducting properties at 10 K, while at 100 K the characteristics become similar to those of an n-n junction. This behavior is attributed to the larger electronegativity of Ni compared to Mg at 10 K and to the ionization of impurity states at 100 K. The above behavior is confirmed by performing Hall measurements.
0
1
0
0
0
0
N-body simulations of gravitational redshifts and other relativistic distortions of galaxy clustering
Large redshift surveys of galaxies and clusters are providing the first opportunities to search for distortions in the observed pattern of large-scale structure due to such effects as gravitational redshift. We focus on non-linear scales and apply a quasi-Newtonian approach using N-body simulations to predict the small asymmetries in the cross-correlation function of two different galaxy populations. Following recent work by Bonvin et al., Zhao and Peacock and Kaiser on galaxy clusters, we include effects which enter at the same order as gravitational redshift: the transverse Doppler effect, light-cone effects, relativistic beaming, luminosity distance perturbation and wide-angle effects. We find that all these effects cause asymmetries in the cross-correlation functions. Quantifying these asymmetries, we find that the total effect is dominated by the gravitational redshift and luminosity distance perturbation at small and large scales, respectively. By adding additional subresolution modelling of galaxy structure to the large-scale structure information, we find that the signal is significantly increased, indicating that structure on the smallest scales is important and should be included. We report on comparison of our simulation results with measurements from the SDSS/BOSS galaxy redshift survey in a companion paper.
0
1
0
0
0
0
Few-Shot Learning with Metric-Agnostic Conditional Embeddings
Learning high quality class representations from few examples is a key problem in metric-learning approaches to few-shot learning. To accomplish this, we introduce a novel architecture where class representations are conditioned for each few-shot trial based on a target image. We also deviate from traditional metric-learning approaches by training a network to perform comparisons between classes rather than relying on a static metric comparison. This allows the network to decide what aspects of each class are important for the comparison at hand. We find that this flexible architecture works well in practice, achieving state-of-the-art performance on the Caltech-UCSD birds fine-grained classification task.
0
0
0
1
0
0
Gradient Coding from Cyclic MDS Codes and Expander Graphs
Gradient coding is a technique for straggler mitigation in distributed learning. In this paper we design novel gradient codes using tools from classical coding theory, namely, cyclic MDS codes, which compare favourably with existing solutions, both in the applicable range of parameters and in the complexity of the involved algorithms. Second, we introduce an approximate variant of the gradient coding problem, in which we settle for approximate gradient computation instead of the exact one. This approach enables graceful degradation, i.e., the $\ell_2$ error of the approximate gradient is a decreasing function of the number of stragglers. Our main result is that the normalized adjacency matrix of an expander graph can yield excellent approximate gradient codes, and that this approach allows us to perform significantly less computation compared to exact gradient coding. We experimentally test our approach on Amazon EC2, and show that the generalization error of approximate gradient coding is very close to the full gradient while requiring significantly less computation from the workers.
0
0
0
1
0
0
Commissioning of the China-ADS Injector-I testing facility
The 10 MeV accelerator-driven subcritical system (ADS) Injector-I test stand at the Institute of High Energy Physics (IHEP) is a testing facility dedicated to demonstrating one of the two injector design schemes [Injector Scheme-I, which works at 325 MHz] for the ADS project in China. The injector adopted a four-vane copper structure RFQ with an output energy of 3.2 MeV and a superconducting (SC) section accommodating fourteen $\beta_g=0.12$ single spoke cavities, fourteen SC solenoids and fourteen cold BPMs. The ion source was installed in April 2014, and periods of commissioning are regularly scheduled between installation phases of the rest of the injector. A continuous wave (CW) beam was sent through the injector, and a 10 MeV CW proton beam with an average beam current of around 2 mA was obtained recently. This contribution describes the results achieved so far and the difficulties encountered in CW commissioning.
0
1
0
0
0
0
Impact of the Global Crisis on SME Internal vs. External Financing in China
Changes in the capital structure of SMEs before and after the global financial crisis are studied, emphasizing their financing problems and distinguishing between internal financing and external financing determinants. The empirical research bears upon 158 small and medium-sized firms listed on the Shenzhen and Shanghai Stock Exchanges in China over the period 2004-2014. A regression analysis, along the lines of the Trade-Off Theory, shows that leverage decreases with profitability, non-debt tax shields and liquidity, and increases with firm size and tangibility. A positive relationship is found between firm growth and debt ratio, though not a highly significant one. It is shown that the SMEs with high growth rates are those which will more easily obtain external financing after a financial crisis. It is recognized that the Chinese government should reconsider SME taxation laws.
0
0
0
1
0
0
Dipole force free optical control and cooling of nanofiber trapped atoms
The evanescent field surrounding nano-scale optical waveguides offers an efficient interface between light and mesoscopic ensembles of neutral atoms. However, the thermal motion of trapped atoms, combined with the strong radial gradients of the guided light, leads to a time-modulated coupling between atoms and the light mode, thus giving rise to additional noise and motional dephasing of collective states. Here, we present a dipole force free scheme for coupling of the radial motional states, utilizing the strong intensity gradient of the guided mode and demonstrate all-optical coupling of the cesium hyperfine ground states and motional sideband transitions. We utilize this to prolong the trap lifetime of an atomic ensemble by Raman sideband cooling of the radial motion, which has not been demonstrated in nano-optical structures previously. Our work points towards full and independent control of internal and external atomic degrees of freedom using guided light modes only.
0
1
0
0
0
0
Control refinement for discrete-time descriptor systems: a behavioural approach via simulation relations
The analysis of industrial processes, modelled as descriptor systems, is often computationally hard due to the presence of both algebraic couplings and difference equations of high order. In this paper, we introduce a control refinement notion for these descriptor systems that enables analysis and control design over related reduced-order systems. Utilising the behavioural framework, we extend upon the standard hierarchical control refinement for ordinary systems and allow for algebraic couplings inherent to descriptor systems.
1
0
0
0
0
0
Beam Based RF Voltage Measurements and Longitudinal Beam Tomography at the Fermilab Booster
Increasing proton beam power on neutrino production targets is one of the major goals of the Fermilab long term accelerator programs. In this effort, the Fermilab 8 GeV Booster synchrotron plays a critical role for at least the next two decades. Therefore, understanding the Booster in great detail is important as we continue to improve its performance. For example, it is important to know accurately the available RF power in the Booster by carrying out beam-based measurements in order to specify the needed upgrades to the Booster RF system. Since the Booster magnetic field is changing continuously, measuring/calibrating the RF voltage is not a trivial task. Here, we present a beam-based method for the RF voltage measurements. Data analysis is carried out using computer programs developed in Python and MATLAB. The method presented here is applicable to any RCS which does not have a flat-bottom and flat-top in the acceleration magnetic ramps. We have also carried out longitudinal beam tomography at injection and extraction energies with the data used for RF voltage measurements. Beam-based RF voltage measurements and beam tomography were never done before for the Fermilab Booster. The results from these investigations will be very useful in future intensity upgrades.
0
1
0
0
0
0
A framework for quantitative modeling and analysis of highly (re)configurable systems
This paper presents our approach to the quantitative modeling and analysis of highly (re)configurable systems, such as software product lines. Different combinations of the optional features of such a system give rise to combinatorially many individual system variants. We use a formal modeling language that allows us to model systems with probabilistic behavior, possibly subject to quantitative feature constraints, and able to dynamically install, remove or replace features. More precisely, our models are defined in the probabilistic feature-oriented language QFLAN, a rich domain specific language (DSL) for systems with variability defined in terms of features. QFLAN specifications are automatically encoded in terms of a process algebra whose operational behavior interacts with a store of constraints, and hence allows separating system configuration from system behavior. The resulting probabilistic configurations and behavior converge seamlessly in a semantics based on discrete-time Markov chains, thus enabling quantitative analysis. Our analysis is based on statistical model checking techniques, which allow us to scale to larger models than precise probabilistic analysis techniques. The analyses we can conduct range from the likelihood of specific behavior to the expected average cost, in terms of feature attributes, of specific system variants. Our approach is supported by a novel Eclipse-based tool which includes state-of-the-art DSL utilities for QFLAN based on the Xtext framework as well as analysis plug-ins to seamlessly run statistical model checking analyses. We provide a number of case studies that have driven and validated the development of our framework.
1
0
0
0
0
0
Generating large misalignments in gapped and binary discs
Many protostellar gapped and binary discs show misalignments between their inner and outer discs; in some cases, $\sim70$ degree misalignments have been observed. Here we show that these misalignments can be generated through a "secular precession resonance" between the nodal precession of the inner disc and the precession of the gap-opening (stellar or massive planetary) companion. An evolving protostellar system may naturally cross this resonance during its lifetime due to disc dissipation and/or companion migration. If resonance crossing occurs on the right timescale, of order a few Myrs, characteristic for young protostellar systems, the inner and outer discs can become highly misaligned ($\gtrsim 60$ degrees). When the primary star has a mass of order a solar mass, generating a significant misalignment typically requires the companion to have a mass of $\sim 0.01-0.1$ M$_\odot$ and an orbital separation of tens of AU. The recently observed companion in the cavity of the gapped, highly misaligned system HD 142527 satisfies these requirements, indicating that a previous resonance crossing event misaligned the inner and outer discs. Our scenario for HD 142527's misaligned discs predicts that the companion's orbital plane is aligned with the outer disc's; this prediction should be testable with future observations as the companion's orbit is mapped out. Misalignments observed in several other gapped disc systems could be generated by the same secular resonance mechanism.
0
1
0
0
0
0
Bilinear approach to the supersymmetric Gardner equation
We study a supersymmetric version of the Gardner equation (both focusing and defocusing) using the superbilinear formalism. This equation is new and cannot be obtained from supersymmetric modified Korteweg-de Vries equation with a nonzero boundary condition. We construct supersymmetric solitons and then by passing to the long-wave limit in the focusing case obtain rational nonsingular solutions. We also discuss the supersymmetric version of the defocusing equation and the dynamics of its solutions.
0
1
0
0
0
0
The Effect of Focal Distance, Age, and Brightness on Near-Field Augmented Reality Depth Matching
Many augmented reality (AR) applications operate within near-field reaching distances, and require matching the depth of a virtual object with a real object. The accuracy of this matching was measured in three experiments, which examined the effect of focal distance, age, and brightness, within distances of 33.3 to 50 cm, using a custom-built AR haploscope. Experiment I examined the effect of focal demand, at the levels of collimated (infinite focal distance), consistent with other depth cues, and at the midpoint of reaching distance. Observers were too young to exhibit age-related reductions in accommodative ability. The depth matches of collimated targets were increasingly overestimated with increasing distance, consistent targets were slightly underestimated, and midpoint targets were accurately estimated. Experiment II replicated Experiment I, with older observers. Results were similar to Experiment I. Experiment III replicated Experiment I with dimmer targets, using young observers. Results were again consistent with Experiment I, except that both consistent and midpoint targets were accurately estimated. In all cases, collimated results were explained by a model, where the collimation biases the eyes' vergence angle outwards by a constant amount. Focal demand and brightness affect near-field AR depth matching, while age-related reductions in accommodative ability have no effect.
1
0
0
0
0
0
Small-amplitude steady water waves with critical layers: non-symmetric waves
The problem for two-dimensional steady water waves with vorticity is considered. Using methods of spatial dynamics, we reduce the problem to a finite dimensional Hamiltonian system. As an application, we prove the existence of non-symmetric steady water waves when the number of roots of the dispersion equation is greater than 1.
0
0
1
0
0
0
Emergence of Leadership in Communication
We study a neuro-inspired model that mimics a discussion (or information dissemination) process in a network of agents. During their interaction, agents redistribute activity and network weights, resulting in emergence of leader(s). The model is able to reproduce the basic scenarios of leadership known in nature and society: laissez-faire (irregular activity, weak leadership, sizable inter-follower interaction, autonomous sub-leaders); participative or democratic (strong leadership, but with feedback from followers); and autocratic (no feedback, one-way influence). Several pertinent aspects of these scenarios are found as well---e.g., hidden leadership (a hidden clique of agents driving the official autocratic leader), and successive leadership (two leaders influence followers by turns). We study how these scenarios emerge from inter-agent dynamics and how they depend on behavior rules of agents---in particular, on their inertia against state changes.
0
1
0
0
0
0
Throughput-Improving Control of Highways Facing Stochastic Perturbations
In this article, we study the problem of controlling a highway segment facing stochastic perturbations, such as recurrent incidents and moving bottlenecks. To model traffic flow under perturbations, we use the cell-transmission model with Markovian capacities. The control inputs are: (i) the inflows that are sent to various on-ramps to the highway (for managing traffic demand), and (ii) the priority levels assigned to the on-ramp traffic relative to the mainline traffic (for allocating highway capacity). The objective is to maximize the throughput while ensuring that on-ramp queues remain bounded in the long-run. We develop a computational approach to solving this stability-constrained, throughput-maximization problem. Firstly, we use the classical drift condition in stability analysis of Markov processes to derive a sufficient condition for boundedness of on-ramp queues. Secondly, we show that our control design problem can be formulated as a mixed integer program with linear or bilinear constraints, depending on the complexity of Lyapunov function involved in the stability condition. Finally, for specific types of capacity perturbations, we derive intuitive criteria for managing demand and/or selecting priority levels. These criteria suggest that inflows and priority levels should be determined simultaneously such that traffic queues are placed at locations that discharge queues fast. We illustrate the performance benefits of these criteria through a computational study of a segment on Interstate 210 in California, USA.
1
0
0
0
0
0
Deep Learning for Accelerated Ultrasound Imaging
In portable, 3-D, or ultra-fast ultrasound (US) imaging systems, there is an increasing demand to reconstruct high quality images from a limited amount of data. However, the existing solutions require either hardware changes or computationally expensive algorithms. To overcome these limitations, here we propose a novel deep learning approach that interpolates the missing RF data by utilizing the sparsity of the RF data in the Fourier domain. Extensive experimental results using sub-sampled RF data from a real US system confirm that the proposed method can effectively reduce the data rate without sacrificing image quality.
1
0
0
1
0
0
Adaptive Estimation for Nonlinear Systems using Reproducing Kernel Hilbert Spaces
This paper extends a conventional, general framework for online adaptive estimation problems for systems governed by unknown nonlinear ordinary differential equations. The central feature of the theory introduced in this paper is that it represents the unknown function as a member of a reproducing kernel Hilbert space (RKHS) and defines a distributed parameter system (DPS) that governs state estimates and estimates of the unknown function. This paper 1) derives sufficient conditions for the existence and stability of the infinite dimensional online estimation problem, 2) derives the existence and stability of finite dimensional approximations of the infinite dimensional estimates, and 3) determines sufficient conditions for the convergence of the finite dimensional approximations to the infinite dimensional online estimates. A new condition for persistency of excitation in an RKHS in terms of its evaluation functionals is introduced in the paper that enables proof of convergence of the finite dimensional approximations of the unknown function in the RKHS. This paper studies two particular choices of the RKHS: those generated by exponential functions and those generated by multiscale kernels defined from a multiresolution analysis.
1
0
0
0
0
0
Weak saturation and weak amalgamation property
The two model-theoretic concepts of weak saturation and weak amalgamation property are studied in the context of accessible categories. We relate these two concepts providing sufficient conditions for existence and uniqueness of weakly saturated objects of an accessible category K. We discuss the implications of this fact in classical model theory.
0
0
1
0
0
0
Perturbation problems in homogenization of Hamilton-Jacobi equations
This paper is concerned with the behavior of the ergodic constant associated with a convex and superlinear Hamilton-Jacobi equation in a periodic environment which is perturbed either by a medium with increasing period or by a random Bernoulli perturbation with small parameter. We find a first order Taylor expansion for the ergodic constant which depends on the dimension d. When d = 1 the first order term is nontrivial, while for all d $\ge$ 2 it is always 0. Although such questions have been looked at in the context of linear uniformly elliptic homogenization, our results are the first of this kind in nonlinear settings. Our arguments, which rely on viscosity solutions and the weak KAM theory, also raise several new and challenging questions.
0
0
1
0
0
0
Chimera states: Effects of different coupling topologies
Collective behavior among coupled dynamical units can emerge in various forms as a result of different coupling topologies as well as different types of coupling functions. Chimera states have recently received ample attention as a fascinating manifestation of collective behavior, in particular describing a symmetry breaking spatiotemporal pattern where synchronized and desynchronized states coexist in a network of coupled oscillators. In this perspective, we review the emergence of different chimera states, focusing on the effects of different coupling topologies that describe the interaction network connecting the oscillators. We cover chimera states that emerge in local, nonlocal and global coupling topologies, as well as in modular, temporal and multilayer networks. We also provide an outline of challenges and directions for future research.
0
1
0
0
0
0
Human Activity Recognition using Recurrent Neural Networks
Human activity recognition using smart home sensors is one of the bases of ubiquitous computing in smart environments and a topic undergoing intense research in the field of ambient assisted living. The increasingly large amount of data sets calls for machine learning methods. In this paper, we introduce a deep learning model that learns to classify human activities without using any prior knowledge. For this purpose, a Long Short Term Memory (LSTM) Recurrent Neural Network was applied to three real world smart home datasets. The results of these experiments show that the proposed approach outperforms the existing ones in terms of accuracy and performance.
0
0
0
1
0
0
Eigenvalues of elliptic operators with density
We consider eigenvalue problems for elliptic operators of arbitrary order $2m$ subject to Neumann boundary conditions on bounded domains of the Euclidean $N$-dimensional space. We study the dependence of the eigenvalues upon variations of mass density and in particular we discuss the existence and characterization of upper and lower bounds under both the condition that the total mass is fixed and the condition that the $L^{\frac{N}{2m}}$-norm of the density is fixed. We highlight that the interplay between the order of the operator and the space dimension plays a crucial role in the existence of eigenvalue bounds.
0
0
1
0
0
0
A projection pursuit framework for testing general high-dimensional hypothesis
This article develops a framework for testing general hypotheses in high-dimensional models where the number of variables may far exceed the number of observations. Existing literature has considered only a handful of hypotheses, such as testing individual coordinates of the model parameter. However, the problem of testing general and complex hypotheses remains widely open. We propose a new inference method developed around the hypothesis adaptive projection pursuit framework, which solves the testing problems in the most general case. The proposed inference is centered around a new class of estimators defined as the $l_1$ projection of the initial guess of the unknown onto the space defined by the null. This projection automatically takes into account the structure of the null hypothesis and allows us to study formal inference for a number of long-standing problems. For example, we can directly conduct inference on the sparsity level of the model parameters and the minimum signal strength. This is especially significant given the fact that the former is a fundamental condition underlying most of the theoretical development in high-dimensional statistics, while the latter is a key condition used to establish variable selection properties. Moreover, the proposed method is asymptotically exact and has satisfactory power properties for testing very general functionals of the high-dimensional parameters. The simulation studies lend further support to our theoretical claims and additionally show excellent finite-sample size and power properties of the proposed test.
0
0
1
1
0
0
On the impact of quantum computing technology on future developments in high-performance scientific computing
Quantum computing technologies have become a hot topic in academia and industry, receiving much attention and financial support from all sides. Building a quantum computer that can be used practically is in itself an outstanding challenge that has become the 'new race to the moon'. Next to researchers and vendors of future computing technologies, national authorities are showing strong interest in maturing this technology due to its known potential to break many of today's encryption techniques, which would have a significant impact on our society. It is, however, quite likely that quantum computing will have a beneficial impact on many computational disciplines. In this article we describe our vision of future developments in scientific computing that would be enabled by the advent of software-programmable quantum computers. We thereby assume that quantum computers will form part of a hybrid accelerated computing platform like GPUs and co-processor cards do today. In particular, we address the potential of quantum algorithms to bring major breakthroughs in applied mathematics and its applications. Finally, we give several examples that demonstrate the possible impact of quantum-accelerated scientific computing on society.
1
0
0
0
0
0
Local Monotonic Attention Mechanism for End-to-End Speech and Language Processing
Recently, encoder-decoder neural networks have shown impressive performance on many sequence-related tasks. The architecture commonly uses an attentional mechanism which allows the model to learn alignments between the source and the target sequence. Most attentional mechanisms used today are based on a global attention property, which requires computing a weighted summarization of the whole input sequence generated by the encoder states. However, this is computationally expensive and often produces misalignment on longer input sequences. Furthermore, it does not fit the monotonic, left-to-right nature of several tasks, such as automatic speech recognition (ASR), grapheme-to-phoneme conversion (G2P), etc. In this paper, we propose a novel attention mechanism that has local and monotonic properties. Various ways to control those properties are also explored. Experimental results on ASR, G2P and machine translation between two languages with similar sentence structures demonstrate that the proposed encoder-decoder model with local monotonic attention achieves significant performance improvements and reduces the computational complexity in comparison with the standard global attention architecture.
1
0
0
0
0
0
Unifying Value Iteration, Advantage Learning, and Dynamic Policy Programming
Approximate dynamic programming algorithms, such as approximate value iteration, have been successfully applied to many complex reinforcement learning tasks, and a better approximate dynamic programming algorithm is expected to further extend the applicability of reinforcement learning to various tasks. In this paper we propose a new, robust dynamic programming algorithm that unifies value iteration, advantage learning, and dynamic policy programming. We call it generalized value iteration (GVI) and its approximated version approximate GVI (AGVI). We show AGVI's performance guarantee, which includes the performance guarantees of existing algorithms as special cases. We discuss theoretical weaknesses of existing algorithms and explain the advantages of AGVI. Numerical experiments in a simple environment support the theoretical arguments and suggest that AGVI is a promising alternative to previous algorithms.
1
0
0
1
0
0
Diffusion Convolutional Recurrent Neural Network: Data-Driven Traffic Forecasting
Spatiotemporal forecasting has various applications in the neuroscience, climate and transportation domains. Traffic forecasting is one canonical example of such a learning task. The task is challenging due to (1) complex spatial dependency on road networks, (2) non-linear temporal dynamics with changing road conditions and (3) the inherent difficulty of long-term forecasting. To address these challenges, we propose to model the traffic flow as a diffusion process on a directed graph and introduce the Diffusion Convolutional Recurrent Neural Network (DCRNN), a deep learning framework for traffic forecasting that incorporates both spatial and temporal dependency in the traffic flow. Specifically, DCRNN captures the spatial dependency using bidirectional random walks on the graph, and the temporal dependency using the encoder-decoder architecture with scheduled sampling. We evaluate the framework on two real-world large scale road network traffic datasets and observe consistent improvement of 12% - 15% over state-of-the-art baselines.
1
0
0
1
0
0
Machine Learning for Networking: Workflow, Advances and Opportunities
Recently, machine learning has been used in every possible field to leverage its amazing power. For a long time, networking and distributed computing systems have been the key infrastructure providing efficient computational resources for machine learning. Networking itself can also benefit from this promising technology. This article focuses on the application of Machine Learning techniques for Networking (MLN), which can not only help solve intractable old network questions but also stimulate new network applications. In this article, we summarize the basic workflow to explain how to apply machine learning technology in the networking domain. Then we provide a selective survey of the latest representative advances with explanations of their design principles and benefits. These advances are divided into several network design objectives, and detailed information on how they perform in each step of the MLN workflow is presented. Finally, we shed light on new opportunities in networking design and community building of this new inter-discipline. Our goal is to provide a broad research guideline on networking with machine learning to help and motivate researchers to develop innovative algorithms, standards and frameworks.
1
0
0
0
0
0
The WAGGS project - I. The WiFeS Atlas of Galactic Globular cluster Spectra
We present the WiFeS Atlas of Galactic Globular cluster Spectra, a library of integrated spectra of Milky Way and Local Group globular clusters. We used the WiFeS integral field spectrograph on the Australian National University 2.3 m telescope to observe the central regions of 64 Milky Way globular clusters and 22 globular clusters hosted by the Milky Way's low mass satellite galaxies. The spectra have wider wavelength coverage (3300 {\AA} to 9050 {\AA}) and higher spectral resolution (R = 6800) than existing spectral libraries of Milky Way globular clusters. By including Large and Small Magellanic Cloud star clusters, we extend the coverage of parameter space of existing libraries towards young and intermediate ages. While testing stellar population synthesis models and analysis techniques is the main aim of this library, the observations may also further our understanding of the stellar populations of Local Group globular clusters and make possible the direct comparison of extragalactic globular cluster integrated light observations with well understood globular clusters in the Milky Way. The integrated spectra are publicly available via the project website.
0
1
0
0
0
0
Going Viral: Stability of Consensus-Driven Adoptive Spread
The spread of new products in a networked population is often modeled as an epidemic. However, in the case of "complex" contagion, these models are insufficient to properly model adoption behavior. In this paper, we investigate a model of complex contagion which allows a coevolutionary interplay between adoption, modeled as an SIS epidemic spreading process, and social reinforcement effects, modeled as consensus opinion dynamics. Asymptotic stability analysis of the all-adopt as well as the none-adopt equilibria of the combined opinion-adoption model is provided through the use of Lyapunov arguments. In doing so, sufficient conditions are provided which determine the stability of the "flop" state, where no one adopts the product and everyone's opinion of the product is least favorable, and the "hit" state, where everyone adopts and their opinions are most favorable. These conditions are shown to extend to the bounded confidence opinion dynamic under a stronger assumption on the model parameters. To conclude, numerical simulations demonstrate behavior of the model which reflect findings from the sociology literature on adoption behavior.
1
0
0
0
0
0
What's In A Patch, I: Tensors, Differential Geometry and Statistical Shading Analysis
We develop a linear algebraic framework for the shape-from-shading problem, because tensors arise when scalar (e.g. image) and vector (e.g. surface normal) fields are differentiated multiple times. The work is in two parts. In this first part we investigate when image derivatives exhibit invariance to changing illumination by calculating the statistics of image derivatives under general distributions on the light source. We computationally validate the hypothesis that image orientations (derivatives) provide increased invariance to illumination by showing (for a Lambertian model) that a shape-from-shading algorithm matching gradients instead of intensities provides more accurate reconstructions when illumination is incorrectly estimated under a flatness prior.
1
0
0
0
0
0
Counterexample to Gronwall's Conjecture
We present a projectively invariant description of planar linear 3-webs and construct a counterexample to Gronwall's conjecture.
0
0
1
0
0
0
Urban Swarms: A new approach for autonomous waste management
Modern cities are growing ecosystems that face new challenges due to the increasing population demands. One of the many problems they face nowadays is waste management, which has become a pressing issue requiring new solutions. Swarm robotics systems have been attracting an increasing amount of attention in the past years and they are expected to become one of the main driving factors for innovation in the field of robotics. The research presented in this paper explores the feasibility of a swarm robotics system in an urban environment. By using bio-inspired foraging methods such as multi-place foraging and stigmergy-based navigation, a swarm of robots is able to improve the efficiency and autonomy of the urban waste management system in a realistic scenario. To achieve this, a diverse set of simulation experiments was conducted using real-world GIS data and implementing different garbage collection scenarios driven by robot swarms. Results presented in this research show that the proposed system outperforms current approaches. Moreover, results not only show the efficiency of our solution, but also give insights about how to design and customize these systems.
1
0
0
0
0
0
An Algorithm of Parking Planning for Smart Parking System
The number of vehicles in the world is large and increasing rapidly. To alleviate the parking problems this causes, smart parking systems have been developed. Parking planning is one of their most important components, and an effective parking planning strategy makes better use of parking resources possible. In this paper, we present a feasible method for parking planning. We transform the parking planning problem into a kind of linear assignment problem, taking vehicles as jobs, parking spaces as agents, and the distances between vehicles and parking spaces as the costs of agents doing jobs. We then design an algorithm for this particular assignment problem and solve the parking planning problem. The proposed method can give timely and efficient guidance information to vehicles in a real-time smart parking system. Finally, we show the effectiveness of the method with experiments over data simulating real-world parking planning situations.
1
0
0
0
0
0
Classical Control, Quantum Circuits and Linear Logic in Enriched Category Theory
We describe categorical models of a circuit-based (quantum) functional programming language. We show that enriched categories play a crucial role. Following earlier work on QWire by Paykin et al., we consider both a simple first-order linear language for circuits, and a more powerful host language, such that the circuit language is embedded inside the host language. Our categorical semantics for the host language is standard, and involves cartesian closed categories and monads. We interpret the circuit language not in an ordinary category, but in a category that is enriched in the host category. We show that this structure is also related to linear/non-linear models. As an extended example, we recall an earlier result that the category of W*-algebras is dcpo-enriched, and we use this model to extend the circuit language with some recursive types.
1
0
1
0
0
0
Unifying Map and Landmark Based Representations for Visual Navigation
This work presents a formulation for visual navigation that unifies map based spatial reasoning and path planning with landmark based robust plan execution in noisy environments. Our proposed formulation is learned from data and is thus able to leverage statistical regularities of the world. This allows it to efficiently navigate novel environments given only a sparse set of registered images as input for building representations of space. Our formulation is based on three key ideas: a learned path planner that outputs path plans to reach the goal, a feature synthesis engine that predicts features for locations along the planned path, and a learned goal-driven closed loop controller that can follow plans given these synthesized features. We test our approach for goal-driven navigation in simulated real world environments and report performance gains over competitive baseline approaches.
1
0
0
0
0
0
Accounting for Uncertainty About Past Values In Probabilistic Projections of the Total Fertility Rate for All Countries
Since the 1940s, population projections have in most cases been produced using the deterministic cohort component method. However, in 2015, for the first time, in a major advance, the United Nations issued official probabilistic population projections for all countries based on Bayesian hierarchical models for total fertility and life expectancy. The estimates of these models and the resulting projections are conditional on the UN's official estimates of past values. However, these past values are themselves uncertain, particularly for the majority of the world's countries that do not have longstanding high-quality vital registration systems, when they rely on surveys and censuses with their own biases and measurement errors. This paper is a first attempt to remedy this for total fertility rates, by extending the UN model for the future to take account of uncertainty about past values. This is done by adding an additional level to the hierarchical model to represent the multiple data sources, in each case estimating their bias and measurement error variance. We assess the method by out-of-sample predictive validation. While the prediction intervals produced by the current method have somewhat less than nominal coverage, we find that our proposed method achieves close to nominal coverage. The prediction intervals become wider for countries for which the estimates of past total fertility rates rely heavily on surveys rather than on vital registration data.
0
0
0
1
0
0
Periodic solutions of a perturbed Kepler problem in the plane: from existence to stability
The existence of elliptic periodic solutions of a perturbed Kepler problem is proved. The equations are in the plane and the perturbation depends periodically on time. The proof is based on a local description of the symplectic group in two degrees of freedom.
0
0
1
0
0
0
Removal of Narrowband Interference (PLI in ECG Signal) Using Ramanujan Periodic Transform (RPT)
Suppression of interference from narrowband frequency signals plays a vital role in many signal processing and communication applications. A transform based method for suppression of narrowband interference in a biomedical signal is proposed. As a specific example, the electrocardiogram (ECG), one of the most widely used biomedical signals, is considered for the analysis. The ECG signal is often contaminated with baseline wander noise, powerline interference (PLI) and artifacts (bioelectric signals), which complicates the processing of the raw ECG signal. This work proposes an approach using the Ramanujan periodic transform (RPT) for reducing PLI and is tested on subject data from the MIT-BIH Arrhythmia database. The sum ($E$) of the Euclidean error per block ($e_i$) is used as a measure to quantify the suppression capability of the RPT and notch filter based methods. The transformation is performed for different lengths ($N$), namely $36$, $72$, $108$, $144$, $180$. Every doubling of $N$ results in a $50{\%}$ reduction in error ($E$).
0
0
1
1
0
0
Global regularity and fast small scale formation for Euler patch equation in a disk
It is well known that the Euler vortex patch in $\mathbb{R}^{2}$ will remain regular if it is regular enough initially. In bounded domains, the regularity theory for patch solutions is less complete. We study here the Euler vortex patch in a disk. We prove global in time regularity by providing the upper bound of the growth of curvature of the patch boundary. For a special symmetric scenario, we construct an example of double exponential curvature growth, showing that such upper bound is qualitatively sharp.
0
0
1
0
0
0
Effective interaction in a non-Fermi liquid conductor and spin correlations in under-doped cuprates
The effective interaction between the itinerant spin degrees of freedom in the paramagnetic phases of hole doped quantum Heisenberg antiferromagnets is investigated theoretically, based on the single-band t-J model on 1D lattice, at zero temperature. The effective spin-spin interaction for this model in the strong correlation limit, is studied in terms of the generalized spin stiffness constant as a function of doping concentration. The plot of this generalized spin stiffness constant against doping shows a very high value of stiffness in the vicinity of zero doping and a very sharp fall with increase in doping concentration, signifying the rapid decay of original coupling of semi-localized spins in the system. Quite interestingly, this plot also shows a maximum occurring at a finite value of doping, which strongly suggests the tendency of the itinerant spins to couple again in the unconventional paramagnetic phase. As the doping is further increased, this new coupling is also suppressed and the spin response becomes analogous to almost Pauli-like. The last two predictions of ours are quite novel and may be directly tested by independent experiments and computational techniques in future. Our results in general receive good support from other theoretical works and experimental results extracted from the chains of YBa$_2$Cu$_3$O$_{6+x}$.
0
1
0
0
0
0
Planning with Verbal Communication for Human-Robot Collaboration
Human collaborators effectively coordinate their actions through both verbal and non-verbal communication. We believe that the same should hold for human-robot teams. We propose a formalism that enables a robot to decide optimally between doing a task and issuing an utterance. We focus on two types of utterances: verbal commands, where the robot expresses how it wants its human teammate to behave, and state-conveying actions, where the robot explains why it is behaving this way. Human subject experiments show that enabling the robot to issue verbal commands is the most effective form of communicating objectives, while retaining user trust in the robot. Communicating why-information should be done judiciously, since many participants questioned the truthfulness of the robot's statements.
1
0
0
0
0
0
Magnon Condensation and Spin Superfluidity
We consider the phenomenon of Bose-Einstein condensation of quasi-equilibrium magnons which leads to a spin superfluidity, the coherent quantum transfer of magnetization in magnetic materials. These phenomena are beyond the classical Landau-Lifshitz-Gilbert paradigm. The critical conditions for excited magnon density for ferro- and antiferromagnets, bulk and thin films are estimated and discussed. The BEC should occur in the antiferromagnetic hematite at much lower excited magnon density compared to the ferromagnetic YIG.
0
1
0
0
0
0
Electroweak Vacuum Metastability and Low-scale Inflation
We study the stability of the electroweak vacuum in low-scale inflation models whose Hubble parameter is much smaller than the instability scale of the Higgs potential. In general, couplings between the inflaton and Higgs are present, and hence we study effects of these couplings during and after inflation. We derive constraints on the couplings between the inflaton and Higgs by requiring that they do not lead to catastrophic electroweak vacuum decay, in particular, via resonant production of the Higgs particles.
0
1
0
0
0
0
Spectral selectivity in capillary dye lasers
We explore the spectral properties of a capillary dye laser in the highly multimode regime. Our experiments indicate that the spectral behavior of the laser does not conform with a simple Fabry-Perot analysis; rather, it is strongly dictated by a Vernier resonant mechanism involving multiple modes, which propagate with different group velocities. The laser operates over a very broad spectral range and the Vernier effect gives rise to a free spectral range which is orders of magnitude larger than that expected from a simple Fabry-Perot mechanism. The presented theoretical calculations confirm the experimental results. Propagating modes of the capillary fiber are calculated using the finite element method (FEM) and it is shown that the optical pathlengths resulting from simultaneous beatings of these modes are in close agreement with the optical pathlengths directly extracted from the Fourier Transform of the experimentally measured laser emission spectra.
0
1
0
0
0
0
The Mechanism of Electrolyte Gating on High-Tc Cuprates: The Role of Oxygen Migration and Electrostatics
Electrolyte gating is widely used to induce large carrier density modulation on solid surfaces to explore various properties. Most past works have attributed the charge modulation to an electrostatic field effect. However, some recent reports have argued that the electrolyte gating effect in VO2, TiO2 and SrTiO3 originated from field-induced oxygen vacancy formation. This gives rise to a controversy about the gating mechanism, and it is therefore vital to reveal the relationship between the role of electrolyte gating and the intrinsic properties of materials. Here, we report entirely different mechanisms of electrolyte gating on two high-Tc cuprates, NdBa2Cu3O7-{\delta} (NBCO) and Pr2-xCexCuO4 (PCCO), with different crystal structures. We show that field-induced oxygen vacancy formation in the CuO chains of NBCO plays the dominant role, while it is mainly an electrostatic field effect in the case of PCCO. The possible reason is that NBCO has mobile oxygen in its CuO chains while PCCO does not. Our study helps clarify the controversy over the mechanism of electrolyte gating, leading to a better understanding of the role of oxygen electromigration, which is very material specific.
0
1
0
0
0
0
Invariant theory of a special group action on irreducible polynomials over finite fields
In the past few years, an action of $\mathrm{PGL}_2(\mathbb F_q)$ on the set of irreducible polynomials in $\mathbb F_q[x]$ has been introduced and many questions have been discussed, such as the characterization and number of invariant elements. In this paper, we analyze some recent works on this action and provide full generalizations of them, yielding final theoretical results on the characterization and number of invariant elements.
0
0
1
0
0
0
TIP: Typifying the Interpretability of Procedures
We provide a novel notion of what it means to be interpretable, looking past the usual association with human understanding. Our key insight is that interpretability is not an absolute concept and so we define it relative to a target model, which may or may not be a human. We define a framework that allows for comparing interpretable procedures by linking them to important practical aspects such as accuracy and robustness. We characterize many of the current state-of-the-art interpretable methods in our framework portraying its general applicability. Finally, principled interpretable strategies are proposed and empirically evaluated on synthetic data, as well as on the largest public olfaction dataset that was made recently available \cite{olfs}. We also experiment on MNIST with a simple target model and different oracle models of varying complexity. This leads to the insight that the improvement in the target model is not only a function of the oracle model's performance, but also its relative complexity with respect to the target model. Further experiments on CIFAR-10, a real manufacturing dataset and FICO dataset showcase the benefit of our methods over Knowledge Distillation when the target models are simple and the complex model is a neural network.
1
0
0
1
0
0
Differences in 1D electron plasma wake field acceleration in MeV versus GeV and linear versus blowout regimes
In some laboratory and most astrophysical situations, plasma wake-field acceleration of electrons is one dimensional, i.e. variation transverse to the beam's motion can be ignored. Thus, one-dimensional (1D), particle-in-cell (PIC), fully electromagnetic simulations of electron plasma wake field acceleration are conducted in order to study the differences in electron plasma wake field acceleration in MeV versus GeV and linear versus blowout regimes. First, we show that caution is needed when using fluid simulations, as PIC simulations prove that the approximation that an electron bunch does not evolve in time for a few hundred plasma periods only applies when the bunch is sufficiently relativistic. This conclusion holds irrespective of the plasma temperature. We find that in the linear regime at GeV energies, the accelerating electric field generated by the plasma wake is similar to that in the linear regime at MeV energies. However, because the GeV-energy driving bunch stays intact for a much longer time, the final acceleration energies are much larger in the GeV case. In the GeV energy range and blowout regime, the wake's accelerating electric field is much larger in amplitude compared to the linear case, and the geometrical size of the plasma wake is also much larger. Thus, correct positioning of the trailing bunch is needed to achieve efficient acceleration. For the considered case, there should optimally be a distance of approximately $(90-100) c/\omega_{pe}$ between the trailing and driving electron bunches in the GeV blowout regime.
0
1
0
0
0
0
Estimating network memberships by simplex vertex hunting
Consider an undirected mixed membership network with $n$ nodes and $K$ communities. For each node $1 \leq i \leq n$, we model the membership by $\pi_{i} = (\pi_{i}(1), \pi_{i}(2), \ldots$, $\pi_{i}(K))'$, where $\pi_{i}(k)$ is the probability that node $i$ belongs to community $k$, $1 \leq k \leq K$. We call node $i$ "pure" if $\pi_i$ is degenerate and "mixed" otherwise. The primary interest is to estimate $\pi_i$, $1 \leq i \leq n$. We model the adjacency matrix $A$ with a Degree Corrected Mixed Membership (DCMM) model. Let $\hat{\xi}_1, \hat{\xi}_2, \ldots, \hat{\xi}_K$ be the first $K$ eigenvectors of $A$. We define a matrix $\hat{R} \in \mathbb{R}^{n, K-1}$ by $\hat{R}(i,k) = \hat{\xi}_{k+1}(i)/\hat{\xi}_1(i)$, $1 \leq k \leq K-1$, $1 \leq i \leq n$. The matrix $\hat{R}$ can be viewed as a distorted version of its non-stochastic counterpart $R \in \mathbb{R}^{n, K-1}$, which is unknown but contains all the information we need for the memberships. We reveal an interesting insight: there is a simplex ${\cal S}$ in $\mathbb{R}^{K-1}$ such that row $i$ of $R$ corresponds to a vertex of ${\cal S}$ if node $i$ is pure, and corresponds to an interior point of ${\cal S}$ otherwise. Vertex Hunting (i.e., estimating the vertices of ${\cal S}$) is thus the key to our problem. The matrix $\hat{R}$ is a row-wise normalization of the matrix of eigenvectors $\hat{\Xi}=[\hat{\xi}_1,\ldots,\hat{\xi}_K]$, first proposed by Jin (2015). Alternatively, we may normalize $\hat{\Xi}$ by the row-wise $\ell^q$-norms (e.g., Supplement of Jin (2015)), but this does not give rise to a simplex and so is less convenient. We propose a new approach, $\textit{Mixed-SCORE}$, to estimating the memberships, at the heart of which is an easy-to-use Vertex Hunting algorithm. The approach is successfully applied to $4$ network data sets. We also derive the rate of convergence for Mixed-SCORE.
0
0
0
1
0
0
Non-singular Green's functions for the unbounded Poisson equation in one, two and three dimensions
In this paper, we derive the non-singular Green's functions for the unbounded Poisson equation in two and three dimensions using a spectral approach to regularize the homogeneous equation. The resulting Green's functions are relevant to applications which are restricted to a minimum resolved length scale (e.g. a mesh size $h$) and thus cannot handle the singular Green's function of the continuous Poisson equation. We furthermore derive the gradient vector of the regularized Green's function, as this is useful in applications where the Poisson equation represents potential functions of a vector field.
0
1
1
0
0
0
Evolutionary phases of gas-rich galaxies in a galaxy cluster at z=1.46
We report a survey of molecular gas in galaxies in the XMMXCS J2215.9-1738 cluster at $z=1.46$. We have detected emission lines from 17 galaxies within a radius of $R_{200}$ from the cluster center, in Band 3 data of the Atacama Large Millimeter/submillimeter Array (ALMA) with a coverage of 93 -- 95 GHz in frequency and 2.33 arcmin$^2$ in area. The lines are all identified as CO $J$=2--1 emission lines from cluster members at $z\sim1.46$ by their redshifts and the colors of their optical and near-infrared (NIR) counterparts. The line luminosities reach down to $L'_{\rm CO(2-1)}=4.5\times10^{9}$ K km s$^{-1}$ pc$^2$. The spatial distribution of galaxies with a detection of CO(2--1) suggests that they disappear from the very center of the cluster. The phase-space diagram showing relative velocity versus cluster-centric distance indicates that the gas-rich galaxies have entered the cluster more recently than the gas-poor star-forming galaxies and passive galaxies located in the virialized region of this cluster. The results imply that the galaxies have experienced ram-pressure stripping and/or strangulation during the course of infall towards the cluster center, and that the molecular gas in the galaxies at the cluster center is then depleted by star formation.
0
1
0
0
0
0
An iterative ensemble Kalman filter in presence of additive model error
The iterative ensemble Kalman filter (IEnKF) in a deterministic framework was introduced in Sakov et al. (2012) to extend the ensemble Kalman filter (EnKF) and improve its performance in mildly to strongly nonlinear cases. However, the IEnKF assumes that the model is perfect. This assumption simplifies the update of the system at a time different from the observation time, which makes it natural to apply the IEnKF for smoothing. In this study, we generalise the IEnKF to the case of an imperfect model with additive model error. The new method, called IEnKF-Q, conducts a Gauss-Newton minimisation in ensemble space. It combines the propagated analysed ensemble anomalies from the previous cycle and the model noise ensemble anomalies into a single ensemble of anomalies, and by doing so takes an algebraic form similar to that of the IEnKF. The performance of the IEnKF-Q is tested in a number of experiments with the Lorenz-96 model, which show that the method consistently outperforms both the EnKF and the IEnKF naively modified to accommodate additive model noise.
0
1
0
1
0
0
A Bayesian algorithm for detecting identity matches and fraud in image databases
A statistical algorithm for categorizing different types of matches and fraud in image databases is presented. The approach is based on a generative model of a graph representing images and connections between pairs of identities, trained using properties of a matching algorithm between images.
1
0
0
1
0
0