Dataset columns: text (string, 57 to 2.88k characters); labels (sequence of length 6).
Title: "I can assure you [$\ldots$] that it's going to be all right" -- A definition, case for, and survey of algorithmic assurances in human-autonomy trust relationships, Abstract: As technology become more advanced, those who design, use and are otherwise affected by it want to know that it will perform correctly, and understand why it does what it does, and how to use it appropriately. In essence they want to be able to trust the systems that are being designed. In this survey we present assurances that are the method by which users can understand how to trust this technology. Trust between humans and autonomy is reviewed, and the implications for the design of assurances are highlighted. A survey of research that has been performed with respect to assurances is presented, and several key ideas are extracted in order to refine the definition of assurances. Several directions for future research are identified and discussed.
[ 1, 0, 0, 1, 0, 0 ]
Title: Study of Electro-Caloric Effect in Ca and Sn co-Doped BaTiO3 Ceramics, Abstract: The present work deals with the study of structural, ferroelectric, dielectric and electro-caloric effects in lead-free ferroelectric polycrystalline Ba1-xCaxTi0.95Sn0.05O3 (x = 2, 5 and 10 %), i.e., Ca, Sn co-doped BaTiO3 (BTO). Phase purity of the samples is confirmed from X-ray data using Rietveld refinement. 119Sn Mössbauer spectroscopy reveals a homogeneous phase as well as isovalent substitution of Sn at the Ti site. Enhancements in ferroelectric and dielectric properties have been observed. An indirect method based on the Maxwell relation has been used to determine the electro-caloric (EC) effect in the studied ferroelectric ceramics, and the maximum EC coefficient is observed for Ba0.95Ca0.05Ti0.95Sn0.05O3.
[ 0, 1, 0, 0, 0, 0 ]
Title: Motivic Measures through Waldhausen K-Theories, Abstract: In this paper we introduce the notion of a $cdp$-functor to a Waldhausen category. We show that such functors admit extensions that satisfy the excision property, to which we associate Euler-Poincaré characteristics that send the class of a proper scheme to the class of its image. As an application, we show that the Yoneda embedding gives rise to a monoidal proper-fibred Waldhausen category over Noetherian schemes of finite Krull dimensions, with canonical $cdp$-functors to its fibres.
[ 0, 0, 1, 0, 0, 0 ]
Title: A Brief Introduction to Machine Learning for Engineers, Abstract: This monograph aims at providing an introduction to key concepts, algorithms, and theoretical results in machine learning. The treatment concentrates on probabilistic models for supervised and unsupervised learning problems. It introduces fundamental concepts and algorithms by building on first principles, while also exposing the reader to more advanced topics with extensive pointers to the literature, within a unified notation and mathematical framework. The material is organized according to clearly defined categories, such as discriminative and generative models, frequentist and Bayesian approaches, exact and approximate inference, as well as directed and undirected models. This monograph is meant as an entry point for researchers with a background in probability and linear algebra.
[ 1, 0, 0, 1, 0, 0 ]
Title: The Development of Microfluidic Systems within the Harrison Research Team, Abstract: D. Jed Harrison is a full professor at the Department of Chemistry at the University of Alberta. Here he describes the development of microfluidic techniques in his lab from the initial demonstration of an integrated separation system for samples in liquids to the recent development of methods to fabricate crystalline packed beds with very low defect density.
[ 0, 0, 0, 0, 1, 0 ]
Title: Landau phonon-roton theory revisited for superfluid helium 4 and Fermi gases, Abstract: Liquid helium and spin-1/2 cold-atom Fermi gases both exhibit in their superfluid phase two distinct types of excitations, gapless phonons and gapped rotons or fermionic pair-breaking excitations. In the long wavelength limit, revising and extending Landau and Khalatnikov's theory initially developed for helium [ZhETF 19, 637 (1949)], we obtain universal expressions for three- and four-body couplings among these two types of excitations. We calculate the corresponding phonon damping rates at low temperature and compare them to those of a pure phononic origin in high-pressure liquid helium and in strongly interacting Fermi gases, paving the way to experimental observations.
[ 0, 1, 0, 0, 0, 0 ]
Title: Counting points on hyperelliptic curves with explicit real multiplication in arbitrary genus, Abstract: We present a probabilistic Las Vegas algorithm for computing the local zeta function of a genus-$g$ hyperelliptic curve defined over $\mathbb F_q$ with explicit real multiplication (RM) by an order $\mathbb{Z}[\eta]$ in a degree-$g$ totally real number field. It is based on the approaches by Schoof and Pila in a more favorable case where we can split the $\ell$-torsion into $g$ kernels of endomorphisms, as introduced by Gaudry, Kohel, and Smith in genus 2. To deal with these kernels in any genus, we adapt a technique that the author, Gaudry, and Spaenlehauer introduced to model the $\ell$-torsion by structured polynomial systems. Applying this technique to the kernels, the systems we obtain are much smaller and so is the complexity of solving them. Our main result is that there exists a constant $c>0$ such that, for any fixed $g$, this algorithm has expected time and space complexity $O((\log q)^{c})$ as $q$ grows and the characteristic is large enough. We prove that $c\le 8$ and we also conjecture that the result still holds for $c=6$.
[ 1, 0, 0, 0, 0, 0 ]
Title: Minimal surfaces and Schwarz lemma, Abstract: We prove a sharp Schwarz-type inequality for the Weierstrass-Enneper representation of minimal surfaces.
[ 0, 0, 1, 0, 0, 0 ]
Title: Estimating activity cycles with probabilistic methods I. Bayesian Generalised Lomb-Scargle Periodogram with Trend, Abstract: Period estimation is one of the central topics in astronomical time series analysis, where data is often unevenly sampled. Especially challenging are studies of stellar magnetic cycles, as the periods sought are of the same order as the length of the datasets themselves. The datasets often contain trends, the origin of which is either a real long-term cycle or an instrumental effect; these effects cannot be reliably separated, and they can lead to erroneous period determinations if not properly handled. In this study we aim at developing a method that can handle the trends properly, and by performing an extensive set of tests, we show that this is the optimal procedure when contrasted with methods that do not include the trend directly in the model. The effect of the form of the noise (whether constant or heteroscedastic) on the results is also investigated. We introduce a Bayesian Generalised Lomb-Scargle Periodogram with Trend (BGLST), which is a probabilistic linear regression model using Gaussian priors for the coefficients and a uniform prior for the frequency parameter. We show, using synthetic data, that when there is no prior information on whether and to what extent the true model of the data contains a linear trend, the introduced BGLST method is preferable to methods which either detrend the data or leave the data untrended before fitting the periodic model. Whether to use a noise model with non-constant variance depends on the density of the data sampling as well as on the true noise type of the process.
[ 0, 1, 0, 1, 0, 0 ]
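As a minimal illustration of the idea in the abstract above (a sketch only; the actual BGLST method is Bayesian, with Gaussian priors on the coefficients and a uniform prior on the frequency), the following least-squares periodogram includes a linear trend and an offset in the model at every trial frequency, which is the key difference from detrending the data first. All data below are synthetic.

```python
import numpy as np

def trend_periodogram(t, y, freqs):
    """Least-squares power spectrum with an explicit linear trend in the model.

    For each trial frequency the design matrix contains a cosine, a sine,
    a linear trend and a constant offset; power is the fractional reduction
    of the residual sum of squares relative to a trend-plus-offset-only fit.
    """
    X0 = np.column_stack([t, np.ones_like(t)])            # baseline: trend + offset only
    r0 = y - X0 @ np.linalg.lstsq(X0, y, rcond=None)[0]
    ss0 = np.sum(r0**2)

    power = np.empty(len(freqs))
    for i, f in enumerate(freqs):
        w = 2.0 * np.pi * f
        X = np.column_stack([np.cos(w * t), np.sin(w * t), t, np.ones_like(t)])
        r = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
        power[i] = 1.0 - np.sum(r**2) / ss0
    return power

# Unevenly sampled synthetic data: a 25-unit cycle plus a linear trend plus noise.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 100.0, 200))
y = np.sin(2 * np.pi * t / 25.0) + 0.01 * t + 0.3 * rng.standard_normal(t.size)

freqs = np.linspace(0.005, 0.2, 400)
print("recovered period:", 1.0 / freqs[np.argmax(trend_periodogram(t, y, freqs))])
```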
Title: Mean Actor Critic, Abstract: We propose a new algorithm, Mean Actor-Critic (MAC), for discrete-action continuous-state reinforcement learning. MAC is a policy gradient algorithm that uses the agent's explicit representation of all action values to estimate the gradient of the policy, rather than using only the actions that were actually executed. We prove that this approach reduces variance in the policy gradient estimate relative to traditional actor-critic methods. We show empirical results on two control domains and on six Atari games, where MAC is competitive with state-of-the-art policy search algorithms.
[ 1, 0, 0, 1, 0, 0 ]
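A toy sketch of the estimator described in the abstract above, not the authors' implementation: for a single state with a softmax policy it contrasts the standard single-sample actor-critic gradient $Q(s,a)\nabla\log\pi(a|s)$ with the "mean" form $\sum_a Q(s,a)\nabla\pi(a|s)$, which uses the critic's value for every action. All quantities are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
n_actions = 4
logits = rng.standard_normal(n_actions)          # policy parameters for one state
q = rng.standard_normal(n_actions)               # critic's action values Q(s, .)
pi = np.exp(logits - logits.max())
pi /= pi.sum()                                   # softmax policy pi(a|s)

def mac_gradient(pi, q):
    # sum_a Q(s,a) * d pi(a|s) / d logits  (uses every action value)
    return pi * (q - pi @ q)

def sampled_gradient(pi, q, rng):
    # standard actor-critic: Q(s,a) * d log pi(a|s) / d logits for one sampled a
    a = rng.choice(len(pi), p=pi)
    one_hot = np.eye(len(pi))[a]
    return q[a] * (one_hot - pi)

samples = np.array([sampled_gradient(pi, q, rng) for _ in range(20000)])
print("mean of sampled estimator:", samples.mean(axis=0))   # matches the MAC gradient
print("MAC gradient             :", mac_gradient(pi, q))
print("per-sample variance (sampled):", samples.var(axis=0).sum())
print("per-sample variance (MAC)    : 0.0  (deterministic given the critic)")
```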
Title: HAWC Observations Strongly Favor Pulsar Interpretations of the Cosmic-Ray Positron Excess, Abstract: Recent measurements of the Geminga and B0656+14 pulsars by the gamma-ray telescope HAWC (along with earlier measurements by Milagro) indicate that these objects generate significant fluxes of very high-energy electrons. In this paper, we use the very high-energy gamma-ray intensity and spectrum of these pulsars to calculate and constrain their expected contributions to the local cosmic-ray positron spectrum. Among models that are capable of reproducing the observed characteristics of the gamma-ray emission, we find that pulsars invariably produce a flux of high-energy positrons that is similar in spectrum and magnitude to the positron fraction measured by PAMELA and AMS-02. In light of this result, we conclude that it is very likely that pulsars provide the dominant contribution to the long-perplexing cosmic-ray positron excess.
[ 0, 1, 0, 0, 0, 0 ]
Title: Borrowing Treasures from the Wealthy: Deep Transfer Learning through Selective Joint Fine-tuning, Abstract: Deep neural networks require a large amount of labeled training data during supervised learning. However, collecting and labeling so much data might be infeasible in many cases. In this paper, we introduce a source-target selective joint fine-tuning scheme for improving the performance of deep learning tasks with insufficient training data. In this scheme, a target learning task with insufficient training data is carried out simultaneously with another source learning task with abundant training data. However, the source learning task does not use all existing training data. Our core idea is to identify and use a subset of training images from the original source learning task whose low-level characteristics are similar to those from the target learning task, and jointly fine-tune shared convolutional layers for both tasks. Specifically, we compute descriptors from linear or nonlinear filter bank responses on training images from both tasks, and use such descriptors to search for a desired subset of training samples for the source learning task. Experiments demonstrate that our selective joint fine-tuning scheme achieves state-of-the-art performance on multiple visual classification tasks with insufficient training data for deep learning. Such tasks include Caltech 256, MIT Indoor 67, Oxford Flowers 102 and Stanford Dogs 120. In comparison to fine-tuning without a source domain, the proposed method can improve the classification accuracy by 2% - 10% using a single model.
[ 1, 0, 0, 1, 0, 0 ]
Title: The Convex Feasible Set Algorithm for Real Time Optimization in Motion Planning, Abstract: With the development of robotics, there is a growing need for real-time motion planning. However, due to obstacles in the environment, the planning problem is highly non-convex, which makes it difficult to achieve real-time computation using existing non-convex optimization algorithms. This paper introduces the convex feasible set algorithm (CFS), a fast algorithm for non-convex optimization problems that have convex costs and non-convex constraints. The idea is to find a convex feasible set for the original problem and iteratively solve a sequence of subproblems using the convex constraints. The feasibility and the convergence of the proposed algorithm are proved in the paper. The application of this method to motion planning for mobile robots is discussed. The simulations demonstrate the effectiveness of the proposed algorithm.
[ 1, 0, 0, 0, 0, 0 ]
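A minimal sketch of the convex-feasible-set idea from the abstract above, applied to a toy problem (keep a point as close as possible to a reference that lies inside a circular obstacle). At each iteration the non-convex "stay outside the disc" constraint is replaced by the tangent half-space at the current iterate, which is a convex subset of the feasible region, and the convex subproblem is solved. This is an illustrative reading with invented problem data, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize

# Toy problem: move a reference point the least while staying outside a disc.
reference = np.array([0.3, 0.0])          # desired location (inside the obstacle)
obstacle_center = np.array([0.0, 0.0])
obstacle_radius = 1.0

def cost(x):
    # convex cost: squared displacement from the reference
    return np.sum((x - reference) ** 2)

x = np.array([2.0, 0.1])                  # feasible starting guess
for _ in range(30):
    # Convex feasible set: the half-space tangent to the obstacle boundary at the
    # point closest to the current iterate; it lies entirely outside the disc.
    normal = (x - obstacle_center) / np.linalg.norm(x - obstacle_center)
    cons = {"type": "ineq",
            "fun": lambda z, n=normal: n @ (z - obstacle_center) - obstacle_radius}
    x_new = minimize(cost, x, constraints=[cons]).x   # convex subproblem
    if np.linalg.norm(x_new - x) < 1e-8:
        break
    x = x_new

print("solution:", x)   # converges toward (1, 0), the projection of the reference onto the boundary
```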
Title: Continuous-Time Visual-Inertial Odometry for Event Cameras, Abstract: Event cameras are bio-inspired vision sensors that output pixel-level brightness changes instead of standard intensity frames. They offer significant advantages over standard cameras, namely a very high dynamic range, no motion blur, and a latency on the order of microseconds. However, due to the fundamentally different structure of the sensor's output, new algorithms that exploit the high temporal resolution and the asynchronous nature of the sensor are required. Recent work has shown that a continuous-time representation of the event camera pose can deal with the high temporal resolution and asynchronous nature of this sensor in a principled way. In this paper, we leverage such a continuous-time representation to perform visual-inertial odometry with an event camera. This representation allows direct integration of the asynchronous events with microsecond accuracy and the inertial measurements at high frequency. The event camera trajectory is approximated by a smooth curve in the space of rigid-body motions using cubic splines. This formulation significantly reduces the number of variables in trajectory estimation problems. We evaluate our method on real data from several scenes and compare the results against ground truth from a motion-capture system. We show that our method provides improved accuracy over the result of a state-of-the-art visual odometry method for event cameras. We also show that both the map orientation and scale can be recovered accurately by fusing events and inertial data. To the best of our knowledge, this is the first work on visual-inertial fusion with event cameras using a continuous-time framework.
[ 1, 0, 0, 0, 0, 0 ]
Title: Reconstructing a $f(R)$ theory from the $α$-Attractors, Abstract: We show an analogy at high curvature between a $f(R) = R + aR^{n - 1} + bR^2$ theory and the $\alpha$-Attractors. We calculate the expressions of the parameters $a$, $b$ and $n$ as functions of $\alpha$ and the predictions of the model $f(R) = R + aR^{n - 1} + bR^2$ on the scalar spectral index $n_{\rm s}$ and the tensor-to-scalar ratio $r$. We find that the power law correction $R^{n - 1}$ allows for a production of gravitational waves enhanced with respect to the one in the Starobinsky model, while maintaining a viable prediction on $n_{\rm s}$. We numerically reconstruct the full $\alpha$-Attractors class of models testing the goodness of our high-energy approximation $f(R) = R + aR^{n - 1} + bR^2$. Moreover, we also investigate the case of a single power law $f(R) = \gamma R^{2 - \delta}$ theory, with $\gamma$ and $\delta$ free parameters. We calculate analytically the predictions of this model on the scalar spectral index $n_{\rm s}$ and the tensor-to-scalar ratio $r$ and the values of $\delta$ which are allowed from the current observational results. We find that $-0.015 < \delta < 0.016$, confirming once again the excellent agreement between the Starobinsky model and observation.
[ 0, 1, 0, 0, 0, 0 ]
Title: Systems of ergodic BSDE arising in regime switching forward performance processes, Abstract: We introduce and solve a new type of quadratic backward stochastic differential equation systems defined in an infinite time horizon, called \emph{ergodic BSDE systems}. Such systems arise naturally as candidate solutions to characterize forward performance processes and their associated optimal trading strategies in a regime switching market. In addition, we develop a connection between the solution of the ergodic BSDE system and the long-term growth rate of classical utility maximization problems, and use the ergodic BSDE system to study the large time behavior of PDE systems with quadratic growth Hamiltonians.
[ 0, 0, 0, 0, 0, 1 ]
Title: Extraction and Classification of Diving Clips from Continuous Video Footage, Abstract: Due to recent advances in technology, the recording and analysis of video data has become an increasingly common component of athlete training programmes. Today it is incredibly easy and affordable to set up a fixed camera and record athletes in a wide range of sports, such as diving, gymnastics, golf, tennis, etc. However, the manual analysis of the obtained footage is a time-consuming task which involves isolating actions of interest and categorizing them using domain-specific knowledge. In order to automate this kind of task, three challenging sub-problems are often encountered: 1) temporally cropping events/actions of interest from continuous video; 2) tracking the object of interest; and 3) classifying the events/actions of interest. Most previous work has focused on solving just one of the above sub-problems in isolation. In contrast, this paper provides a complete solution to the overall action monitoring task in the context of a challenging real-world exemplar. Specifically, we address the problem of diving classification. This is a challenging problem since the person (diver) of interest typically occupies fewer than 1% of the pixels in each frame. The model is required to learn the temporal boundaries of a dive, even though other divers and bystanders may be in view. Finally, the model must be sensitive to subtle changes in body pose over a large number of frames to determine the classification code. We provide effective solutions to each of the sub-problems which combine to provide a highly functional solution to the task as a whole. The techniques proposed can be easily generalized to video footage recorded from other sports.
[ 1, 0, 0, 0, 0, 0 ]
Title: The inertial Jacquet-Langlands correspondence, Abstract: We give a parametrization of the simple Bernstein components of inner forms of a general linear group over a local field by invariants constructed from type theory, and explicitly describe its behaviour under the Jacquet-Langlands correspondence. Along the way, we prove a conjecture of Broussous, Sécherre and Stevens on preservation of endo-classes.
[ 0, 0, 1, 0, 0, 0 ]
Title: Universal geometric constraints during epithelial jamming, Abstract: As an injury heals, an embryo develops, or a carcinoma spreads, epithelial cells systematically change their shape. In each of these processes cell shape is studied extensively, whereas variation of shape from cell-to-cell is dismissed most often as biological noise. But where do cell shape and variation of cell shape come from? Here we report that cell shape and shape variation are mutually constrained through a relationship that is purely geometrical. That relationship is shown to govern maturation of the pseudostratified bronchial epithelial layer cultured from both non-asthmatic and asthmatic donors as well as formation of the ventral furrow in the epithelial monolayer of the Drosophila embryo in vivo. Across these and other vastly different epithelial systems, cell shape variation collapses to a family of distributions that is common to all and potentially universal. That distribution, in turn, is accounted for quantitatively by a mechanistic theory of cell-cell interaction showing that cell shape becomes progressively less elongated and less variable as the layer becomes progressively more jammed. These findings thus uncover a connection between jamming and geometry that is generic, spanning jammed living and inert systems alike, and demonstrate that proximity of the cell layer to the jammed state is the principal determinant of the most primitive features of epithelial cell shape and shape variation.
[ 0, 1, 0, 0, 0, 0 ]
Title: Semi-independent resampling for particle filtering, Abstract: Among Sequential Monte Carlo (SMC) methods, Sampling Importance Resampling (SIR) algorithms are based on Importance Sampling (IS) and on some resampling-based rejuvenation algorithm which aims at fighting against weight degeneracy. However, whichever resampling technique is used, this mechanism tends to be insufficient when applied to informative or high-dimensional models. In this paper we revisit the rejuvenation mechanism and propose a class of parameterized SIR-based solutions which make it possible to adjust the tradeoff between computational cost and statistical performance.
[ 0, 0, 0, 1, 0, 0 ]
Title: Free Information Flow Benefits Truth Seeking, Abstract: How can we approach the truth in a society? It may depend on various factors. In this paper, using a well-established truth seeking model, we show that persistent free information flow will bring us to the truth. Here free information flow is modeled as environmental random noise that can alter an individual's cognition. Without the random noise, the model predicts that the truth can only be captured by the truth seekers, who possess an active ability to perceive the truth, and by their believers, while the other individuals may stick to falsehood. But under the influence of the random noise, we rigorously prove that even if there is only one truth seeker in the group, all individuals will finally approach the truth.
[ 1, 1, 1, 0, 0, 0 ]
Title: Towards a Holistic Approach to Designing Theory-based Mobile Health Interventions, Abstract: Increasing evidence has shown that theory-based health behavior change interventions are more effective than non-theory-based ones. However, only a small fraction of relevant studies have been theory-based, especially those conducted by non-psychology researchers. On the other hand, many mobile health interventions, even those based on behavioral theories, may still fail in the absence of a user-centered design process. The gap between behavioral theories and user-centered design increases the difficulty of designing and implementing mobile health interventions. To bridge this gap, we propose a holistic approach to designing theory-based mobile health interventions built on existing theories and frameworks of three categories: (1) behavioral theories (e.g., the Social Cognitive Theory, the Theory of Planned Behavior, and the Health Action Process Approach), (2) the technological models and frameworks (e.g., the Behavior Change Techniques, the Persuasive System Design and Behavior Change Support System, and the Just-in-Time Adaptive Interventions), and (3) the user-centered systematic approaches (e.g., the CeHRes Roadmap, Wendel's Approach, and the IDEAS Model). This holistic approach provides researchers a lens to see the whole picture when developing mobile health interventions.
[ 1, 0, 0, 0, 0, 0 ]
Title: Connections between transport of intensity equation and two-dimensional phase unwrapping, Abstract: In a recent publication [Appl. Opt. 55, 2418 (2016)], a method for two-dimensional phase unwrapping based on the transport of intensity equation (TIE) was studied. We wish to show that this approach is associated with the standard least squares phase unwrapping algorithm, but with additional numerical errors.
[ 0, 1, 0, 0, 0, 0 ]
Title: On approximations of Value at Risk and Expected Shortfall involving kurtosis, Abstract: We derive new approximations for the Value at Risk and the Expected Shortfall at high levels of loss distributions with positive skewness and excess kurtosis, and we describe their precision for notable distributions such as the exponential, Pareto type I, lognormal and compound (Poisson) distributions. Our approximations are motivated by extensions of the so-called Normal Power Approximation, used for approximating the cumulative distribution function of a random variable, incorporating not only the skewness but also the kurtosis of the random variable in question. We show the performance of our approximations in numerical examples and we also give comparisons with some known approximations in the literature.
[ 0, 0, 0, 0, 0, 1 ]
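The approximations in the abstract above extend the Normal Power Approximation; as a rough illustration of the general idea of correcting a Gaussian quantile for skewness and excess kurtosis, the sketch below uses the classical Cornish-Fisher expansion (a related correction, not the authors' formula) to approximate Value at Risk for a lognormal loss and compares it with the exact quantile.

```python
import numpy as np
from scipy import stats

def cornish_fisher_var(mean, std, skew, ex_kurt, level=0.99):
    """Quantile (Value at Risk) from the first four moments via Cornish-Fisher."""
    z = stats.norm.ppf(level)
    z_cf = (z
            + (z**2 - 1) * skew / 6
            + (z**3 - 3 * z) * ex_kurt / 24
            - (2 * z**3 - 5 * z) * skew**2 / 36)
    return mean + std * z_cf

# Lognormal "loss" distribution with sigma = 0.3 (positively skewed, fat-tailed).
loss = stats.lognorm(s=0.3)
mean, var, skew, ex_kurt = loss.stats(moments="mvsk")

print("approximate 99% VaR:", cornish_fisher_var(mean, np.sqrt(var), skew, ex_kurt))
print("exact 99% quantile :", loss.ppf(0.99))
```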
Title: DeepDiff: Deep-learning for predicting Differential gene expression from histone modifications, Abstract: Computational methods that predict differential gene expression from histone modification signals are highly desirable for understanding how histone modifications control the functional heterogeneity of cells through influencing differential gene regulation. Recent studies either failed to capture combinatorial effects on differential prediction or primarily only focused on cell type-specific analysis. In this paper, we develop a novel attention-based deep learning architecture, DeepDiff, that provides a unified and end-to-end solution to model and to interpret how dependencies among histone modifications control the differential patterns of gene regulation. DeepDiff uses a hierarchy of multiple Long short-term memory (LSTM) modules to encode the spatial structure of input signals and to model how various histone modifications cooperate automatically. We introduce and train two levels of attention jointly with the target prediction, enabling DeepDiff to attend differentially to relevant modifications and to locate important genome positions for each modification. Additionally, DeepDiff introduces a novel deep-learning based multi-task formulation to use the cell-type-specific gene expression predictions as auxiliary tasks, encouraging richer feature embeddings in our primary task of differential expression prediction. Using data from Roadmap Epigenomics Project (REMC) for ten different pairs of cell types, we show that DeepDiff significantly outperforms the state-of-the-art baselines for differential gene expression prediction. The learned attention weights are validated by observations from previous studies about how epigenetic mechanisms connect to differential gene expression. Codes and results are available at \url{deepchrome.org}
[ 0, 0, 0, 1, 0, 0 ]
Title: t-SNE-CUDA: GPU-Accelerated t-SNE and its Applications to Modern Data, Abstract: Modern datasets and models are notoriously difficult to explore and analyze due to their inherent high dimensionality and massive numbers of samples. Existing visualization methods which employ dimensionality reduction to two or three dimensions are often inefficient and/or ineffective for these datasets. This paper introduces t-SNE-CUDA, a GPU-accelerated implementation of t-distributed Stochastic Neighbor Embedding (t-SNE) for visualizing datasets and models. t-SNE-CUDA significantly outperforms current implementations with 50-700x speedups on the CIFAR-10 and MNIST datasets. These speedups enable, for the first time, visualization of the neural network activations on the entire ImageNet dataset - a feat that was previously computationally intractable. We also demonstrate visualization performance in the NLP domain by visualizing the GloVe embedding vectors. From these visualizations, we can draw interesting conclusions about using the L2 metric in these embedding spaces. t-SNE-CUDA is publicly available at this https URL
[ 1, 0, 0, 1, 0, 0 ]
Title: Coma Cluster Ultra-Diffuse Galaxies Are Not Standard Radio Galaxies, Abstract: Matching members in the Coma cluster catalogue of ultra-diffuse galaxies (UDGs, Yagi et al. 2016) from SUBARU imaging with a very deep radio continuum survey source catalogue of the cluster (Miller et al. 2009) using the Karl G. Jansky Very Large Array (VLA) within a rectangular region of ~ 1.19 square degrees centred on the cluster core reveals matches consistent with random. An overlapping set of 470 UDGs and 696 VLA radio sources in this rectangular area finds 33 matches within a separation of 25 arcsec; dividing the sample into bins with separations bounded by 5 arcsec, 10 arcsec, 20 arcsec and 25 arcsec finds 1, 4, 17 and 11 matches. An analytical model estimate, based on the Poisson probability distribution, of the number of randomly expected matches within these same separation bounds is 1.7, 4.9, 19.4 and 14.2, each respectively consistent with the 95 percent Poisson confidence intervals of the observed values. Dividing the data into five clustercentric annuli of 0.1 degree, and into the four separation bins, finds the same result. This random match of UDGs with VLA sources implies that UDGs are not radio galaxies by the standard definition. Those VLA sources having integrated flux > 1 mJy at 1.4 GHz in Miller et al. (2009) without SDSS galaxy matches are consistent with the known surface density of background radio sources. We briefly explore the possibility that some unresolved VLA sources near UDGs could be young, compact, bright, supernova remnants of type Ia events, possibly in the intracluster volume.
[ 0, 1, 0, 0, 0, 0 ]
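The analytical estimate quoted in the abstract above can be approximately reproduced from the stated numbers alone: the expected number of chance matches in a separation annulus is the source surface density times the annulus area times the number of UDGs. A quick check follows (the small residual differences from the quoted 1.7, 4.9, 19.4 and 14.2 presumably reflect geometric corrections used in the paper).

```python
import numpy as np

n_udg = 470                  # UDGs in the rectangular region
n_src = 696                  # VLA radio sources in the same region
area_deg2 = 1.19             # survey area in square degrees
density = n_src / area_deg2  # source surface density per square degree

# Annuli bounded by 0, 5, 10, 20 and 25 arcsec (converted to degrees).
edges = np.array([0.0, 5.0, 10.0, 20.0, 25.0]) / 3600.0
annulus_area = np.pi * (edges[1:] ** 2 - edges[:-1] ** 2)

expected = n_udg * density * annulus_area
observed = np.array([1, 4, 17, 11])
print("expected random matches per annulus:", np.round(expected, 1))
print("observed matches per annulus       :", observed)
# The expectations land close to the 1.7, 4.9, 19.4 and 14.2 quoted in the abstract,
# and the observed counts are consistent with them, i.e. with purely random matching.
```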
Title: LD-SDS: Towards an Expressive Spoken Dialogue System based on Linked-Data, Abstract: In this work we discuss the related challenges and describe an approach towards the fusion of state-of-the-art technologies from the Spoken Dialogue Systems (SDS) and the Semantic Web and Information Retrieval domains. We envision a dialogue system named LD-SDS that will support advanced, expressive, and engaging user requests, over multiple, complex, rich, and open-domain data sources that will leverage the wealth of the available Linked Data. Specifically, we focus on: a) improving the identification, disambiguation and linking of entities occurring in data sources and user input; b) offering advanced query services for exploiting the semantics of the data, with reasoning and exploratory capabilities; and c) expanding the typical information seeking dialogue model (slot filling) to better reflect real-world conversational search scenarios.
[ 1, 0, 0, 0, 0, 0 ]
Title: On a spiked model for large volatility matrix estimation from noisy high-frequency data, Abstract: Recently, inference about high-dimensional integrated covariance matrices (ICVs) based on noisy high-frequency data has emerged as a challenging problem. In the literature, a pre-averaging estimator (PA-RCov) is proposed to deal with the microstructure noise. Using the large-dimensional random matrix theory, it has been established that the eigenvalue distribution of the PA-RCov matrix is intimately linked to that of the ICV through the Marcenko-Pastur equation. Consequently, the spectrum of the ICV can be inferred from that of the PA-RCov. However, extensive data analyses demonstrate that the spectrum of the PA-RCov is spiked, that is, a few large eigenvalues (spikes) stay away from the others which form a rather continuous distribution with a density function (bulk). Therefore, any inference on the ICVs must take into account this spiked structure. As a methodological contribution, we propose a spiked model for the ICVs where spikes can be inferred from those of the available PA-RCov matrices. The consistency of the inference procedure is established and is checked by extensive simulation studies. In addition, we apply our method to the real data from the US and Hong Kong markets. It is found that our model clearly outperforms the existing one in predicting the existence of spikes and in mimicking the empirical PA-RCov matrices.
[ 0, 0, 0, 1, 0, 0 ]
Title: Graph-based Features for Automatic Online Abuse Detection, Abstract: While online communities have become increasingly important over the years, the moderation of user-generated content is still performed mostly manually. Automating this task is an important step in reducing the financial cost associated with moderation, but the majority of automated approaches strictly based on message content are highly vulnerable to intentional obfuscation. In this paper, we discuss methods for extracting conversational networks based on raw multi-participant chat logs, and we study the contribution of graph features to a classification system that aims to determine if a given message is abusive. The conversational graph-based system yields unexpectedly high performance, with results comparable to those previously obtained with a content-based approach.
[ 1, 0, 0, 0, 0, 0 ]
Title: All Classical Adversary Methods are Equivalent for Total Functions, Abstract: We show that all known classical adversary lower bounds on randomized query complexity are equivalent for total functions, and are equal to the fractional block sensitivity $\text{fbs}(f)$. That includes the Kolmogorov complexity bound of Laplante and Magniez and the earlier relational adversary bound of Aaronson. This equivalence also implies that for total functions, the relational adversary is equivalent to a simpler lower bound, which we call rank-1 relational adversary. For partial functions, we show unbounded separations between $\text{fbs}(f)$ and other adversary bounds, as well as between the adversary bounds themselves. We also show that, for partial functions, fractional block sensitivity cannot give lower bounds larger than $\sqrt{n \cdot \text{bs}(f)}$, where $n$ is the number of variables and $\text{bs}(f)$ is the block sensitivity. Then we exhibit a partial function $f$ that matches this upper bound, $\text{fbs}(f) = \Omega(\sqrt{n \cdot \text{bs}(f)})$.
[ 1, 0, 0, 0, 0, 0 ]
Title: A new approach for short-spacing correction of radio interferometric data sets, Abstract: The short-spacing problem describes the inherent inability of radio-interferometric arrays to measure the integrated flux and structure of diffuse emission associated with extended sources. New interferometric arrays, such as SKA, require solutions to efficiently combine interferometer and single-dish data. We present a new and open source approach for merging single-dish and cleaned interferometric data sets requiring a minimum of data manipulation while offering a rigid flux determination and full high angular resolution. Our approach combines single-dish and cleaned interferometric data in the image domain. This approach is tested for both Galactic and extragalactic HI data sets. Furthermore, a quantitative comparison of our results to commonly used methods is provided. Additionally, for the interferometric data sets of NGC4214 and NGC5055, we study the impact of different imaging parameters as well as their influence on the combination for NGC4214. The approach does not require the raw data (visibilities) or any additional special information such as antenna patterns. This is advantageous especially in the light of upcoming radio surveys with heterogeneous antenna designs.
[ 0, 1, 0, 0, 0, 0 ]
Title: Data and uncertainty in extreme risks - a nonlinear expectations approach, Abstract: Estimation of tail quantities, such as expected shortfall or Value at Risk, is a difficult problem. We show how the theory of nonlinear expectations, in particular the Data-robust expectation introduced in [5], can assist in the quantification of statistical uncertainty for these problems. However, when we are in a heavy-tailed context (in particular when our data are described by a Pareto distribution, as is common in much of extreme value theory), the theory of [5] is insufficient, and requires an additional regularization step which we introduce. By asking whether this regularization is possible, we obtain a qualitative requirement for reliable estimation of tail quantities and risk measures, in a Pareto setting.
[ 0, 0, 1, 1, 0, 0 ]
Title: Flow equations for cold Bose gases, Abstract: We derive flow equations for cold atomic gases with one macroscopically populated energy level. The generator is chosen such that the ground state decouples from all other states in the system as the renormalization group flow progresses. We propose a self-consistent truncation scheme for the flow equations at the level of three-body operators and show how they can be used to calculate the ground state energy of a general $N$-body system. Moreover, we provide a general method to estimate the truncation error in the calculated energies. Finally, we test our scheme by benchmarking to the exactly solvable Lieb-Liniger model and find good agreement for weak and moderate interaction strengths.
[ 0, 1, 0, 0, 0, 0 ]
Title: Fluid flows shaping organism morphology, Abstract: A dynamic self-organized morphology is the hallmark of network-shaped organisms like slime moulds and fungi. These organisms continuously re-organize their flexible, undifferentiated body plans to forage for food. Among them, the slime mould Physarum polycephalum has emerged as a model to investigate how organisms can self-organize their extensive networks and act as a coordinated whole. Cytoplasmic fluid flows through the tubular networks have been identified as a key driver of morphological dynamics. Inquiring how fluid flows can shape living matter from small to large scales opens up many new avenues for research.
[ 0, 0, 0, 0, 1, 0 ]
Title: Frames of exponentials and sub-multitiles in LCA groups, Abstract: In this note we investigate the existence of frames of exponentials for $L^2(\Omega)$ in the setting of LCA groups. Our main result shows that sub-multitiling properties of $\Omega \subset \widehat{G}$ with respect to a uniform lattice $\Gamma$ of $\widehat{G}$ guarantee the existence of a frame of exponentials with frequencies in a finite number of translates of the annihilator of $\Gamma$. We also prove the converse of this result and provide conditions for the existence of these frames. These conditions extend recent results on Riesz bases of exponentials and multitilings to frames.
[ 0, 0, 1, 0, 0, 0 ]
Title: Exponential Random Graph Models with Big Networks: Maximum Pseudolikelihood Estimation and the Parametric Bootstrap, Abstract: With the growth of interest in network data across fields, the Exponential Random Graph Model (ERGM) has emerged as the leading approach to the statistical analysis of network data. ERGM parameter estimation requires the approximation of an intractable normalizing constant. Simulation methods represent the state-of-the-art approach to approximating the normalizing constant, leading to estimation by Monte Carlo maximum likelihood (MCMLE). MCMLE is accurate when a large sample of networks is used to approximate the normalizing constant. However, MCMLE is computationally expensive, and may be prohibitively so if the size of the network is on the order of 1,000 nodes (i.e., one million potential ties) or greater. When the network is large, one option is maximum pseudolikelihood estimation (MPLE). The standard MPLE is simple and fast, but generally underestimates standard errors. We show that a resampling method---the parametric bootstrap---results in accurate coverage probabilities for confidence intervals. We find that bootstrapped MPLE can be run in 1/5th the time of MCMLE. We study the relative performance of MCMLE and MPLE with simulation studies, and illustrate the two different approaches by applying them to a network of bills introduced in the United States Senate.
[ 0, 0, 0, 1, 0, 0 ]
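A minimal sketch of the two ingredients named in the abstract above, for the simplest possible case: MPLE is a logistic regression of each potential tie on its change statistics, and the parametric bootstrap refits the model on networks simulated from the fitted parameters. The model here is dyad-independent (an edges term plus a nodal-covariate term), so simulation from the fitted model is exact; real ERGMs with dyad-dependent terms require MCMC network simulation. The data below are synthetic.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 60
x = rng.standard_normal(n)                       # nodal covariate

# Change statistics for every (undirected) dyad: an edges term and |x_i - x_j|.
iu = np.triu_indices(n, k=1)
stats = np.column_stack([np.ones(len(iu[0])), np.abs(x[iu[0]] - x[iu[1]])])

theta_true = np.array([-2.0, 1.0])
p_true = 1.0 / (1.0 + np.exp(-(stats @ theta_true)))
y = rng.binomial(1, p_true)                      # observed network as dyad indicators

def mple(y, stats):
    # MPLE for a dyad-independent ERGM = logistic regression on change statistics
    fit = LogisticRegression(C=1e6, fit_intercept=False, max_iter=1000)  # C large: ~unpenalised
    return fit.fit(stats, y).coef_.ravel()

theta_hat = mple(y, stats)

# Parametric bootstrap: simulate networks from theta_hat, refit, summarise the spread.
p_hat = 1.0 / (1.0 + np.exp(-(stats @ theta_hat)))
boot = np.array([mple(rng.binomial(1, p_hat), stats) for _ in range(200)])

print("MPLE estimate      :", theta_hat)
print("bootstrap std. err.:", boot.std(axis=0))
```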
Title: Critical behavior of a stochastic anisotropic Bak-Sneppen model, Abstract: In this paper we present our study on the critical behavior of a stochastic anisotropic Bak-Sneppen (saBS) model, in which a parameter $\alpha$ is introduced to describe the interaction strength among nearest species. We estimate the threshold fitness $f_c$ and the critical exponent $\tau_r$ by numerically integrating a master equation for the distribution of avalanche spatial sizes. Other critical exponents are then evaluated from previously known scaling relations. The numerical results are in good agreement with the counterparts yielded by the Monte Carlo simulations. Our results indicate that all saBS models with nonzero interaction strength exhibit self-organized criticality, and fall into the same universality class, by sharing the universal critical exponents.
[ 0, 1, 0, 0, 0, 0 ]
Title: Probability, Statistics and Planet Earth, I: Geotemporal covariances, Abstract: The study of covariances (or positive definite functions) on the sphere (the Earth, in our motivation) goes back to Bochner and Schoenberg (1940--42) and to the first author (1969, 1973), among others. Extending to the geotemporal case (sphere cross line, for position and time) was for a long time an obstacle to geostatistical modelling. The characterisation question here was raised by the authors and Mijatović in 2016, and answered by Berg and Porcu in 2017. Extensions to multiple products (of spheres and lines) follow similarly (Guella, Menegatto and Peron, 2016). We survey results of this type, and related applications e.g. in numerical weather prediction.
[ 0, 1, 0, 1, 0, 0 ]
Title: Gang-GC: Locality-aware Parallel Data Placement Optimizations for Key-Value Storages, Abstract: Many cloud applications rely on fast and non-relational storage to aid in the processing of large amounts of data. Managed runtimes are now widely used to support the execution of several storage solutions of the NoSQL movement, particularly when dealing with big data key-value store-driven applications. The benefits of these runtimes can however be limited by modern parallel throughput-oriented GC algorithms, where related objects have the potential to be dispersed in memory, either in the same or different generations. In the long run this causes more page faults and degradation of locality on system-level memory caches. We propose Gang-GC, an extension to modern heap layouts and to a parallel GC algorithm to promote locality between groups of related objects. This is done without extensive profiling of the applications and in a way that is transparent to the programmer, without the need to use specialized data structures. The heap layout and algorithmic extensions were implemented over the Parallel Scavenge garbage collector of the HotSpot JVM. Using microbenchmarks that capture the architecture of several key-value store databases, we show negligible overhead in frequent operations such as the allocation of new objects and improvements to the access speed of data, supported by lower misses in system-level memory caches. Overall, we show a 6% improvement in the average time of read and update operations and an average decrease of 12.4% in page faults.
[ 1, 0, 0, 0, 0, 0 ]
Title: Good cyclic codes and the uncertainty principle, Abstract: A long-standing problem in the area of error correcting codes asks whether there exist good cyclic codes. Most of the known results point in the direction of a negative answer. The uncertainty principle is a classical result of harmonic analysis asserting that given a non-zero function $f$ on some abelian group, either $f$ or its Fourier transform $\hat{f}$ has large support. In this note, we observe a connection between these two subjects. We point out that even a weak version of the uncertainty principle for fields of positive characteristic would imply that good cyclic codes do exist. We also provide some heuristic arguments supporting that this is indeed the case.
[ 1, 0, 1, 0, 0, 0 ]
Title: An Amateur Drone Surveillance System Based on Cognitive Internet of Things, Abstract: Drones, also known as mini unmanned aerial vehicles, have attracted increasing attention due to their boundless applications in communications, photography, agriculture, surveillance and numerous public services. However, the deployment of amateur drones poses various safety, security and privacy threats. To cope with these challenges, amateur drone surveillance becomes a very important but largely unexplored topic. In this article, we first present a brief survey of state-of-the-art studies on amateur drone surveillance. Then, we propose a vision, named Dragnet, by tailoring the recently emerging cognitive internet of things framework for amateur drone surveillance. Next, we discuss the key enabling techniques for Dragnet in detail, together with the technical challenges and open issues. Furthermore, we provide an exemplary case study on the detection and classification of authorized and unauthorized amateur drones, where, for example, an important event is being held and only authorized drones are allowed to fly over.
[ 1, 0, 0, 0, 0, 0 ]
Title: Two-species boson mixture on a ring: A group theoretic approach to the quantum dynamics of low-energy excitations, Abstract: We investigate the weak excitations of a system made up of two condensates trapped in a Bose-Hubbard ring and coupled by an interspecies repulsive interaction. Our approach, based on the Bogoliubov approximation scheme, shows that one can reduce the problem Hamiltonian to the sum of sub-Hamiltonians $\hat{H}_k$, each one associated to momentum modes $\pm k$. Each $\hat{H}_k$ is then recognized to be an element of a dynamical algebra. This uncommon and remarkable property allows us to present a straightforward diagonalization scheme, to find constants of motion, to highlight the significant microscopic processes, and to compute their time evolution. The proposed solution scheme is applied to a simple but still very interesting closed circuit, the trimer. The dynamics of low-energy excitations, corresponding to weakly-populated vortices, is investigated considering different choices of the initial conditions, and the angular-momentum transfer between the two condensates is evidenced. Finally, the condition for which the spectral collapse and dynamical instability are observed is derived analytically.
[ 0, 1, 0, 0, 0, 0 ]
Title: On a simple model of X_0(N), Abstract: We find plane models for all $X_0(N)$, $N\geq 2$. We observe a map from the modular curve $X_0(N)$ to the projective plane constructed using modular forms of weight $12$ for the group $\Gamma_0(N)$; the Ramanujan function $\Delta$, $\Delta(N\cdot)$ and the third power of the Eisenstein series of weight $4$, $E_4^3$, and prove that this map is a birational equivalence for every $N\geq 2$. The equation of the model is the minimal polynomial of $\Delta(N\cdot)/\Delta$ over $\mathbb{C}(j)$.
[ 0, 0, 1, 0, 0, 0 ]
Title: A Supervised STDP-based Training Algorithm for Living Neural Networks, Abstract: Neural networks have shown great potential in many applications like speech recognition, drug discovery, image classification, and object detection. Neural network models are inspired by biological neural networks, but they are optimized to perform machine learning tasks on digital computers. The proposed work explores the possibilities of using living neural networks in vitro as basic computational elements for machine learning applications. A new supervised STDP-based learning algorithm is proposed in this work, which takes neuron engineering constraints into account. A 74.7% accuracy is achieved on the MNIST benchmark for handwritten digit recognition.
[ 1, 0, 0, 1, 0, 0 ]
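For context on the abstract above, a minimal sketch of the pair-based STDP rule that such algorithms build on (the supervised extension and the neuron-engineering constraints of the paper are not reproduced here; parameter values are illustrative only):

```python
import numpy as np

def stdp_delta_w(t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP weight update for one pre/post spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt >= 0:                                    # pre before post -> potentiation
        return a_plus * np.exp(-dt / tau)
    return -a_minus * np.exp(dt / tau)             # post before pre -> depression

# Accumulate updates over all spike pairs of one synapse.
pre_spikes = np.array([10.0, 55.0, 120.0])
post_spikes = np.array([15.0, 50.0, 150.0])
w = 0.5
for t_pre in pre_spikes:
    for t_post in post_spikes:
        w += stdp_delta_w(t_pre, t_post)
print("updated weight:", w)
```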
Title: Optimal Topology Design for Disturbance Minimization in Power Grids, Abstract: The transient response of power grids to external disturbances influences their stable operation. This paper studies the effect of topology on the linear time-invariant dynamics of different power grids. For a variety of objective functions, a unified framework based on the $H_2$ norm is presented to analyze the robustness to ambient fluctuations. Such objectives include loss reduction, weighted consensus of phase angle deviations, oscillations in nodal frequency, and other graphical metrics. The framework is then used to study the problem of optimal topology design for robust control goals in different grids. For radial grids, the problem is shown to be equivalent to the hard "optimum communication spanning tree" problem in graph theory, and a combinatorial topology construction is presented with bounded approximation gap. Extended to loopy (meshed) grids, a greedy topology design algorithm is discussed. The performance of the topology design algorithms under multiple control objectives is presented on both loopy and radial test grids. Overall, this paper analyzes topology design algorithms on a broad class of control problems in power grids by exploring their combinatorial and graphical properties.
[ 1, 0, 1, 0, 0, 0 ]
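A minimal numerical sketch of the $H_2$-norm computation underlying the framework in the abstract above (a generic LTI example, not the paper's grid model): for stable dynamics $\dot{x} = Ax + Bw$ with performance output $y = Cx$, the squared $H_2$ norm is $\mathrm{trace}(CPC^{\top})$, where $P$ solves the Lyapunov equation $AP + PA^{\top} + BB^{\top} = 0$.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def h2_norm(A, B, C):
    """H2 norm of dx/dt = A x + B w, y = C x (A must be Hurwitz)."""
    # Controllability Gramian P solves A P + P A^T + B B^T = 0.
    P = solve_continuous_lyapunov(A, -B @ B.T)
    return np.sqrt(np.trace(C @ P @ C.T))

# Toy example: two coupled, damped nodes driven by independent disturbances.
A = np.array([[-1.0, 0.5],
              [0.5, -1.5]])
B = np.eye(2)
C = np.array([[1.0, -1.0]])   # performance output: difference between the two states
print("H2 norm:", h2_norm(A, B, C))
```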
Title: Asymptotic behaviour of the Christoffel functions on the Unit Ball in the presence of a Mass on the Sphere, Abstract: We present a family of mutually orthogonal polynomials on the unit ball with respect to an inner product which includes a mass uniformly distributed on the sphere. First, connection formulas relating these multivariate orthogonal polynomials and the classical ball polynomials are obtained. Then, using the representation formula for these polynomials in terms of spherical harmonics analytic properties will be deduced. Finally, we analyze the asymptotic behaviour of the Christoffel functions.
[ 0, 0, 1, 0, 0, 0 ]
Title: Tangent: Automatic Differentiation Using Source Code Transformation in Python, Abstract: Automatic differentiation (AD) is an essential primitive for machine learning programming systems. Tangent is a new library that performs AD using source code transformation (SCT) in Python. It takes numeric functions written in a syntactic subset of Python and NumPy as input, and generates new Python functions which calculate a derivative. This approach to automatic differentiation is different from existing packages popular in machine learning, such as TensorFlow and Autograd. Advantages are that Tangent generates gradient code in Python which is readable by the user, easy to understand and debug, and has no runtime overhead. Tangent also introduces abstractions for easily injecting logic into the generated gradient code, further improving usability.
[ 1, 0, 0, 1, 0, 0 ]
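A minimal usage sketch for the library described in the abstract above; it assumes the package is installed and exposes a `tangent.grad` entry point that returns a new Python function computing the derivative, consistent with the project's public examples, but the exact API should be checked against the documentation.

```python
# Minimal usage sketch (assumes `pip install tangent`; API names taken from the
# project's public examples -- verify against the current documentation).
import tangent

def f(x):
    # an ordinary numeric Python function
    return x * x + 3.0 * x

df = tangent.grad(f)      # source-code transformation: returns a new Python function
print(df(2.0))            # derivative 2*x + 3 evaluated at x = 2 -> 7.0
```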
Title: Ca II K 1-A Emission Index Composites, Abstract: We describe here a procedure to combine measurements in the 393.37 nm Ca II K spectral line taken at different observatories. Measurements from the National Solar Observatory (NSO) Integrated Sunlight Spectrometer (ISS) on the Synoptic Optical Long-term Investigations of the Sun (SOLIS) telescope, the NSO/Sac Peak Ca II K-Line Monitoring Program, and Ca II K filtergrams from Kodaikanal Solar Observatory (KKL) are merged together to create a pair of composites of the Ca II K 1-A emission index. These composites are publicly available from the SOLIS website at this http URL.
[ 0, 1, 0, 0, 0, 0 ]
Title: Application of the Bead Perturbation Technique to a Study of a Tunable 5 GHz Annular Cavity, Abstract: Microwave cavities for a Sikivie-type axion search are subject to several constraints. In the fabrication and operation of such cavities, often used at frequencies where the resonator is highly overmoded, it is important to be able to reliably identify several properties of the cavity. Those include identifying the symmetry of the mode of interest, confirming its form factor, and determining the frequency ranges where mode crossings with intruder levels cause unacceptable admixture, thus leading to the loss of purity of the mode of interest. A simple and powerful diagnostic for mapping out the electric field of a cavity is the bead perturbation technique. While a standard tool in accelerator physics, we have, for the first time, applied this technique to cavities used in the axion search. We report initial results from an extensive study for the initial cavity used in the HAYSTAC experiment. Two effects have been investigated: the role of rod misalignment in mode localization, and mode-mixing at avoided crossings of TM/TE modes. Future work will extend these results by incorporating precision metrology and high-fidelity simulations.
[ 0, 1, 0, 0, 0, 0 ]
Title: Homogeneous Kobayashi-hyperbolic manifolds with high-dimensional group of holomorphic automorphisms, Abstract: We determine all connected homogeneous Kobayashi-hyperbolic manifolds of dimension $n\ge 2$ whose holomorphic automorphism group has dimension $n^2-2$. This result complements an existing classification for automorphism group dimension $n^2-1$ and greater obtained without the homogeneity assumption.
[ 0, 0, 1, 0, 0, 0 ]
Title: New estimates for some functions defined over primes, Abstract: In this paper we first establish new explicit estimates for Chebyshev's $\vartheta$-function. Applying these new estimates, we derive new upper and lower bounds for some functions defined over the prime numbers, for instance the prime counting function $\pi(x)$, which improve the currently best ones. Furthermore, we use the obtained estimates for the prime counting function to give two new results concerning the existence of prime numbers in short intervals.
[ 0, 0, 1, 0, 0, 0 ]
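For readers unfamiliar with the objects in the abstract above: Chebyshev's function is $\vartheta(x) = \sum_{p \le x} \log p$, and the prime number theorem is equivalent to $\vartheta(x) \sim x$. A quick numerical check of $\vartheta(x)/x$ and of the prime counting function $\pi(x)$ (a sketch using sympy):

```python
from math import log
from sympy import primerange, primepi

def theta(x):
    """Chebyshev's theta function: sum of log p over primes p <= x."""
    return sum(log(p) for p in primerange(2, x + 1))

for x in (10**3, 10**4, 10**5):
    print(f"x = {x:>6}: theta(x)/x = {theta(x)/x:.4f}, pi(x) = {primepi(x)}")
```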
Title: Transferrable End-to-End Learning for Protein Interface Prediction, Abstract: While there has been an explosion in the number of experimentally determined, atomically detailed structures of proteins, how to represent these structures in a machine learning context remains an open research question. In this work we demonstrate that representations learned from raw atomic coordinates can outperform hand-engineered structural features while displaying a much higher degree of transferrability. To do so, we focus on a central problem in biology: predicting how proteins interact with one another--that is, which surfaces of one protein bind to which surfaces of another protein. We present Siamese Atomic Surfacelet Network (SASNet), the first end-to-end learning method for protein interface prediction. Despite using only spatial coordinates and identities of atoms as inputs, SASNet outperforms state-of-the-art methods that rely on hand-engineered, high-level features. These results are particularly striking because we train the method entirely on a significantly biased data set that does not account for the fact that proteins deform when binding to one another. Demonstrating the first successful application of transfer learning to atomic-level data, our network maintains high performance, without retraining, when tested on real cases in which proteins do deform.
[ 0, 0, 0, 1, 1, 0 ]
Title: A new astrophysical solution to the Too Big To Fail problem - Insights from the MoRIA simulations, Abstract: We test whether advanced galaxy models and analysis techniques of simulations can alleviate the Too Big To Fail problem (TBTF) for late-type galaxies, which states that isolated dwarf galaxy kinematics imply that dwarfs live in lower-mass halos than is expected in a {\Lambda}CDM universe. Furthermore, we want to explain this apparent tension between theory and observations. To do this, we use the MoRIA suite of dwarf galaxy simulations to investigate whether observational effects are involved in TBTF for late-type field dwarf galaxies. To this end, we create synthetic radio data cubes of the simulated MoRIA galaxies and analyse their HI kinematics as if they were real, observed galaxies. We find that for low-mass galaxies, the circular velocity profile inferred from the HI kinematics often underestimates the true circular velocity profile, as derived directly from the enclosed mass. Fitting the HI kinematics of MoRIA dwarfs with a theoretical halo profile results in a systematic underestimate of the mass of their host halos. We attribute this effect to the fact that the interstellar medium of a low-mass late-type dwarf is continuously stirred by supernova explosions into a vertically puffed-up, turbulent state to the extent that the rotation velocity of the gas is simply no longer a good tracer of the underlying gravitational force field. If this holds true for real dwarf galaxies as well, it implies that they inhabit more massive dark matter halos than would be inferred from their kinematics, solving TBTF for late-type field dwarf galaxies.
[ 0, 1, 0, 0, 0, 0 ]
Title: Friction Variability in Planar Pushing Data: Anisotropic Friction and Data-collection Bias, Abstract: Friction plays a key role in manipulating objects. Most of what we do with our hands, and most of what robots do with their grippers, is based on the ability to control frictional forces. This paper aims to better understand the variability and predictability of planar friction. In particular, we focus on the analysis of a recent dataset on planar pushing by Yu et al. [1] devised to create a data-driven footprint of planar friction. We show in this paper how we can explain a significant fraction of the observed unconventional phenomena, e.g., stochasticity and multi-modality, by combining the effects of material non-homogeneity, anisotropy of friction and biases due to data collection dynamics, hinting that the variability is explainable but inevitable in practice. We introduce an anisotropic friction model and conduct simulation experiments comparing with more standard isotropic friction models. The anisotropic friction between object and supporting surface results in convergence of initial condition during the automated data collection. Numerical results confirm that the anisotropic friction model explains the bias in the dataset and the apparent stochasticity in the outcome of a push. The fact that the data collection process itself can originate biases in the collected datasets, resulting in deterioration of trained models, calls attention to the data collection dynamics.
[ 1, 0, 0, 0, 0, 0 ]
Title: Automatically Annotated Turkish Corpus for Named Entity Recognition and Text Categorization using Large-Scale Gazetteers, Abstract: The Turkish Wikipedia Named-Entity Recognition and Text Categorization (TWNERTC) dataset is a collection of automatically categorized and annotated sentences obtained from Wikipedia. We constructed large-scale gazetteers by using a graph crawler algorithm to extract relevant entity and domain information from a semantic knowledge base, Freebase. The constructed gazetteers contain approximately 300K entities with thousands of fine-grained entity types under 77 different domains. Since automated processes are prone to ambiguity, we also introduce two new content-specific noise reduction methodologies. Moreover, we map fine-grained entity types to the equivalent four coarse-grained types: person, loc, org, misc. Eventually, we construct six different dataset versions and evaluate the quality of annotations by comparing them against ground truths from human annotators. We make these datasets publicly available to support studies on Turkish named-entity recognition (NER) and text categorization (TC).
[ 1, 0, 0, 0, 0, 0 ]
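The abstract above describes mapping thousands of fine-grained, Freebase-derived entity types onto four coarse labels (person, loc, org, misc). A minimal sketch of such a mapping step is shown below; the fine-grained type names are made-up placeholders, not the actual TWNERTC labels.

```python
# Hypothetical fine-to-coarse mapping; the keys are placeholders, not the
# dataset's real Freebase-derived type names.
FINE_TO_COARSE = {
    "politician": "person",
    "football_player": "person",
    "city": "loc",
    "river": "loc",
    "university": "org",
    "company": "org",
    "award": "misc",
}

def to_coarse(fine_type: str) -> str:
    """Map a fine-grained entity type to one of: person, loc, org, misc."""
    return FINE_TO_COARSE.get(fine_type, "misc")

print(to_coarse("river"))  # -> loc
```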
Title: Strong Completeness and the Finite Model Property for Bi-Intuitionistic Stable Tense Logics, Abstract: Bi-Intuitionistic Stable Tense Logics (BIST Logics) are tense logics with a Kripke semantics where worlds in a frame are equipped with a pre-order as well as with an accessibility relation which is 'stable' with respect to this pre-order. BIST logics are extensions of a logic, BiSKt, which arose in the semantic context of hypergraphs, since a special case of the pre-order can represent the incidence structure of a hypergraph. In this paper we provide, for the first time, a Hilbert-style axiomatisation of BiSKt and prove the strong completeness of BiSKt. We go on to prove strong completeness of a class of BIST logics obtained by extending BiSKt by formulas of a certain form. Moreover, we show that the finite model property and decidability hold for a class of BIST logics.
[ 1, 0, 1, 0, 0, 0 ]
Title: Using lab notebooks to examine students' engagement in modeling in an upper-division electronics lab course, Abstract: We demonstrate how students' use of modeling can be examined and assessed using student notebooks collected from an upper-division electronics lab course. The use of models is a ubiquitous practice in undergraduate physics education, but the process of constructing, testing, and refining these models is much less common. We focus our attention on a lab course that has been transformed to engage students in this modeling process during lab activities. The design of the lab activities was guided by a framework that captures the different components of model-based reasoning, called the Modeling Framework for Experimental Physics. We demonstrate how this framework can be used to assess students' written work and to identify how students' model-based reasoning differed from activity to activity. Broadly speaking, we were able to identify the different steps of students' model-based reasoning and assess the completeness of their reasoning. Varying degrees of scaffolding present across the activities had an impact on how thoroughly students would engage in the full modeling process, with more scaffolded activities resulting in more thorough engagement with the process. Finally, we identified that the step in the process with which students had the most difficulty was the comparison between their interpreted data and their model prediction. Students did not use sufficiently sophisticated criteria in evaluating such comparisons, which had the effect of halting the modeling process. This may indicate that in order to engage students further in using model-based reasoning during lab activities, the instructor needs to provide further scaffolding for how students make these types of experimental comparisons. This is an important design consideration for other such courses attempting to incorporate modeling as a learning goal.
[ 0, 1, 0, 0, 0, 0 ]
Title: An accurate finite element method for the numerical solution of isothermal and incompressible flow of viscous fluid, Abstract: Despite its numerical challenges, the finite element method is used to compute viscous fluid flow. A consensus on the cause of numerical problems has been reached; however, general algorithms---allowing a robust and accurate simulation for any process---are still missing. Either a very high computational cost is necessary for a direct numerical solution (DNS) or some limiting procedure is used by adding artificial dissipation to the system. These stabilization methods are useful; however, they are often applied relative to the element size such that a local monotonous convergence is challenging to acquire. We need a computational strategy for solving viscous fluid flow using solely the balance equations. In this work, we present a general procedure for solving fluid mechanics problems without the use of any stabilization or splitting schemes. Hence, its generalization to multiphysics applications is straightforward. We discuss emerging numerical problems and present the methodology rigorously. Implementation is achieved by using open-source packages, and the accuracy and robustness are demonstrated by comparing results with closed-form solutions and by solving well-known benchmark problems.
[ 1, 1, 0, 0, 0, 0 ]
Title: Re-evaluating Evaluation, Abstract: Progress in machine learning is measured by careful evaluation on problems of outstanding common interest. However, the proliferation of benchmark suites and environments, adversarial attacks, and other complications has diluted the basic evaluation model by overwhelming researchers with choices. Deliberate or accidental cherry picking is increasingly likely, and designing well-balanced evaluation suites requires increasing effort. In this paper we take a step back and propose Nash averaging. The approach builds on a detailed analysis of the algebraic structure of evaluation in two basic scenarios: agent-vs-agent and agent-vs-task. The key strength of Nash averaging is that it automatically adapts to redundancies in evaluation data, so that results are not biased by the incorporation of easy tasks or weak agents. Nash averaging thus encourages maximally inclusive evaluation -- since there is no harm (computational cost aside) from including all available tasks and agents.
[ 0, 0, 0, 1, 0, 0 ]
Title: Ranking and Selection as Stochastic Control, Abstract: Under a Bayesian framework, we formulate the fully sequential sampling and selection decision in statistical ranking and selection as a stochastic control problem, and derive the associated Bellman equation. Using value function approximation, we derive an approximately optimal allocation policy. We show that this policy is not only computationally efficient but also possesses both one-step-ahead and asymptotic optimality for independent normal sampling distributions. Moreover, the proposed allocation policy is easily generalizable in the approximate dynamic programming paradigm.
[ 1, 0, 0, 1, 0, 0 ]
Title: An application of $\Gamma$-semigroup techniques to the Green's Theorem, Abstract: The concept of a $\Gamma$-semigroup was introduced by Mridul Kanti Sen in the Int. Symp., New Delhi, 1981. It is well known that the Green's relations play an essential role in studying the structure of semigroups. In the present paper we deal with an application of $\Gamma$-semigroup techniques to the Green's Theorem in an attempt to show the way we pass from semigroups to $\Gamma$-semigroups.
[ 0, 0, 1, 0, 0, 0 ]
Title: Reflection from a multi-species material and its transmitted effective wavenumber, Abstract: We formally deduce closed-form expressions for the transmitted effective wavenumber of a material comprising multiple types of inclusions or particles (multi-species), dispersed in a uniform background medium. The expressions, derived here for the first time, are valid for moderate volume fractions and without restriction on the frequency. We show that the multi-species effective wavenumber is not a straightforward extension of expressions for a single species. Comparisons are drawn with state-of-the-art models in acoustics by presenting numerical results for a concrete and a water-oil emulsion in two dimensions. The limit of when one species is much smaller than the other is also discussed and we determine the background medium felt by the larger species in this limit. Surprisingly, we show that the answer is not the intuitive result predicted by self-consistent multiple scattering theories. The derivation presented here applies to the scalar wave equation with cylindrical or spherical inclusions, with any distribution of sizes, densities, and wave speeds. The reflection coefficient associated with a half-space of multi-species cylindrical inclusions is also formally derived.
[ 0, 1, 0, 0, 0, 0 ]
Title: Colouring perfect graphs with bounded clique number, Abstract: A graph is perfect if the chromatic number of every induced subgraph equals the size of its largest clique, and an algorithm of Grötschel, Lovász, and Schrijver from 1988 finds an optimal colouring of a perfect graph in polynomial time. But this algorithm uses the ellipsoid method, and it is a well-known open question to construct a "combinatorial" polynomial-time algorithm that yields an optimal colouring of a perfect graph. A skew partition in $G$ is a partition $(A,B)$ of $V(G)$ such that $G[A]$ is not connected and $\bar{G}[B]$ is not connected, where $\bar{G}$ denotes the complement graph; and it is balanced if an additional parity condition of paths in $G$ and $\bar{G}$ is satisfied. In this paper we first give a polynomial-time algorithm that, with input a perfect graph, outputs a balanced skew partition if there is one. Then we use this to obtain a combinatorial algorithm that finds an optimal colouring of a perfect graph with clique number $k$, in time that is polynomial for fixed $k$.
[ 1, 0, 0, 0, 0, 0 ]
Title: Resilient Learning-Based Control for Synchronization of Passive Multi-Agent Systems under Attack, Abstract: In this paper, we show synchronization for a group of output passive agents that communicate with each other according to an underlying communication graph to achieve a common goal. We propose a distributed event-triggered control framework that will guarantee synchronization and considerably decrease the required communication load on the band-limited network. We define a general Byzantine attack on the event-triggered multi-agent network system and characterize its negative effects on synchronization. The Byzantine agents are capable of intelligently falsifying their data and manipulating the underlying communication graph by altering their respective control feedback weights. We introduce a decentralized detection framework and analyze its steady-state and transient performances. We propose a way of identifying individual Byzantine neighbors and a learning-based method of estimating the attack parameters. Lastly, we propose learning-based control approaches to mitigate the negative effects of the adversarial attack.
[ 1, 0, 0, 1, 0, 0 ]
Title: EMG-Controlled Hand Teleoperation Using a Continuous Teleoperation Subspace, Abstract: We present a method for EMG-driven teleoperation of non-anthropomorphic robot hands. EMG sensors are appealing as a wearable, inexpensive and unobtrusive way to gather information about the teleoperator's hand pose. However, mapping from EMG signals to the pose space of a non-anthropomorphic hand presents multiple challenges. We present a method that first projects from forearm EMG into a subspace relevant to teleoperation. To increase robustness, we use a model which combines continuous and discrete predictors along different dimensions of this subspace. We then project from the teleoperation subspace into the pose space of the robot hand. We show that our method is effective and intuitive, as it enables novice users to teleoperate pick and place tasks faster and more robustly than state-of-the-art EMG teleoperation methods when applied to a non-anthropomorphic, multi-DOF robot hand.
[ 1, 0, 0, 0, 0, 0 ]
Title: Numerical solutions of Hamiltonian PDEs: a multi-symplectic integrator in light-cone coordinates, Abstract: We introduce a novel numerical method to integrate partial differential equations representing the Hamiltonian dynamics of field theories. It is a multi-symplectic integrator that locally conserves the stress-energy tensor with excellent precision over very long periods. Its major advantage is that it is extremely simple (it is basically a centered box scheme) while remaining locally well defined. We put it to the test in the case of the non-linear wave equation (with quartic potential) in one spatial dimension, and we explain how to implement it in higher dimensions. A formal geometric presentation of the multi-symplectic structure is also given, as well as a technical trick that allows solving the degeneracy problem that potentially accompanies the multi-symplectic structure.
[ 0, 1, 1, 0, 0, 0 ]
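For concreteness, the target equation of the abstract above (the 1D non-linear wave equation with quartic potential, $u_{tt} = u_{xx} - u^3$ for $V(u) = u^4/4$) can be integrated with a plain centered-difference leapfrog scheme as sketched below. This is only an illustrative baseline, not the multi-symplectic box scheme the paper introduces; the grid sizes and initial condition are arbitrary choices.

```python
import numpy as np

# Plain leapfrog (centered differences in space and time) for u_tt = u_xx - u^3
# with periodic boundaries. Illustrative baseline only, not the paper's scheme.
L, N, dt, steps = 2 * np.pi, 256, 1e-3, 5000
dx = L / N
x = np.arange(N) * dx

u_prev = np.exp(-10 * (x - L / 2) ** 2)   # initial displacement
u = u_prev.copy()                          # zero initial velocity

def laplacian(v):
    return (np.roll(v, -1) - 2 * v + np.roll(v, 1)) / dx**2

for _ in range(steps):
    u_next = 2 * u - u_prev + dt**2 * (laplacian(u) - u**3)
    u_prev, u = u, u_next

print("max |u| after integration:", np.abs(u).max())
```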
Title: Magnetite nano-islands on silicon-carbide with graphene, Abstract: X-ray magnetic circular dichroism (XMCD) measurements of iron nano-islands grown on graphene and covered with a Au film for passivation reveal that the oxidation through defects in the Au film spontaneously leads to the formation of magnetite nano-particles (i.e., Fe$_3$O$_4$). The Fe nano-islands (20 and 75 monolayers; MLs) are grown on epitaxial graphene formed by thermally annealing 6H-SiC(0001) and subsequently covered, in the growth chamber, with a nominal 20 layers of Au. Our X-ray absorption spectroscopy and XMCD measurements at applied magnetic fields show that the thin film (20 ML) is totally converted to magnetite whereas the thicker film (75 ML) exhibits properties of magnetite but also those of pure metallic iron. The temperature dependence of the XMCD signal (of both samples) shows a clear transition at $T_{\rm V}\approx 120$ K, consistent with the Verwey transition of bulk magnetite. These results have implications for the synthesis of magnetite nano-crystals and also for their regular arrangements on functional substrates such as graphene.
[ 0, 1, 0, 0, 0, 0 ]
Title: Activation Ensembles for Deep Neural Networks, Abstract: Many activation functions have been proposed in the past, but selecting an adequate one requires trial and error. We propose a new methodology for designing activation functions within a neural network at each layer. We call this technique an "activation ensemble" because it allows the use of multiple activation functions at each layer. This is done by introducing additional variables, $\alpha$, at each activation layer of a network to allow for multiple activation functions to be active at each neuron. By design, activations with larger $\alpha$ values at a neuron have larger magnitudes. Hence, those higher-magnitude activations are "chosen" by the network. We implement the activation ensembles on a variety of datasets using an array of feed-forward and convolutional neural networks. By using the activation ensemble, we achieve superior results compared to traditional techniques. In addition, because of the flexibility of this methodology, we more deeply explore activation functions and the features that they capture.
[ 0, 0, 0, 1, 0, 0 ]
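A minimal sketch of the "activation ensemble" idea described above is given below, assuming one learnable weight per candidate activation function (the paper's $\alpha$ may instead be defined per neuron); it is an illustration of the concept, not the authors' exact model.

```python
import torch
import torch.nn as nn

class ActivationEnsemble(nn.Module):
    """Convex combination of several activation functions with learnable weights."""
    def __init__(self, activations=(torch.relu, torch.tanh, torch.sigmoid)):
        super().__init__()
        self.activations = activations
        self.alpha = nn.Parameter(torch.zeros(len(activations)))  # learnable alpha

    def forward(self, x):
        weights = torch.softmax(self.alpha, dim=0)                 # sums to one
        outs = torch.stack([f(x) for f in self.activations], dim=0)
        return (weights.view(-1, *([1] * x.dim())) * outs).sum(dim=0)

layer = ActivationEnsemble()
y = layer(torch.randn(4, 8))   # output has the same shape as the input
print(y.shape)
```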
Title: Formation of coalition structures as a non-cooperative game, Abstract: Traditionally, the social sciences are interested in structuring people into multiple groups based on their individual preferences. This paper suggests an approach to this problem in the framework of non-cooperative game theory. The definition of the suggested finite game includes a family of nested simultaneous non-cooperative finite games with intra- and inter-coalition externalities. In this family, games differ by the size of the maximum coalition, by partitions, and by coalition structure formation rules. The result of every game consists of a partition of players into coalitions and a payoff profile for every player. Every game in the family has an equilibrium in mixed strategies with possibly more than one coalition. The results of the game differ from those conventionally discussed in cooperative game theory, e.g. the Shapley value, strong Nash, coalition-proof equilibrium, core, kernel, nucleolus. We discuss the following applications of the new game: cooperation as an allocation in one coalition, Bayesian games, stochastic games and construction of a non-cooperative criterion of coalition structure stability for studying focal points.
[ 1, 0, 1, 0, 0, 0 ]
Title: Smooth and Sparse Optimal Transport, Abstract: Entropic regularization is quickly emerging as a new standard in optimal transport (OT). It makes it possible to cast the OT computation as a differentiable and unconstrained convex optimization problem, which can be efficiently solved using the Sinkhorn algorithm. However, entropy keeps the transportation plan strictly positive and therefore completely dense, unlike unregularized OT. This lack of sparsity can be problematic in applications where the transportation plan itself is of interest. In this paper, we explore regularizing the primal and dual OT formulations with a strongly convex term, which corresponds to relaxing the dual and primal constraints with smooth approximations. We show how to incorporate squared $2$-norm and group lasso regularizations within that framework, leading to sparse and group-sparse transportation plans. On the theoretical side, we bound the approximation error introduced by regularizing the primal and dual formulations. Our results suggest that, for the regularized primal, the approximation error can often be smaller with squared $2$-norm than with entropic regularization. We showcase our proposed framework on the task of color transfer.
[ 1, 0, 0, 1, 0, 0 ]
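The dense entropic baseline that the abstract contrasts with its smooth and sparse regularizers can be computed with a few Sinkhorn iterations, as in the sketch below (toy marginals and cost; the squared 2-norm and group-lasso variants from the paper are not shown).

```python
import numpy as np

# Minimal Sinkhorn iteration for entropy-regularized OT on toy data.
rng = np.random.default_rng(0)
n, m, eps = 5, 6, 0.1
a = np.full(n, 1.0 / n)             # source marginal
b = np.full(m, 1.0 / m)             # target marginal
C = rng.random((n, m))              # ground cost matrix
K = np.exp(-C / eps)                # Gibbs kernel

u = np.ones(n)
v = np.ones(m)
for _ in range(500):                # Sinkhorn fixed-point iterations
    u = a / (K @ v)
    v = b / (K.T @ u)

P = u[:, None] * K * v[None, :]     # transport plan: strictly positive, hence dense
print(P.sum(axis=1), P.sum(axis=0)) # marginals approximately equal a and b
```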
Title: The stability and energy exchange mechanism of divergent states with real energy, Abstract: The eigenvalues of a Hermitian Hamiltonian are undoubtedly real. Reality can also be guaranteed by $PT$-symmetry. Both hermiticity and $PT$-symmetric quantum theory impose requirements on the boundary condition. There exists a reverse strategy for investigating the quantum problem: define the eigenvalue as real first and, meanwhile, open the boundary condition. The behavior of the wave function at the boundary then becomes rich in meaning. Such an eigenfunction is generally divergent, and the extent and direction of the divergence are closely linked to the energy. These divergent behaviors can be well described by an energy-space uncertainty relation, which is no longer trivial. The divergent state is unstable and will certainly exchange energy with the outside; the mechanism of this energy exchange lies precisely in the energy-space uncertainty relation, which will benefit dynamic simulation, the many-body problem, and so on. There is no sharp dividing line between this kind of divergent unstable state and the convergent stable state; their relationship is like that of the rational and irrational numbers. In practice, methods based on the laws of divergence offer distinct advantages in speed and accuracy.
[ 0, 1, 0, 0, 0, 0 ]
Title: A bound on partitioning clusters, Abstract: Let $X$ be a finite collection of sets (or "clusters"). We consider the problem of counting the number of ways a cluster $A \in X$ can be partitioned into two disjoint clusters $A_1, A_2 \in X$, thus $A = A_1 \uplus A_2$ is the disjoint union of $A_1$ and $A_2$; this problem arises in the run time analysis of the ASTRAL algorithm in phylogenetic reconstruction. We obtain the bound $$ | \{ (A_1,A_2,A) \in X \times X \times X: A = A_1 \uplus A_2 \} | \leq |X|^{3/p} $$ where $|X|$ denotes the cardinality of $X$, and $p := \log_3 \frac{27}{4} = 1.73814\dots$, so that $\frac{3}{p} = 1.72598\dots$. Furthermore, the exponent $p$ cannot be replaced by any larger quantity. This improves upon the trivial bound of $|X|^2$. The argument relies on establishing a one-dimensional convolution inequality that can be established by elementary calculus combined with some numerical verification. In a similar vein, we show that for any subset $A$ of a discrete cube $\{0,1\}^n$, the additive energy of $A$ (the number of quadruples $(a_1,a_2,a_3,a_4)$ in $A^4$ with $a_1+a_2=a_3+a_4$) is at most $|A|^{\log_2 6}$, and that this exponent is best possible.
[ 0, 0, 1, 0, 0, 0 ]
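A quick numerical check of the exponents quoted in the abstract above, assuming nothing beyond the stated definitions:

```python
import math

p = math.log(27 / 4, 3)     # p = log_3(27/4)
print(p)                    # ~1.73814
print(3 / p)                # ~1.72598, the exponent in the |X|^{3/p} bound
print(math.log2(6))         # ~2.58496, the exponent in the additive-energy bound
```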
Title: X-ray emission from thin plasmas. Collisional ionization for atoms and ions of H to Zn, Abstract: Every observation of astrophysical objects involving a spectrum requires atomic data for the interpretation of line fluxes, line ratios, and the ionization state of the emitting plasma. One of the processes that determines the ionization state is collisional ionization. In this study, an update of the direct ionization (DI) and excitation-autoionization (EA) processes is discussed for the H to Zn-like isoelectronic sequences. In recent years, new laboratory measurements and theoretical calculations of ionization cross sections have become available. We provide an extension and update of previously published reviews in the literature. We include the most recent experimental measurements and fit the cross sections of all individual shells of all ions from H to Zn. These data are described using an extension of Younger's and Mewe's formula, suitable for integration over a Maxwellian velocity distribution to derive the subshell ionization rate coefficients. These ionization rate coefficients are incorporated into the high-resolution plasma code and spectral fitting tool SPEX V3.0.
[ 0, 1, 0, 0, 0, 0 ]
Title: Support Spinor Machine, Abstract: We generalize the support vector machine to a support spinor machine by using the mathematical structure of the wedge product over the vector machine in order to extend the field from a vector field to a spinor field. The separating hyperplane is extended to a Kolmogorov space in time series data, which allows us to extend the structure of the support vector machine to a support tensor machine and a support tensor machine moduli space. Our performance test of the support spinor machine is done on one-class classification of the end point in the physiological state of time series data after empirical mode analysis, and is compared with a support vector machine test. We implement the algorithm of the support spinor machine by using Holo-Hilbert amplitude modulation for fully nonlinear and nonstationary time series data analysis.
[ 1, 0, 0, 1, 0, 0 ]
Title: Various generalizations and deformations of $PSL(2,\mathbb{R})$ surface group representations and their Higgs bundles, Abstract: Recall that the group $PSL(2,\mathbb R)$ is isomorphic to $PSp(2,\mathbb R),\ SO_0(1,2)$ and $PU(1,1).$ The goal of this paper is to examine the various ways in which Fuchsian representations of the fundamental group of a closed surface of genus $g$ into $PSL(2,\mathbb R)$ and their associated Higgs bundles generalize to the higher rank groups $PSL(n,\mathbb R),\ PSp(2n,\mathbb R),\ SO_0(2,n),\ SO_0(n,n+1)$ and $PU(n,n)$. For the $SO_0(n,n+1)$-character variety, we parameterize $n(2g-2)$ new connected components as the total space of vector bundles over appropriate symmetric powers of the surface and study how these components deform in the $SO_0(n,n+2)$-character variety. This generalizes results of Hitchin for $PSL(2,\mathbb R)$.
[ 0, 0, 1, 0, 0, 0 ]
Title: Redundancy schemes for engineering coherent systems via a signature-based approach, Abstract: This paper proposes a signature-based approach for solving redundancy allocation problems when component lifetimes are not only heterogeneous but also dependent. The two common allocation schemes, that is, active and standby redundancies, are considered. If the component lifetimes are independent, the proposed approach leads to simple manipulations. Various illustrative examples are also analysed. This method can be implemented for practical complex engineering systems.
[ 0, 0, 0, 1, 0, 0 ]
Title: Anisotropy effects on Baryogenesis in $f(R)$-Theories of Gravity, Abstract: We study the $f(R)$ theory of gravity in an anisotropic metric and its effect on the baryon-number-to-entropy ratio. The mechanism of gravitational baryogenesis, based on the CPT-violating gravitational interaction between the derivative of the Ricci scalar curvature and the baryon-number current, is investigated in the context of $f(R)$ gravity. Gravitational baryogenesis in the Bianchi type I universe is examined. We survey the effect of the anisotropy of the universe on the baryon asymmetry from the point of view of $f(R)$-theories of gravity and its effect on $n_{b}/s$ in the radiation-dominated regime.
[ 0, 1, 0, 0, 0, 0 ]
Title: Robust and Real-time Deep Tracking Via Multi-Scale Domain Adaptation, Abstract: Visual tracking is a fundamental problem in computer vision. Recently, some deep-learning-based tracking algorithms have been achieving record-breaking performances. However, due to the high complexity of deep learning, most deep trackers suffer from low tracking speed, and thus are impractical in many real-world applications. Some newer deep trackers with smaller network structures achieve high efficiency, but at the cost of a significant decrease in precision. In this paper, we propose to transfer features learned for image classification to the visual tracking domain via convolutional channel reductions. The channel reduction can be viewed simply as an additional convolutional layer with a specific task. It not only extracts useful information for object tracking but also significantly increases the tracking speed. To better accommodate useful features of the target at different scales, the adaptation filters are designed with different sizes. The resulting visual tracker runs in real time and achieves state-of-the-art accuracy in experiments on two widely adopted benchmarks with more than 100 test videos.
[ 1, 0, 0, 0, 0, 0 ]
Title: Simply Exponential Approximation of the Permanent of Positive Semidefinite Matrices, Abstract: We design a deterministic polynomial time $c^n$ approximation algorithm for the permanent of positive semidefinite matrices where $c=e^{\gamma+1}\simeq 4.84$. We write a natural convex relaxation and show that its optimum solution gives a $c^n$ approximation of the permanent. We further show that this factor is asymptotically tight by constructing a family of positive semidefinite matrices.
[ 1, 0, 0, 0, 0, 0 ]
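The approximation constant quoted in the abstract above can be reproduced directly from its definition $c = e^{\gamma+1}$, with $\gamma$ the Euler-Mascheroni constant:

```python
import numpy as np

c = np.exp(np.euler_gamma + 1)   # Euler-Mascheroni constant gamma ~ 0.5772
print(c)                         # ~4.8424, matching the stated c ~= 4.84
```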
Title: Stochastic Variance Reduction for Policy Gradient Estimation, Abstract: Recent advances in policy gradient methods and deep learning have demonstrated their applicability for complex reinforcement learning problems. However, the variance of the performance gradient estimates obtained from the simulation is often excessive, leading to poor sample efficiency. In this paper, we apply stochastic variance reduced gradient descent (SVRG) to the model-free policy gradient to significantly improve sample efficiency. The SVRG estimate is incorporated into a trust-region Newton conjugate gradient framework for policy optimization. On several MuJoCo tasks, our method achieves significantly better performance than state-of-the-art model-free policy gradient methods for robotic continuous control, such as trust region policy optimization (TRPO).
[ 1, 0, 0, 1, 0, 0 ]
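The variance-reduction idea referenced above is the standard SVRG estimator $\nabla f_i(w) - \nabla f_i(\tilde{w}) + \nabla f(\tilde{w})$, where $\tilde{w}$ is a periodically refreshed snapshot. The sketch below applies it to a toy least-squares objective only; the paper's policy-gradient and trust-region Newton CG machinery is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 5
A = rng.normal(size=(n, d))
y = A @ rng.normal(size=d) + 0.1 * rng.normal(size=n)

def grad_i(w, i):                  # gradient of 0.5 * (a_i . w - y_i)^2
    return (A[i] @ w - y[i]) * A[i]

w = np.zeros(d)
lr, epochs, inner = 0.05, 20, n
for _ in range(epochs):
    w_snap = w.copy()
    full_grad = (A.T @ (A @ w_snap - y)) / n       # anchor (full) gradient
    for _ in range(inner):
        i = rng.integers(n)
        g = grad_i(w, i) - grad_i(w_snap, i) + full_grad   # SVRG estimator
        w -= lr * g

print("residual norm:", np.linalg.norm(A @ w - y))
```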
Title: Weighted Low-Rank Approximation of Matrices and Background Modeling, Abstract: We primarily study a special weighted low-rank approximation of matrices and then apply it to solve the background modeling problem. We propose two algorithms for this purpose: one operates in batch mode on the entire data, while the other operates in batch-incremental mode, naturally captures more background variations, and is computationally more effective. Moreover, we propose a robust technique that learns the background frame indices from the data and does not require any training frames. We demonstrate through extensive experiments that by inserting a simple weight into the Frobenius norm, it can be made robust to outliers, similar to the $\ell_1$ norm. Our methods match or outperform several state-of-the-art online and batch background modeling methods in virtually all quantitative and qualitative measures.
[ 1, 0, 0, 0, 0, 0 ]
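One common way to compute a weighted low-rank approximation of the kind mentioned above is alternating weighted least squares, sketched below for an elementwise-weighted objective $\sum_{ij} W_{ij}\,(X_{ij} - (UV^\top)_{ij})^2$. This is a generic illustration under that assumption, not the authors' batch or batch-incremental background-modeling algorithms.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, r = 30, 20, 3
X = rng.normal(size=(m, r)) @ rng.normal(size=(r, n))   # low-rank ground truth
W = rng.random((m, n))                                   # elementwise weights

U = rng.normal(size=(m, r))
V = rng.normal(size=(n, r))
for _ in range(50):
    for i in range(m):                                   # update rows of U
        Wi = np.diag(W[i])
        U[i] = np.linalg.solve(V.T @ Wi @ V + 1e-8 * np.eye(r), V.T @ Wi @ X[i])
    for j in range(n):                                   # update rows of V
        Wj = np.diag(W[:, j])
        V[j] = np.linalg.solve(U.T @ Wj @ U + 1e-8 * np.eye(r), U.T @ Wj @ X[:, j])

print("weighted error:", np.linalg.norm(np.sqrt(W) * (X - U @ V.T)))
```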
Title: Master equation for She-Leveque scaling and its classification in terms of other Markov models of developed turbulence, Abstract: We derive the Markov process equivalent to She-Leveque scaling in homogeneous and isotropic turbulence. The Markov process is a jump process for velocity increments $u(r)$ in scale $r$ in which the jumps occur randomly but with deterministic width in $u$. From its master equation we establish a prescription to simulate the She-Leveque process and compare it with Kolmogorov scaling. To put the She-Leveque process into the context of other established turbulence models on the Markov level, we derive a diffusion process for $u(r)$ from two properties of the Navier-Stokes equation. This diffusion process already includes Kolmogorov scaling, extended self-similarity and a class of random cascade models. The fluctuation theorem of this Markov process implies a "second law" that puts a loose bound on the multipliers of the random cascade models. This bound explicitly allows for inverse cascades, which are necessary to satisfy the fluctuation theorem. By adding a jump process to the diffusion process, we go beyond Kolmogorov scaling and formulate the most general scaling law for the class of Markov processes having both diffusion and jump parts. This Markov scaling law includes She-Leveque scaling and a scaling law derived by Yakhot.
[ 0, 1, 0, 0, 0, 0 ]
Title: Scaling Universality at the Dynamic Vortex Mott Transition, Abstract: The dynamic Mott insulator-to-metal transition (DMT) is key to many intriguing phenomena in condensed matter physics yet it remains nearly unexplored. The cleanest way to observe DMT, without the interference from disorder and other effects inherent to electronic and atomic systems, is to employ the vortex Mott states formed by superconducting vortices in a regular array of pinning sites. The applied electric current delocalizes vortices and drives the dynamic vortex Mott transition. Here we report the critical behavior of the vortex system as it crosses the DMT line, driven by either current or temperature. We find universal scaling with respect to both, expressed by the same scaling function and characterized by a single critical exponent coinciding with the exponent for the thermodynamic Mott transition. We develop a theory for the DMT based on the parity reflection-time reversal (PT) symmetry breaking formalism and find that the nonequilibrium-induced Mott transition has the same critical behavior as thermal Mott transition. Our findings demonstrate the existence of physical systems in which the effect of nonequilibrium drive is to generate effective temperature and hence the transition belonging in the thermal universality class. We establish PT symmetry-breaking as a universal mechanism for out-of-equilibrium phase transitions.
[ 0, 1, 0, 0, 0, 0 ]
Title: Assessment Formats and Student Learning Performance: What is the Relation?, Abstract: Although compelling assessments have been examined in recent years, more studies are required to yield a better understanding of the several ways in which assessment techniques significantly affect the student learning process. Most of the educational research in this area does not consider demographic data, differing methodologies, or notable sample sizes. To address these drawbacks, the objective of our study is to analyse student learning outcomes of multiple assessment formats for a web-facilitated in-class section with an asynchronous online class of a core data communications course in the Undergraduate IT program of the Information Sciences and Technology (IST) Department at George Mason University (GMU). In this study, students were evaluated based on course assessments such as home and lab assignments, skill-based assessments, and traditional midterm and final exams across all four sections of the course. All sections have equivalent content, assessments, and teaching methodologies. Student demographics such as exam type and location preferences are considered in our study to determine whether they have any impact on their learning approach. A large amount of data from the learning management system (LMS), Blackboard (BB) Learn, had to be examined to compare the results of several assessment outcomes for all students within their respective section and amongst students of other sections. To investigate the effect of dissimilar assessment formats on student performance, we had to correlate individual question formats with the overall course grade. The results show that collective assessment formats allow students to be effective in demonstrating their knowledge.
[ 1, 0, 0, 1, 0, 0 ]
Title: Sharing deep generative representation for perceived image reconstruction from human brain activity, Abstract: Decoding human brain activities via functional magnetic resonance imaging (fMRI) has gained increasing attention in recent years. While encouraging results have been reported in brain states classification tasks, reconstructing the details of human visual experience still remains difficult. Two main challenges that hinder the development of effective models are the perplexing fMRI measurement noise and the high dimensionality of limited data instances. Existing methods generally suffer from one or both of these issues and yield dissatisfactory results. In this paper, we tackle this problem by casting the reconstruction of visual stimulus as the Bayesian inference of missing view in a multiview latent variable model. Sharing a common latent representation, our joint generative model of external stimulus and brain response is not only "deep" in extracting nonlinear features from visual images, but also powerful in capturing correlations among voxel activities of fMRI recordings. The nonlinearity and deep structure endow our model with strong representation ability, while the correlations of voxel activities are critical for suppressing noise and improving prediction. We devise an efficient variational Bayesian method to infer the latent variables and the model parameters. To further improve the reconstruction accuracy, the latent representations of testing instances are enforced to be close to that of their neighbours from the training set via posterior regularization. Experiments on three fMRI recording datasets demonstrate that our approach can more accurately reconstruct visual stimuli.
[ 1, 0, 0, 0, 0, 0 ]
Title: Two types of criticality in the brain, Abstract: Neural networks with equal excitatory and inhibitory feedback show high computational performance. They operate close to a critical point characterized by the joint activation of large populations of neurons. Yet, in macaque motor cortex we observe very different dynamics with weak fluctuations on the population level. This suggests that motor cortex operates in a sub-optimal regime. Here we show the opposite: the large dispersion of correlations across neurons is a signature of a rich dynamical repertoire, hidden from macroscopic brain signals, but essential for high performance in such concepts as reservoir computing. Our findings suggest a refinement of the view on criticality in neural systems: network topology and heterogeneity endow the brain with two complementary substrates for critical dynamics of largely different complexities.
[ 0, 1, 0, 0, 0, 0 ]
Title: Topological networks for quantum communication between distant qubits, Abstract: Efficient communication between qubits relies on robust networks which allow for fast and coherent transfer of quantum information. It seems natural to harvest the remarkable properties of systems characterized by topological invariants to perform this task. Here we show that a linear network of coupled bosonic degrees of freedom, characterized by topological bands, can be employed for the efficient exchange of quantum information over large distances. Important features of our setup are that it is robust against quenched disorder, all relevant operations can be performed by global variations of parameters, and the time required for communication between distant qubits approaches linear scaling with their distance. We demonstrate that our concept can be extended to an ensemble of qubits embedded in a two-dimensional network to allow for communication between all of them.
[ 0, 1, 0, 0, 0, 0 ]
Title: How Criticality of Gene Regulatory Networks Affects the Resulting Morphogenesis under Genetic Perturbations, Abstract: Whereas the relationship between criticality of gene regulatory networks (GRNs) and dynamics of GRNs at a single cell level has been vigorously studied, the relationship between the criticality of GRNs and system properties at a higher level has remained unexplored. Here we aim at revealing a potential role of criticality of GRNs at a multicellular level which are hard to uncover through the single-cell-level studies, especially from an evolutionary viewpoint. Our model simulated the growth of a cell population from a single seed cell. All the cells were assumed to have identical GRNs. We induced genetic perturbations to the GRN of the seed cell by adding, deleting, or switching a regulatory link between a pair of genes. From numerical simulations, we found that the criticality of GRNs facilitated the formation of nontrivial morphologies when the GRNs were critical in the presence of the evolutionary perturbations. Moreover, the criticality of GRNs produced topologically homogenous cell clusters by adjusting the spatial arrangements of cells, which led to the formation of nontrivial morphogenetic patterns. Our findings corresponded to an epigenetic viewpoint that heterogeneous and complex features emerge from homogeneous and less complex components through the interactions among them. Thus, our results imply that highly structured tissues or organs in morphogenesis of multicellular organisms might stem from the criticality of GRNs.
[ 0, 0, 0, 0, 1, 0 ]
Title: Task-Driven Convolutional Recurrent Models of the Visual System, Abstract: Feed-forward convolutional neural networks (CNNs) are currently state-of-the-art for object classification tasks such as ImageNet. Further, they are quantitatively accurate models of temporally-averaged responses of neurons in the primate brain's visual system. However, biological visual systems have two ubiquitous architectural features not shared with typical CNNs: local recurrence within cortical areas, and long-range feedback from downstream areas to upstream areas. Here we explored the role of recurrence in improving classification performance. We found that standard forms of recurrence (vanilla RNNs and LSTMs) do not perform well within deep CNNs on the ImageNet task. In contrast, novel cells that incorporated two structural features, bypassing and gating, were able to boost task accuracy substantially. We extended these design principles in an automated search over thousands of model architectures, which identified novel local recurrent cells and long-range feedback connections useful for object recognition. Moreover, these task-optimized ConvRNNs matched the dynamics of neural activity in the primate visual system better than feedforward networks, suggesting a role for the brain's recurrent connections in performing difficult visual behaviors.
[ 0, 0, 0, 0, 1, 0 ]
Title: Self-organization principles of intracellular pattern formation, Abstract: Dynamic patterning of specific proteins is essential for the spatiotemporal regulation of many important intracellular processes in procaryotes, eucaryotes, and multicellular organisms. The emergence of patterns generated by interactions of diffusing proteins is a paradigmatic example for self-organization. In this article we review quantitative models for intracellular Min protein patterns in E. coli, Cdc42 polarization in S. cerevisiae, and the bipolar PAR protein patterns found in C. elegans. By analyzing the molecular processes driving these systems we derive a theoretical perspective on general principles underlying self-organized pattern formation. We argue that intracellular pattern formation is not captured by concepts such as "activators", "inhibitors", or "substrate-depletion". Instead, intracellular pattern formation is based on the redistribution of proteins by cytosolic diffusion, and the cycling of proteins between distinct conformational states. Therefore, mass-conserving reaction-diffusion equations provide the most appropriate framework to study intracellular pattern formation. We conclude that directed transport, e.g. cytosolic diffusion along an actively maintained cytosolic gradient, is the key process underlying pattern formation. Thus the basic principle of self-organization is the establishment and maintenance of directed transport by intracellular protein dynamics.
[ 0, 0, 0, 0, 1, 0 ]
Title: Randomized Near Neighbor Graphs, Giant Components, and Applications in Data Science, Abstract: If we pick $n$ random points uniformly in $[0,1]^d$ and connect each point to its $k-$nearest neighbors, then it is well known that there exists a giant connected component with high probability. We prove that in $[0,1]^d$ it suffices to connect every point to $ c_{d,1} \log{\log{n}}$ points chosen randomly among its $ c_{d,2} \log{n}-$nearest neighbors to ensure a giant component of size $n - o(n)$ with high probability. This construction yields a much sparser random graph with $\sim n \log\log{n}$ instead of $\sim n \log{n}$ edges that has comparable connectivity properties. This result has nontrivial implications for problems in data science where an affinity matrix is constructed: instead of picking the $k-$nearest neighbors, one can often pick $k' \ll k$ random points out of the $k-$nearest neighbors without sacrificing efficiency. This can massively simplify and accelerate computation, we illustrate this with several numerical examples.
[ 1, 0, 0, 1, 0, 0 ]
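A small sketch of the construction described above: connect each point to a handful of points drawn at random from its roughly $\log n$ nearest neighbours, then check for a giant component. The constants c1 and c2 below are arbitrary illustration values, not the paper's $c_{d,1}$ and $c_{d,2}$.

```python
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import connected_components

rng = np.random.default_rng(0)
n, d, c1, c2 = 1000, 2, 2.0, 3.0
X = rng.random((n, d))

k = int(c2 * np.log(n))                       # candidate neighbourhood size ~ log n
kk = max(1, int(c1 * np.log(np.log(n))))      # random picks ~ log log n

D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # brute-force distances
rows, cols = [], []
for i in range(n):
    nbrs = np.argsort(D[i])[1:k + 1]                  # k nearest, excluding self
    picks = rng.choice(nbrs, size=kk, replace=False)  # random subset of them
    rows.extend([i] * kk)
    cols.extend(picks.tolist())

A = coo_matrix((np.ones(len(rows)), (rows, cols)), shape=(n, n))
A = ((A + A.T) > 0).astype(int)                       # symmetrize
n_comp, labels = connected_components(A, directed=False)
giant = np.bincount(labels).max()
print(f"{n_comp} components, giant component size = {giant} / {n}")
```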
Title: Theory of mechano-chemical patterning in biphasic biological tissues, Abstract: The formation of self-organized patterns is key to the morphogenesis of multicellular organisms, although a comprehensive theory of biological pattern formation is still lacking. Here, we propose a minimal model coupling tissue mechanics to morphogen turnover and transport in order to explore new routes to patterning. Our active description couples morphogen reaction-diffusion, which impacts cell differentiation and tissue mechanics, to a two-phase poroelastic rheology, where one tissue phase consists of a poroelastic cell network and the other of a permeating extracellular fluid, which provides a feedback by actively transporting morphogens. While this model encompasses previous theories approximating tissues to inert monophasic media, such as Turing's reaction-diffusion model, it overcomes some of their key limitations by permitting pattern formation via any two-species biochemical kinetics, thanks to mechanically induced cross-diffusion flows. Moreover, we describe a qualitatively different advection-driven Keller-Segel instability which allows for the formation of patterns with a single morphogen, and whose fundamental mode pattern robustly scales with tissue size. We discuss the potential relevance of these findings for tissue morphogenesis.
[ 0, 0, 0, 0, 1, 0 ]
Title: Diffusion time dependence of microstructural parameters in fixed spinal cord, Abstract: Biophysical modelling of diffusion MRI is necessary to provide specific microstructural tissue properties. However, estimating model parameters from data with limited diffusion gradient strength, such as clinical scanners, has proven unreliable due to a shallow optimization landscape. On the other hand, estimation of diffusion kurtosis (DKI) parameters is more robust as the clinical acquisitions typically probe a regime in which the associated 4th order cumulant expansion is adequate; however, its parameters are not microstructurally specific a priori. Given an appropriate biophysical model, its parameters may be connected to DKI parameters, but it was previously shown that at the DKI level, it still does not provide sufficient information to uniquely determine all model parameters. Earlier work has shown that by neglecting axonal dispersion, this parameter degeneracy reduces to the question of whether intra-axonal diffusivity is larger than or smaller than extra-axonal diffusivity. Here we develop a model of diffusion in spinal cord white matter including axonal dispersion and demonstrate stable estimation of all model parameters from DKI. By employing the recently developed fast axisymmetric DKI, we use stimulated echo acquisition mode to collect data over an unprecedented diffusion time range with very narrow diffusion gradient pulses, enabling finely resolved measurements of diffusion time dependence of both net diffusion and kurtosis metrics, as well as model intra- and extra-axonal diffusivities, and axonal dispersion. Our results demonstrate substantial time dependence of all parameters except volume fractions, and the additional time dimension provides support for intra-axonal diffusivity to be larger than extra-axonal diffusivity in spinal cord white matter, although not unambiguously. We compare our findings to predictions from effective medium theory.
[ 0, 1, 0, 0, 0, 0 ]
Title: Optimizing Epistemic Model Checking Using Conditional Independence (Extended Abstract), Abstract: This paper shows that conditional independence reasoning can be applied to optimize epistemic model checking, in which one verifies that a model for a number of agents operating with imperfect information satisfies a formula expressed in a modal multi-agent logic of knowledge. The optimization has been implemented in the epistemic model checker MCK. The paper reports experimental results demonstrating that it can yield multiple orders of magnitude performance improvements.
[ 1, 0, 0, 0, 0, 0 ]
Title: Quickest Localization of Anomalies in Power Grids: A Stochastic Graphical Framework, Abstract: Agile localization of anomalous events plays a pivotal role in enhancing the overall reliability of the grid and avoiding cascading failures. This is especially of paramount significance in the large-scale grids due to their geographical expansions and the large volume of data generated. This paper proposes a stochastic graphical framework, by leveraging which it aims to localize the anomalies with the minimum amount of data. This framework capitalizes on the strong correlation structures observed among the measurements collected from different buses. The proposed approach, at its core, collects the measurements sequentially and progressively updates its decision about the location of the anomaly. The process resumes until the location of the anomaly can be identified with desired reliability. We provide a general theory for the quickest anomaly localization and also investigate its application for quickest line outage localization. Simulations in the IEEE 118-bus model are provided to establish the gains of the proposed approach.
[ 1, 0, 0, 1, 0, 0 ]
Title: The trouble with tensor ring decompositions, Abstract: The tensor train decomposition decomposes a tensor into a "train" of 3-way tensors that are interconnected through the summation of auxiliary indices. The decomposition is stable, has a well-defined notion of rank and enables the user to perform various linear algebra operations on vectors and matrices of exponential size in a computationally efficient manner. The tensor ring decomposition replaces the train by a ring through the introduction of one additional auxiliary variable. This article discusses a major issue with the tensor ring decomposition: its inability to compute an exact minimal-rank decomposition from a decomposition with sub-optimal ranks. Both the contraction operation and Hadamard product are motivated from applications and it is shown through simple examples how the tensor ring-rounding procedure fails to retrieve minimal-rank decompositions with these operations. These observations, together with the already known issue of not being able to find a best low-rank tensor ring approximation to a given tensor indicate that the applicability of tensor rings is severely limited.
[ 1, 0, 0, 0, 0, 0 ]
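For reference, the tensor ring format discussed above can be written down explicitly for three small cores: each core has shape (rank, mode size, rank), and the first and last auxiliary indices are traced against each other, as in the einsum below (ranks and mode sizes are arbitrary illustration values).

```python
import numpy as np

rng = np.random.default_rng(0)
r, n1, n2, n3 = 2, 4, 5, 6
G1 = rng.normal(size=(r, n1, r))
G2 = rng.normal(size=(r, n2, r))
G3 = rng.normal(size=(r, n3, r))

# Full tensor: T[i,j,k] = sum_{a,b,c} G1[a,i,b] * G2[b,j,c] * G3[c,k,a]
T = np.einsum('aib,bjc,cka->ijk', G1, G2, G3)
print(T.shape)   # (4, 5, 6)

# Setting the closing rank to 1 recovers the tensor train special case.
```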
Title: Direct Measurement of Kramers Turnover with a Levitated Nanoparticle, Abstract: Understanding the thermally activated escape from a metastable state is at the heart of important phenomena such as the folding dynamics of proteins, the kinetics of chemical reactions or the stability of mechanical systems. In 1940 Kramers calculated escape rates both in the high damping and the low damping regime and suggested that the rate must have a maximum for intermediate damping. This phenomenon, today known as the Kramers turnover, has triggered important theoretical and numerical studies. However, to date there is no direct and quantitative experimental verification of this turnover. Using a nanoparticle trapped in a bi-stable optical potential we experimentally measure the nanoparticle's transition rates for variable damping and directly resolve the Kramers turnover. Our measurements are in agreement with an analytical model that is free of adjustable parameters.
[ 0, 1, 0, 0, 0, 0 ]
Title: An estimator for the tail-index of graphex processes, Abstract: Sparse exchangeable graphs resolve some pathologies in traditional random graph models, notably, providing models that are both projective and allow sparsity. In a recent paper, Caron and Rousseau (2017) show that for a large class of sparse exchangeable models, the sparsity behaviour is governed by a single parameter: the tail-index of the function (the graphon) that parameterizes the model. We propose an estimator for this parameter and quantify its risk. Our estimator is a simple, explicit function of the degrees of the observed graph. In many situations of practical interest, the risk decays polynomially in the size of the observed graph. Accordingly, the estimator is practically useful for estimation of sparse exchangeable models. We also derive the analogous results for the bipartite sparse exchangeable case.
[ 0, 0, 1, 1, 0, 0 ]
Title: General tête-à-tête graphs and Seifert manifolds, Abstract: Tête-à-tête graphs and relative tête-à-tête graphs were introduced by N. A'Campo in 2010 to model monodromies of isolated plane curves. By recent work of Fdez de Bobadilla, Pe Pereira and the author, they provide a way of modeling the periodic mapping classes that leave some boundary component invariant. In this work we introduce the notion of a general tête-à-tête graph and prove that such graphs model all periodic mapping classes. We also describe algorithms that take a Seifert manifold and a horizontal surface and return a tête-à-tête graph and vice versa.
[ 0, 0, 1, 0, 0, 0 ]