Dataset columns: text (string, lengths 57 to 2.88k characters) and labels (sequence of 6 binary class indicators).
Title: Optimal Control for Constrained Coverage Path Planning, Abstract: The problem of constrained coverage path planning involves a robot trying to cover the maximum area of an environment under constraints that appear as obstacles in the map. Among the several coverage path planning methods, we consider augmenting the linear sweep-based coverage method to achieve minimum-energy/minimum-time optimality along with maximum area coverage. In addition, we study the effects of varying different parameters on the performance of the modified method.
[ 1, 0, 0, 0, 0, 0 ]
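A minimal sketch of the kind of linear (boustrophedon) sweep coverage the abstract refers to, on a hypothetical occupancy grid; the grid, obstacle layout, and energy model are illustrative assumptions, not the paper's actual formulation.

```python
# Hypothetical sketch: boustrophedon (linear sweep) coverage of an occupancy grid.
# The grid, obstacles, and toy energy model are illustrative assumptions only.
import numpy as np

def sweep_coverage(grid):
    """Visit every free cell row by row, alternating sweep direction.

    grid: 2D array, 0 = free, 1 = obstacle. Returns the visiting order."""
    rows, cols = grid.shape
    path = []
    for r in range(rows):
        cs = range(cols) if r % 2 == 0 else range(cols - 1, -1, -1)
        for c in cs:
            if grid[r, c] == 0:        # skip cells blocked by constraints
                path.append((r, c))
    return path

def path_energy(path, move_cost=1.0, turn_cost=2.0):
    """Toy energy model: unit cost per step plus a penalty per change of direction."""
    energy, prev_dir = 0.0, None
    for a, b in zip(path, path[1:]):
        d = (b[0] - a[0], b[1] - a[1])
        energy += move_cost + (turn_cost if prev_dir not in (None, d) else 0.0)
        prev_dir = d
    return energy

grid = np.zeros((5, 8), dtype=int)
grid[2, 3:6] = 1                       # an obstacle band acting as a constraint
path = sweep_coverage(grid)
print(len(path), "cells covered, energy =", path_energy(path))
```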
Title: Annihilating wild kernels, Abstract: Let $L/K$ be a finite Galois extension of number fields with Galois group $G$. Let $p$ be an odd prime and $r>1$ be an integer. Assuming a conjecture of Schneider, we formulate a conjecture that relates special values of equivariant Artin $L$-series at $s=r$ to the compact support cohomology of the étale $p$-adic sheaf $\mathbb Z_p(r)$. We show that our conjecture is essentially equivalent to the $p$-part of the equivariant Tamagawa number conjecture for the pair $(h^0(\mathrm{Spec}(L))(r), \mathbb Z[G])$. We derive from this explicit constraints on the Galois module structure of Banaszak's $p$-adic wild kernels.
[ 0, 0, 1, 0, 0, 0 ]
Title: Trading the Twitter Sentiment with Reinforcement Learning, Abstract: This paper explores the possibility of using alternative data and artificial intelligence techniques to trade stocks. The efficacy of daily Twitter sentiment in predicting stock returns is examined using machine learning methods. Reinforcement learning (Q-learning) is applied to generate the optimal trading policy based on the sentiment signal. The predictive power of the sentiment signal is more significant when the stock price is driven by expectations of company growth and when the company has a major event that draws public attention. The optimal trading strategy based on reinforcement learning outperforms the trading strategy based on the machine learning prediction.
[ 1, 0, 0, 0, 0, 0 ]
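A minimal tabular Q-learning sketch in the spirit of this abstract, trading on a discretized sentiment signal; the synthetic data, state buckets, action set, and reward (next-day return times position) are illustrative assumptions, not the paper's setup.

```python
# Hypothetical sketch: tabular Q-learning on a discretized sentiment signal.
# Synthetic data, state/action definitions, and reward are assumptions.
import numpy as np

rng = np.random.default_rng(0)
sentiment = rng.normal(size=500)                                # stand-in daily sentiment
returns = 0.02 * sentiment + rng.normal(scale=0.01, size=500)   # toy next-day returns

n_states, actions = 3, np.array([-1, 0, 1])      # short / flat / long
states = np.digitize(sentiment, [-0.5, 0.5])     # bucket sentiment into 3 states

Q = np.zeros((n_states, len(actions)))
alpha, gamma, eps = 0.1, 0.95, 0.1

for t in range(len(states) - 1):
    s, s_next = states[t], states[t + 1]
    a = rng.integers(len(actions)) if rng.random() < eps else Q[s].argmax()
    reward = actions[a] * returns[t + 1]          # P&L of holding position a
    Q[s, a] += alpha * (reward + gamma * Q[s_next].max() - Q[s, a])

print("greedy policy per sentiment bucket:", actions[Q.argmax(axis=1)])
```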
Title: Hemihelical local minimizers in prestrained elastic bi-strips, Abstract: We consider a double-layered prestrained elastic rod in the limit of vanishing cross section. For the resulting limit Kirchhoff rod model with intrinsic curvature we prove a supercritical bifurcation result, rigorously showing the emergence of a branch of hemihelical local minimizers from the straight configuration, at a critical force and under clamping at both ends. As a consequence we obtain the existence of nontrivial local minimizers of the $3$-d system.
[ 0, 0, 1, 0, 0, 0 ]
Title: Nonlinear Information Bottleneck, Abstract: Information bottleneck (IB) is a technique for extracting the information in some 'input' random variable that is relevant for predicting some different 'output' random variable. IB works by encoding the input in a compressed 'bottleneck variable' from which the output can then be accurately decoded. IB can be difficult to compute in practice, and has mainly been developed for two limited cases: (1) discrete random variables with small state spaces, and (2) continuous random variables that are jointly Gaussian distributed (in which case the encoding and decoding maps are linear). We propose a method to perform IB in more general domains. Our approach can be applied to discrete or continuous inputs and outputs, and allows for nonlinear encoding and decoding maps. The method uses a novel upper bound on the IB objective, derived using a non-parametric estimator of mutual information and a variational approximation. We show how to implement the method using neural networks and gradient-based optimization, and demonstrate its performance on the MNIST dataset.
[ 1, 0, 0, 1, 0, 0 ]
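For orientation, a sketch of the standard variational IB objective for continuous inputs; note this is the common variational bound (Gaussian encoder, KL bonus), shown only as a reference point, since the paper's own method uses a different, non-parametric bound on I(X;Z).

```python
# Sketch of a standard variational IB objective (PyTorch); layer sizes are
# illustrative, and this is *not* the paper's non-parametric estimator.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VIB(nn.Module):
    def __init__(self, dx, dz, n_classes):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dx, 128), nn.ReLU(), nn.Linear(128, 2 * dz))
        self.dec = nn.Linear(dz, n_classes)

    def forward(self, x, y, beta=1e-3):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterize
        # KL(q(z|x) || N(0, I)) upper-bounds the compression term I(X; Z)
        kl = 0.5 * (mu.pow(2) + logvar.exp() - 1 - logvar).sum(-1).mean()
        ce = F.cross_entropy(self.dec(z), y)                      # bounds -I(Z; Y)
        return ce + beta * kl

model = VIB(dx=784, dz=32, n_classes=10)
loss = model(torch.randn(64, 784), torch.randint(0, 10, (64,)))
print(loss.item())
```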
Title: Top-k Overlapping Densest Subgraphs: Approximation and Complexity, Abstract: A central problem in graph mining is finding dense subgraphs, with several applications in different fields, a notable example being identifying communities. While a lot of effort has been put into the problem of finding a single dense subgraph, only recently has the focus shifted to the problem of finding a set of densest subgraphs. Some approaches aim at finding disjoint subgraphs, while in many real-world networks communities are often overlapping. An approach introduced to find possible overlapping subgraphs is the Top-k Overlapping Densest Subgraphs problem. For a given integer k >= 1, the goal of this problem is to find a set of k densest subgraphs that may share some vertices. The objective function to be maximized takes into account both the density of the subgraphs and the distance between subgraphs in the solution. The Top-k Overlapping Densest Subgraphs problem has been shown to admit a 1/10-factor approximation algorithm. Furthermore, the computational complexity of the problem had been left open. In this paper, we present contributions concerning the approximability and the computational complexity of the problem. For the approximability, we present approximation algorithms that improve the approximation factor to 1/2, when k is bounded by the size of the vertex set, and to 2/3 when k is a constant. For the computational complexity, we show that the problem is NP-hard even when k = 3.
[ 1, 0, 0, 0, 0, 0 ]
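A sketch of the objective this problem maximizes: the sum of subgraph densities plus a pairwise distance term rewarding dissimilar subgraphs. The density and distance functions below follow one common formulation from this line of work and are assumptions here, not necessarily the paper's precise definitions.

```python
# Hypothetical sketch of the Top-k Overlapping Densest Subgraphs objective.
import networkx as nx

def density(G, S):
    """Degree density |E(S)| / |S| of the induced subgraph."""
    return G.subgraph(S).number_of_edges() / len(S)

def objective(G, subgraphs, lam=1.0):
    dens = sum(density(G, S) for S in subgraphs)
    dist = 0.0
    for i, A in enumerate(subgraphs):
        for B in subgraphs[i + 1:]:
            a, b = set(A), set(B)
            # 2 - |A∩B|^2 / (|A||B|) is one distance variant used in this
            # literature; treated here as an assumption.
            dist += 2 - len(a & b) ** 2 / (len(a) * len(b))
    return dens + lam * dist

G = nx.karate_club_graph()
print(objective(G, [list(range(0, 10)), list(range(8, 18))]))
```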
Title: Pinhole induced efficiency variation in perovskite solar cells, Abstract: Process-induced efficiency variation is a major concern for all thin film solar cells, including the emerging perovskite-based solar cells. In this manuscript, we address the effect of pinholes, or process-induced incomplete surface coverage, on the efficiency of such solar cells through detailed numerical simulations. Interestingly, we find that the pinhole size distribution affects the short circuit current and open circuit voltage in contrasting manners. Specifically, while the Jsc is heavily dependent on the pinhole size distribution, surprisingly, the Voc seems to be only nominally affected by it. Further, our simulations also indicate that, with appropriate interface engineering, it is indeed possible to design a nanostructured device with efficiencies comparable to those of ideal planar structures. Additionally, we propose a simple technique based on terminal IV characteristics to estimate the surface coverage in perovskite solar cells.
[ 0, 1, 0, 0, 0, 0 ]
Title: An Overview of Robust Subspace Recovery, Abstract: This paper will serve as an introduction to the body of work on robust subspace recovery. Robust subspace recovery involves finding an underlying low-dimensional subspace in a dataset that is possibly corrupted with outliers. While this problem is easy to state, it has been difficult to develop optimal algorithms due to its underlying nonconvexity. This work emphasizes advantages and disadvantages of proposed approaches and unsolved problems in the area.
[ 0, 0, 0, 1, 0, 0 ]
Title: Interval-type theorems concerning quasi-arithmetic means, Abstract: The family of quasi-arithmetic means has a natural partial order (the point-wise order): $A^{[f]}\le A^{[g]}$ if and only if $A^{[f]}(v)\le A^{[g]}(v)$ for all admissible vectors $v$ ($f$, $g$ and, later, $h$ are continuous and monotone, and defined on a common interval). Therefore one can introduce the notion of interval-type sets (sets $\mathcal{I}$ such that whenever $A^{[f]} \le A^{[h]} \le A^{[g]}$ for some $A^{[f]},\,A^{[g]} \in \mathcal{I}$, then $A^{[h]} \in \mathcal{I}$ too). Our aim is to give examples of interval-type sets involving various smoothness assumptions on the generating functions.
[ 0, 0, 1, 0, 0, 0 ]
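For orientation, a quasi-arithmetic mean is simply $A^{[f]}(v) = f^{-1}\big(\tfrac{1}{n}\sum_i f(v_i)\big)$ for a continuous, monotone generator $f$; a minimal sketch with three classical generators:

```python
# Quasi-arithmetic means for standard illustrative generators f.
import numpy as np

def quasi_arithmetic_mean(v, f, f_inv):
    return f_inv(np.mean(f(np.asarray(v, dtype=float))))

v = [1.0, 2.0, 4.0]
arithmetic = quasi_arithmetic_mean(v, lambda x: x, lambda y: y)
geometric  = quasi_arithmetic_mean(v, np.log, np.exp)
harmonic   = quasi_arithmetic_mean(v, lambda x: 1 / x, lambda y: 1 / y)
# point-wise order: harmonic <= geometric <= arithmetic on positive vectors
print(harmonic, geometric, arithmetic)
```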
Title: Integrable flows between exact CFTs, Abstract: We explicitly construct families of integrable $\sigma$-model actions smoothly interpolating between exact CFTs. In the ultraviolet the theory is the direct product of two current algebras at levels $k_1$ and $k_2$. In the infrared and for the case of two deformation matrices the CFT involves a coset CFT, whereas for a single matrix deformation it is given by the ultraviolet direct product theories but at levels $k_1$ and $k_2-k_1$. For isotropic deformations we demonstrate integrability. In this case we also compute the exact beta-function for the deformation parameters using gravitational methods. This is shown to coincide with previous results obtained using perturbation theory and non-perturbative symmetries.
[ 0, 1, 0, 0, 0, 0 ]
Title: Beyond Shared Hierarchies: Deep Multitask Learning through Soft Layer Ordering, Abstract: Existing deep multitask learning (MTL) approaches align layers shared between tasks in a parallel ordering. Such an organization significantly constricts the types of shared structure that can be learned. The necessity of parallel ordering for deep MTL is first tested by comparing it with permuted ordering of shared layers. The results indicate that a flexible ordering can enable more effective sharing, thus motivating the development of a soft ordering approach, which learns how shared layers are applied in different ways for different tasks. Deep MTL with soft ordering outperforms parallel ordering methods across a series of domains. These results suggest that the power of deep MTL comes from learning highly general building blocks that can be assembled to meet the demands of each task.
[ 1, 0, 0, 1, 0, 0 ]
Title: Relational Algebra for In-Database Process Mining, Abstract: The execution logs that are used for process mining in practice are often obtained by querying an operational database and storing the result in a flat file. Consequently, the data processing power of the database system cannot be used anymore for this information, leading to constrained flexibility in the definition of mining patterns and limited execution performance in mining large logs. Enabling process mining directly on a database - instead of via intermediate storage in a flat file - therefore provides additional flexibility and efficiency. To help facilitate this ideal of in-database process mining, this paper formally defines a database operator that extracts the 'directly follows' relation from an operational database. This operator can both be used to do in-database process mining and to flexibly evaluate process mining related queries, such as: "which employee most frequently changes the 'amount' attribute of a case from one task to the next". We define the operator using the well-known relational algebra that forms the formal underpinning of relational databases. We formally prove equivalence properties of the operator that are useful for query optimization and present time-complexity properties of the operator. By doing so this paper formally defines the necessary relational algebraic elements of a 'directly follows' operator, which are required for implementation of such an operator in a DBMS.
[ 1, 0, 0, 0, 0, 0 ]
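A sketch of what a 'directly follows' extraction over an event log computes, rendered here with pandas rather than the paper's relational-algebra operator; the column names (case_id, task, employee, timestamp) and the tiny log are illustrative assumptions.

```python
# Hypothetical sketch: 'directly follows' pairs from an event log with pandas.
import pandas as pd

log = pd.DataFrame({
    "case_id":  [1, 1, 1, 2, 2],
    "task":     ["register", "check", "pay", "register", "pay"],
    "employee": ["ann", "bob", "ann", "bob", "bob"],
    "timestamp": pd.to_datetime(
        ["2020-01-01", "2020-01-02", "2020-01-03", "2020-01-01", "2020-01-04"]),
})

log = log.sort_values(["case_id", "timestamp"])
nxt = log.groupby("case_id").shift(-1)           # the event that directly follows
pairs = log.assign(next_task=nxt["task"], next_employee=nxt["employee"])
pairs = pairs.dropna(subset=["next_task"])       # last event of a case has no successor
print(pairs[["case_id", "task", "next_task", "employee", "next_employee"]])
```

Queries like "which employee most frequently changes an attribute from one task to the next" then become ordinary group-by aggregations over these pairs.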
Title: Conditional Optimal Stopping: A Time-Inconsistent Optimization, Abstract: Inspired by recent work of P.-L. Lions on conditional optimal control, we introduce a problem of optimal stopping under bounded rationality: the objective is the expected payoff at the time of stopping, conditioned on another event. For instance, an agent may care only about states where she is still alive at the time of stopping, or a company may condition on not being bankrupt. We observe that conditional optimization is time-inconsistent due to the dynamic change of the conditioning probability and develop an equilibrium approach in the spirit of R. H. Strotz' work for sophisticated agents in discrete time. Equilibria are found to be essentially unique in the case of a finite time horizon whereas an infinite horizon gives rise to non-uniqueness and other interesting phenomena. We also introduce a theory which generalizes the classical Snell envelope approach for optimal stopping by considering a pair of processes with Snell-type properties.
[ 0, 0, 0, 0, 0, 1 ]
Title: Principles for optimal cooperativity in allosteric materials, Abstract: Allosteric proteins transmit a mechanical signal induced by binding a ligand. However, understanding the nature of the information transmitted and the architectures optimizing such transmission remains a challenge. Here we show, using an in-silico evolution scheme and theoretical arguments, that architectures optimized to be cooperative, which propagate energy efficiently, qualitatively differ from previously investigated materials optimized to propagate strain. Although we observe a large diversity of functioning cooperative architectures (including shear, hinge and twist designs), they all obey the same principle of displaying a mechanism, i.e., an extended soft mode. We show that its optimal frequency decreases with the spatial extension $L$ of the system as $L^{-d/2}$, where $d$ is the spatial dimension. For these optimal designs, cooperativity decays logarithmically with $L$ for $d=2$ and does not decay for $d=3$. Overall our approach leads to a natural explanation for several observations in allosteric proteins, and indicates an experimental path to test whether allosteric proteins lie close to optimality.
[ 0, 1, 0, 0, 0, 0 ]
Title: Test of SensL SiPM coated with NOL-1 wavelength shifter in liquid xenon, Abstract: A SensL MicroFC-SMT-60035 6x6 mm$^2$ silicon photomultiplier coated with the NOL-1 wavelength shifter has been tested in liquid xenon to detect the 175-nm scintillation light. For comparison, a Hamamatsu vacuum-ultraviolet-sensitive MPPC VUV3 3x3 mm$^2$ was tested under the same conditions. Photodetection efficiencies of $13.1 \pm 2.5$% and $6.0 \pm 1.0$%, respectively, are obtained.
[ 0, 1, 0, 0, 0, 0 ]
Title: Geometrical Insights for Implicit Generative Modeling, Abstract: Learning algorithms for implicit generative models can optimize a variety of criteria that measure how the data distribution differs from the implicit model distribution, including the Wasserstein distance, the Energy distance, and the Maximum Mean Discrepancy criterion. A careful look at the geometries induced by these distances on the space of probability measures reveals interesting differences. In particular, we can establish surprising approximate global convergence guarantees for the $1$-Wasserstein distance, even when the parametric generator has a nonconvex parametrization.
[ 1, 0, 0, 1, 0, 0 ]
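A sketch of simple sample estimates for two of the criteria the abstract compares, the Energy distance and Gaussian-kernel MMD; these are plain V-statistic estimates (diagonal terms included), with the bandwidth and toy data as assumptions.

```python
# Simple (biased V-statistic) estimates of Energy distance and squared MMD.
import numpy as np

def pairwise(X, Y):
    return np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1)

def energy_distance(X, Y):
    return 2 * pairwise(X, Y).mean() - pairwise(X, X).mean() - pairwise(Y, Y).mean()

def mmd2(X, Y, sigma=1.0):
    k = lambda D: np.exp(-D**2 / (2 * sigma**2))   # Gaussian kernel on distances
    return k(pairwise(X, X)).mean() + k(pairwise(Y, Y)).mean() \
        - 2 * k(pairwise(X, Y)).mean()

rng = np.random.default_rng(1)
X, Y = rng.normal(size=(200, 2)), rng.normal(loc=0.5, size=(200, 2))
print(energy_distance(X, Y), mmd2(X, Y))
```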
Title: Scale-dependent perturbations finally detectable by future galaxy surveys and their contribution to cosmological model selection, Abstract: With present geometrical and dynamical observational data, it is very hard to establish, from a statistical perspective, a clear preference among the vast majority of proposed dynamical dark energy and/or modified gravity theories alternative to the $\Lambda$CDM scenario. On the other hand, on scales much smaller than the present Hubble scale, there are possibly detectable differences in the growth of matter perturbations for different perturbation modes, even in the context of the $\Lambda$CDM model. Here, we analyze the evolution of dark matter perturbations in the context of $\Lambda$CDM and some dynamical dark energy models involving future cosmological singularities, such as the sudden future singularity and the finite scale factor singularity. We employ the Newtonian gauge formulation for the derivation of the perturbation equations for the growth function, and we abandon both the sub-Hubble approximation and the slowly varying potential assumption. We apply the Fisher matrix approach to three planned galaxy surveys: DESI, Euclid, and WFIRST-2.4. With these surveys at hand, and using dynamical probes alone, we will achieve multiple goals: $1.$ the improvement in the accuracy of the determination of $f\sigma_{8}$ will make it possible to discriminate between $\Lambda$CDM and the alternative dark energy models even in the scale-independent approach; $2.$ it will finally be possible to test the validity of the scale-independent approximation, and to quantify the need for a scale-dependent approach to the growth of perturbations, in particular using surveys which encompass redshift bins with scales $k<0.005\,h$ Mpc$^{-1}$; $3.$ the scale dependence itself might add much more discriminating power in general, but further advanced surveys will be needed.
[ 0, 1, 0, 0, 0, 0 ]
Title: Quadratic twists of abelian varieties and disparity in Selmer ranks, Abstract: We study the parity of 2-Selmer ranks in the family of quadratic twists of a fixed principally polarised abelian variety over a number field. Specifically, we determine the proportion of twists having odd (resp. even) 2-Selmer rank. This generalises work of Klagsbrun--Mazur--Rubin for elliptic curves and Yu for Jacobians of hyperelliptic curves. Several differences in the statistics arise due to the possibility that the Shafarevich--Tate group (if finite) may have order twice a square. In particular, the statistics for parities of 2-Selmer ranks and 2-infinity Selmer ranks need no longer agree and we describe both.
[ 0, 0, 1, 0, 0, 0 ]
Title: From acquaintance to best friend forever: robust and fine-grained inference of social tie strengths, Abstract: Social networks often provide only a binary perspective on social ties: two individuals are either connected or not. While sometimes external information can be used to infer the strength of social ties, access to such information may be restricted or impractical. Sintos and Tsaparas (KDD 2014) first suggested inferring the strength of social ties from the topology of the network alone, by leveraging the Strong Triadic Closure (STC) property. The STC property states that if person A has strong social ties with persons B and C, B and C must be connected to each other as well (whether with a weak or strong tie). Sintos and Tsaparas exploited this to formulate the inference of the strength of social ties as an NP-hard optimization problem, and proposed two approximation algorithms. We refine and improve upon this landmark paper by developing a sequence of linear relaxations of this problem that can be solved exactly in polynomial time. Usefully, these relaxations infer more fine-grained levels of tie strength (beyond strong and weak), which also makes it possible to avoid arbitrary strong/weak strength assignments when the network topology provides inconclusive evidence. One of the relaxations simultaneously infers the presence of a limited number of STC violations. An extensive theoretical analysis leads to two efficient algorithmic approaches. Finally, our experimental results elucidate the strengths of the proposed approach, and shed new light on the validity of the STC property in practice.
[ 1, 0, 0, 0, 0, 0 ]
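A simplified sketch of the LP relaxation behind STC-based tie-strength inference: maximize total (fractional) tie strength subject to the constraint that the two edges of every open wedge (two edges at a common vertex whose endpoints are not connected) cannot both be fully strong. This is one plain rendering of the idea, not the paper's exact sequence of relaxations.

```python
# LP relaxation: max sum_e x_e  s.t.  x_uv + x_vw <= 1 for every open wedge,
# 0 <= x_e <= 1. Fractional x_e act as fine-grained tie strengths.
import itertools
import networkx as nx
import numpy as np
from scipy.optimize import linprog

G = nx.karate_club_graph()
edges = list(G.edges())
idx = {frozenset(e): i for i, e in enumerate(edges)}

A_ub, b_ub = [], []
for v in G:
    for u, w in itertools.combinations(G.neighbors(v), 2):
        if not G.has_edge(u, w):                      # open wedge centered at v
            row = np.zeros(len(edges))
            row[idx[frozenset((u, v))]] = 1
            row[idx[frozenset((v, w))]] = 1
            A_ub.append(row)
            b_ub.append(1.0)

res = linprog(c=-np.ones(len(edges)), A_ub=np.array(A_ub), b_ub=b_ub,
              bounds=[(0, 1)] * len(edges))           # minimize -sum = maximize sum
strength = dict(zip(edges, res.x))                    # fractional tie strengths
print(min(strength.values()), max(strength.values()))
```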
Title: Conditional bias robust estimation of the total of curve data by sampling in a finite population: an illustration on electricity load curves, Abstract: For marketing or power grid management purposes, many studies based on the analysis of the total electricity consumption curves of groups of customers are now carried out by electricity companies. Aggregated total or mean load curves are estimated using individual curves measured on a fine time grid and collected according to some sampling design. Due to the skewness of the distribution of electricity consumption, these samples often contain outlying curves which may have an important impact on the usual estimation procedures. We introduce several robust estimators of the total consumption curve which are not sensitive to such outlying curves. These estimators are based on the conditional bias approach and robust functional methods. We also derive mean squared error estimators of these robust estimators, and finally we evaluate and compare the performance of the suggested estimators on Irish electricity data.
[ 0, 0, 0, 1, 0, 0 ]
Title: $k$-shellable simplicial complexes and graphs, Abstract: In this paper we show that a $k$-shellable simplicial complex is the expansion of a shellable complex. We prove that the face ring of a pure $k$-shellable simplicial complex satisfies the Stanley conjecture. In this way, by applying the expansion functor to the face ring of a given pure shellable complex, we construct a large class of rings satisfying the Stanley conjecture. Also, by presenting some characterizations of $k$-shellable graphs, we extend some results due to Castrillón-Cruz, Cruz-Estrada and Van Tuyl-Villarreal.
[ 0, 0, 1, 0, 0, 0 ]
Title: The Effect of Phasor Measurement Units on the Accuracy of the Network Estimated Variables, Abstract: The weighted least squares state estimator most commonly used in the power industry is nonlinear and formulated using conventional measurements such as line flow and injection measurements. PMUs (Phasor Measurement Units) are gradually being added to improve the state estimation process. In this paper, the way of incorporating PMU data into the conventional measurement set, as well as a linear formulation of state estimation using only PMU measured data, are investigated. Six cases are tested while gradually increasing the number of PMUs added to the measurement set, and the effect of the PMUs on the accuracy of the estimated variables is illustrated and compared by applying them to the IEEE 14- and 30-bus test systems.
[ 1, 0, 1, 0, 0, 0 ]
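A sketch of the linear weighted-least-squares step that PMU-only state estimation reduces to: with phasors measured directly, the model is $z = Hx + e$ and the estimate is $\hat{x} = (H^T W H)^{-1} H^T W z$. The matrices below are random stand-ins, not an actual network model.

```python
# Linear WLS state estimation sketch with stand-in measurement data.
import numpy as np

rng = np.random.default_rng(2)
n_states, n_meas = 4, 10
H = rng.normal(size=(n_meas, n_states))       # measurement matrix (stand-in)
x_true = rng.normal(size=n_states)            # true state (toy)
sigma = 0.01
z = H @ x_true + rng.normal(scale=sigma, size=n_meas)
W = np.eye(n_meas) / sigma**2                 # weights = inverse error variances

x_hat = np.linalg.solve(H.T @ W @ H, H.T @ W @ z)
print("estimation error:", np.linalg.norm(x_hat - x_true))
```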
Title: Cosmological model discrimination with Deep Learning, Abstract: We demonstrate the potential of Deep Learning methods for measurements of cosmological parameters from density fields, focusing on the extraction of non-Gaussian information. We consider weak lensing mass maps as our dataset. We aim for our method to be able to distinguish between five models, which were chosen to lie along the $\sigma_8$ - $\Omega_m$ degeneracy, and have nearly the same two-point statistics. We design and implement a Deep Convolutional Neural Network (DCNN) which learns the relation between five cosmological models and the mass maps they generate. We develop a new training strategy which ensures the good performance of the network for high levels of noise. We compare the performance of this approach to commonly used non-Gaussian statistics, namely the skewness and kurtosis of the convergence maps. We find that our implementation of the DCNN outperforms the skewness and kurtosis statistics, especially for high noise levels. The network maintains a mean discrimination efficiency greater than $85\%$ even for noise levels corresponding to ground-based lensing observations, while the other statistics perform worse in this setting, achieving an efficiency of less than $70\%$. This demonstrates the ability of CNN-based methods to efficiently break the $\sigma_8$ - $\Omega_m$ degeneracy with weak lensing mass maps alone. We discuss the potential of this method to be applied to the analysis of real weak lensing data and other datasets.
[ 0, 1, 0, 1, 0, 0 ]
Title: Audio Super Resolution using Neural Networks, Abstract: We introduce a new audio processing technique that increases the sampling rate of signals such as speech or music using deep convolutional neural networks. Our model is trained on pairs of low and high-quality audio examples; at test-time, it predicts missing samples within a low-resolution signal in an interpolation process similar to image super-resolution. Our method is simple and does not involve specialized audio processing techniques; in our experiments, it outperforms baselines on standard speech and music benchmarks at upscaling ratios of 2x, 4x, and 6x. The method has practical applications in telephony, compression, and text-to-speech generation; it demonstrates the effectiveness of feed-forward convolutional architectures on an audio generation task.
[ 1, 0, 0, 0, 0, 0 ]
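A minimal 1D convolutional super-resolution sketch in the spirit of this abstract: interpolate a low-resolution waveform to the target rate, then refine it with a small residual conv net. The layer sizes and depth are illustrative assumptions, not the paper's architecture.

```python
# Hypothetical sketch: interpolation + residual 1D conv refinement (PyTorch).
import torch
import torch.nn as nn
import torch.nn.functional as F

class AudioSR(nn.Module):
    def __init__(self, scale=4, channels=64):
        super().__init__()
        self.scale = scale
        self.net = nn.Sequential(
            nn.Conv1d(1, channels, 9, padding=4), nn.ReLU(),
            nn.Conv1d(channels, channels, 9, padding=4), nn.ReLU(),
            nn.Conv1d(channels, 1, 9, padding=4),
        )

    def forward(self, x):                     # x: (batch, 1, time_lowres)
        up = F.interpolate(x, scale_factor=self.scale, mode="linear",
                           align_corners=False)
        return up + self.net(up)              # predict only the missing detail

model = AudioSR(scale=4)
hi = model(torch.randn(2, 1, 2000))           # 2000 -> 8000 samples
print(hi.shape)
```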
Title: A natural framework for isogeometric fluid-structure interaction based on BEM-shell coupling, Abstract: The interaction between thin structures and incompressible Newtonian fluids is ubiquitous both in nature and in industrial applications. In this paper we present an isogeometric formulation of such problems which exploits a boundary integral formulation of the Stokes equations to model the surrounding flow, and a nonlinear Kirchhoff-Love shell theory to model the elastic behaviour of the structure. We propose three different coupling strategies: a monolithic, fully implicit coupling; a staggered, elasticity-driven coupling; and a novel semi-implicit coupling, where the effect of the surrounding flow is incorporated in the nonlinear terms of the solid solver through its damping characteristics. The novel semi-implicit approach is then used to demonstrate the power and robustness of our method, which fits ideally in the isogeometric paradigm, by exploiting only the boundary representation (B-Rep) of the thin structure's middle surface.
[ 0, 1, 1, 0, 0, 0 ]
Title: Inertial Effects on the Stress Generation of Active Fluids, Abstract: Suspensions of self-propelled bodies generate a unique mechanical stress owing to their motility that impacts their large-scale collective behavior. For microswimmers suspended in a fluid with negligible particle inertia, we have shown that the virial `swim stress' is a useful quantity to understand the rheology and nonequilibrium behaviors of active soft matter systems. For larger self-propelled organisms like fish, it is unclear how particle inertia impacts their stress generation and collective movement. Here, we analyze the effects of finite particle inertia on the mechanical pressure (or stress) generated by a suspension of self-propelled bodies. We find that swimmers of all scales generate a unique `swim stress' and `Reynolds stress' that impacts their collective motion. We discover that particle inertia plays a similar role as confinement in overdamped active Brownian systems, where the reduced run length of the swimmers decreases the swim stress and affects the phase behavior. Although the swim and Reynolds stresses vary individually with the magnitude of particle inertia, the sum of the two contributions is independent of particle inertia. This points to an important concept when computing stresses in computer simulations of nonequilibrium systems---the Reynolds and the virial stresses must both be calculated to obtain the overall stress generated by a system.
[ 0, 1, 0, 0, 0, 0 ]
Title: A simple descriptor and predictor for the stable structures of two-dimensional surface alloys, Abstract: Predicting the ground state of alloy systems is challenging due to the large number of possible configurations. We identify an easily computed descriptor for the stability of binary surface alloys, the effective coordination number $\mathscr{E}$. We show that $\mathscr{E}(M)$ correlates well with the enthalpy of mixing, from density functional theory (DFT) calculations on $M_x$Au$_{1-x}$/Ru [$M$ = Mn or Fe]. At each $x$, the most favored structure has the highest [lowest] value of $\mathscr{E}(M)$ if the system is non-magnetic [ferromagnetic]. Importantly, little accuracy is lost upon replacing $\mathscr{E}(M)$ by $\mathscr{E}^*(M)$, which can be quickly computed without performing a DFT calculation, possibly offering a simple alternative to the frequently used cluster expansion method.
[ 0, 1, 0, 0, 0, 0 ]
Title: Fractional integrals and Fourier transforms, Abstract: This paper gives a short survey of some basic results related to estimates of fractional integrals and Fourier transforms. It is closely related to our previous survey papers \cite{K1998} and \cite{K2007}. The main methods used in the paper are based on nonincreasing rearrangements. We give alternative proofs of some results. We also note that the paper is based on the mini-course given by the author at Barcelona University in October 2014.
[ 0, 0, 1, 0, 0, 0 ]
Title: Multi-Level Network Embedding with Boosted Low-Rank Matrix Approximation, Abstract: As opposed to manual feature engineering which is tedious and difficult to scale, network representation learning has attracted a surge of research interests as it automates the process of feature learning on graphs. The learned low-dimensional node vector representation is generalizable and eases the knowledge discovery process on graphs by enabling various off-the-shelf machine learning tools to be directly applied. Recent research has shown that the past decade of network embedding approaches either explicitly factorize a carefully designed matrix to obtain the low-dimensional node vector representation or are closely related to implicit matrix factorization, with the fundamental assumption that the factorized node connectivity matrix is low-rank. Nonetheless, the global low-rank assumption does not necessarily hold especially when the factorized matrix encodes complex node interactions, and the resultant single low-rank embedding matrix is insufficient to capture all the observed connectivity patterns. In this regard, we propose a novel multi-level network embedding framework BoostNE, which can learn multiple network embedding representations of different granularity from coarse to fine without imposing the prevalent global low-rank assumption. The proposed BoostNE method is also in line with the successful gradient boosting method in ensemble learning as multiple weak embeddings lead to a stronger and more effective one. We assess the effectiveness of the proposed BoostNE framework by comparing it with existing state-of-the-art network embedding methods on various datasets, and the experimental results corroborate the superiority of the proposed BoostNE network embedding framework.
[ 1, 0, 0, 1, 0, 0 ]
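A hypothetical sketch of the boosting idea the abstract describes: factorize the node connectivity matrix in stages, each stage fitting a low-rank approximation to the residual left by the previous ones, then concatenate the partial embeddings from coarse to fine. This mirrors the described scheme, not the authors' exact code; the connectivity matrix, rank, and number of levels are assumptions.

```python
# BoostNE-style residual boosting of low-rank embeddings (illustrative).
import numpy as np
import networkx as nx
from sklearn.decomposition import TruncatedSVD

G = nx.karate_club_graph()
M = nx.to_numpy_array(G)
M = M / M.sum(axis=1, keepdims=True)          # stand-in connectivity matrix

levels, rank, parts = 4, 4, []
residual = M.copy()
for _ in range(levels):
    svd = TruncatedSVD(n_components=rank, random_state=0)
    U = svd.fit_transform(residual)           # this level's embedding (n x rank)
    parts.append(U)
    residual = residual - U @ svd.components_  # boost: fit what is left over

embedding = np.hstack(parts)                  # coarse-to-fine representation
print(embedding.shape)
```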
Title: Vision and Challenges for Knowledge Centric Networking (KCN), Abstract: In the creation of a smart future information society, the Internet of Things (IoT) and Content Centric Networking (CCN) break two key barriers for both front-end sensing and back-end networking. However, a piece is still missing from the research that dominates current network traffic control and system management: the knowledge that would penetrate both sensing and networking and glue them together holistically. In this paper, we envision leveraging emerging machine learning and deep learning techniques to create aspects of such knowledge to facilitate these designs. In particular, we can extract knowledge from collected data to facilitate reduced data volume, enhanced system intelligence and interactivity, improved service quality, and communication with better controllability and lower cost. We name such a knowledge-oriented traffic control and networking management paradigm Knowledge Centric Networking (KCN). This paper presents the KCN rationale, KCN benefits, related works and research opportunities.
[ 1, 0, 0, 0, 0, 0 ]
Title: Extracting Geometry from Quantum Spacetime: Obstacles down the road, Abstract: Any acceptable quantum gravity theory must allow us to recover the classical spacetime in the appropriate limit. Moreover, the spacetime geometrical notions should be intrinsically tied to the behavior of the matter that probes them. We consider some difficulties that would be confronted in attempting such an enterprise. The problems we uncover seem to go beyond the technical level to the point of questioning the overall feasibility of the project. The main issue is related to the fact that, in the quantum theory, it is impossible to assign a trajectory to a physical object, and, on the other hand, according to the basic tenets of the geometrization of gravity, it is precisely the trajectories of free localized objects that define the spacetime geometry. The insights gained in this analysis should be relevant to those interested in the quest for a quantum theory of gravity and might help refocus some of its goals.
[ 0, 1, 0, 0, 0, 0 ]
Title: Autoencoder Based Sample Selection for Self-Taught Learning, Abstract: Self-taught learning is a technique that uses a large amount of unlabeled data as source samples to improve task performance on target samples. Compared with other transfer learning techniques, self-taught learning can be applied to a broader set of scenarios due to the loose restrictions on the source data. However, knowledge transferred from source samples that are not sufficiently related to the target domain may negatively influence the target learner, which is referred to as negative transfer. In this paper, we propose a metric for the relevance between a source sample and the target samples. To be more specific, both source and target samples are reconstructed through a single-layer autoencoder with a linear relationship between source samples and target samples simultaneously enforced. An $l_{2,1}$-norm sparsity constraint is imposed on the transformation matrix to identify source samples relevant to the target domain. Source domain samples that are deemed relevant are assigned pseudo-labels reflecting their relevance to target domain samples, and are combined with target samples in order to provide an expanded training set for classifier training. Local data structures are also preserved during source sample selection through spectral graph analysis. Promising results in extensive experiments show the advantages of the proposed approach.
[ 0, 0, 0, 1, 0, 0 ]
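A sketch of the row-sparse relevance idea in isolation: learn a coefficient matrix C minimizing ||Xt - Xs C||_F^2 + lam * sum_i ||C_i||_2 by proximal gradient descent, so that a zero row of C marks a source sample as irrelevant to the target domain. The toy data, step size, and lam are assumptions, and the paper's autoencoder reconstruction and spectral-graph terms are omitted here.

```python
# l_{2,1}-regularized source-sample selection via proximal gradient (toy).
import numpy as np

def prox_l21(C, t):
    """Row-wise soft threshold: shrink each row's norm by t, zeroing small rows."""
    norms = np.linalg.norm(C, axis=1, keepdims=True)
    return C * np.maximum(0.0, 1.0 - t / np.maximum(norms, 1e-12))

rng = np.random.default_rng(3)
d, n_src, n_tgt = 20, 100, 30
Xs = rng.normal(size=(d, n_src))              # source samples as columns
Xt = Xs[:, :10] @ rng.normal(size=(10, n_tgt)) * 0.3   # targets built from 10 sources
C = np.zeros((n_src, n_tgt))
step, lam = 1e-4, 2.0                         # step <= 1/(2||Xs||^2) for stability

for _ in range(500):
    grad = 2 * Xs.T @ (Xs @ C - Xt)           # gradient of the squared error
    C = prox_l21(C - step * grad, step * lam) # then the l_{2,1} proximal step

relevant = np.linalg.norm(C, axis=1) > 1e-6   # nonzero rows = relevant sources
print(relevant.sum(), "source samples kept")
```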
Title: Shutting down or powering up a (U)LIRG? Merger components in distinctly different evolutionary states in IRAS 19115-2124 (The Bird), Abstract: We present new SINFONI near-infrared integral field unit (IFU) spectroscopy and SALT optical long-slit spectroscopy characterising the history of a nearby merging luminous infrared galaxy, dubbed the Bird (IRAS 19115-2124). The NIR line-ratio maps of the IFU data-cubes and stellar population fitting of the SALT spectra now allow dating of the star formation (SF) over the triple system uncovered from our previous adaptive optics data. The distinct components separate very clearly in a line-ratio diagnostic diagram. An off-nuclear pure starburst dominates the current SF of the Bird with 60-70% of the total, with a 4-7 Myr age, and signs of a fairly constant long-term star formation of the underlying stellar population. The most massive nucleus, in contrast, is quenched with a starburst age of >40 Myr and shows hints of budding AGN activity. The secondary massive nucleus is at an intermediate stage. The two major components have a population of older stars, consistent with a starburst triggered 1 Gyr ago in a first encounter. The simplest explanation of the history is that of a triple merger, where the strongly star forming component has joined later. We detect multiple gas flows in different phases. The Bird offers an opportunity to witness multiple stages of galaxy evolution in the same system; triggering as well as quenching of SF, and the early appearance of AGN activity. It also serves as a cautionary note on interpretations of observations with lower spatial resolution and/or without infrared data. At high-redshift the system would look like a clumpy starburst with crucial pieces of its puzzle hidden, in danger of misinterpretations.
[ 0, 1, 0, 0, 0, 0 ]
Title: Asymptotics to all orders of the Hurwitz zeta function, Abstract: We present several formulae for the large-$t$ asymptotics of the modified Hurwitz zeta function $\zeta_1(x,s),x>0,s=\sigma+it,0<\sigma\leq1,t>0,$ which are valid to all orders. In the case of $x=0$, these formulae reduce to the asymptotic expressions recently obtained for the Riemann zeta function, which include the classical results of Siegel as a particular case.
[ 0, 0, 1, 0, 0, 0 ]
Title: Distributed Stochastic Approximation with Local Projections, Abstract: We propose a distributed version of a stochastic approximation scheme constrained to remain in the intersection of a finite family of convex sets. The projection to the intersection of these sets is also computed in a distributed manner, and a `nonlinear gossip' mechanism is employed to blend the projection iterations with the stochastic approximation using multiple time scales.
[ 1, 0, 0, 0, 0, 0 ]
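A toy sketch of the overall flavor: each agent takes a noisy step toward a common objective, gossips (averages) with its ring neighbours, and projects onto its own convex set, so the iterates are driven into the intersection of all sets. The ring topology, step sizes, and ball/box sets are illustrative assumptions, and this simple averaging stands in for the paper's nonlinear gossip mechanism.

```python
# Toy distributed stochastic approximation with local projections.
import numpy as np

rng = np.random.default_rng(4)
n_agents, dim, T = 6, 2, 2000
target = np.array([1.5, -0.5])                 # common objective: reach this point

def project(i, x):
    """Agent i's local set: a ball of radius 2 for even i, a box for odd i."""
    if i % 2 == 0:
        r = np.linalg.norm(x)
        return x if r <= 2.0 else 2.0 * x / r
    return np.clip(x, -1.0, 1.0)

x = rng.normal(size=(n_agents, dim))
for t in range(1, T + 1):
    a = 1.0 / t                                # decaying SA step (slow time scale)
    # gossip on a ring: mix with both neighbours (fast averaging time scale)
    mixed = 0.5 * x + 0.25 * (np.roll(x, 1, axis=0) + np.roll(x, -1, axis=0))
    noisy_grad = (mixed - target) + rng.normal(scale=0.5, size=mixed.shape)
    x = np.array([project(i, mixed[i] - a * noisy_grad[i]) for i in range(n_agents)])

print("consensus iterate:", x.mean(axis=0))    # lands in ball ∩ box, near target
```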
Title: Ultra-Low Noise Amplifier Design for Magnetic Resonance Imaging systems, Abstract: This paper demonstrates the design and development of an ultra-low-noise amplifier (LNA) intended to increase the sensitivity of existing Magnetic Resonance Imaging (MRI) systems. The LNA design is fabricated and characterized, including its matching and input high-power protection circuits. The expected improvement in SNR relative to room-temperature operation is estimated. A cascode amplifier topology is chosen for the high-performance LNA design and for fabrication. The fabricated PCB layout of the cascode LNA is tested using a spectrum analyser and a vector network analyser. At room temperature, the measured performance is as follows: the operating frequency is 32 MHz, the noise figure is 0.45 dB at a source impedance of 50 $\Omega$, the gain is 11.6 dB, the output return loss is 21.1 dB, the input return loss is 0.12 dB, and the amplifier is unconditionally stable up to 6 GHz. The goal of the research is achieved, with the cascode LNA yielding the expected improvement in SNR.
[ 0, 1, 0, 0, 0, 0 ]
Title: Virtual Astronaut for Scientific Visualization - A Prototype for Santa Maria Crater on Mars, Abstract: To support scientific visualization of multiple-mission data from Mars, the Virtual Astronaut (VA) creates an interactive virtual 3D environment built on the Unity3D Game Engine. A prototype study was conducted based on orbital and Opportunity Rover data covering Santa Maria Crater in Meridiani Planum on Mars. The VA at Santa Maria provides dynamic visual representations of the imaging, compositional, and mineralogical information. The VA lets one navigate through the scene and provides geomorphic and geologic contexts for the rover operations. User interactions include in-situ observations visualization, feature measurement, and an animation control of rover drives. This paper covers our approach and implementation of the VA system. A brief summary of the prototype system functions and user feedback is also covered. Based on external review and comments by the science community, the prototype at Santa Maria has proven the VA to be an effective tool for virtual geovisual analysis.
[ 1, 1, 0, 0, 0, 0 ]
Title: Self-Supervised Generalisation with Meta Auxiliary Learning, Abstract: Learning with auxiliary tasks has been shown to improve the generalisation of a primary task. However, this comes at the cost of manually-labelling additional tasks which may, or may not, be useful for the primary task. We propose a new method which automatically learns labels for an auxiliary task, such that any supervised learning task can be improved without requiring access to additional data. The approach is to train two neural networks: a label-generation network to predict the auxiliary labels, and a multi-task network to train the primary task alongside the auxiliary task. The loss for the label-generation network incorporates the multi-task network's performance, and so this interaction between the two networks can be seen as a form of meta learning. We show that our proposed method, Meta AuXiliary Learning (MAXL), outperforms single-task learning on 7 image datasets by a significant margin, without requiring additional auxiliary labels. We also show that MAXL outperforms several other baselines for generating auxiliary labels, and is even competitive when compared with human-defined auxiliary labels. The self-supervised nature of our method leads to a promising new direction towards automated generalisation. The source code is available at \url{this https URL}.
[ 1, 0, 0, 1, 0, 0 ]
Title: A Study on Arbitrarily Varying Channels with Causal Side Information at the Encoder, Abstract: In this work, we study two models of arbitrarily varying channels with side information available at the encoder in a causal manner. First, we study the arbitrarily varying channel (AVC) with input and state constraints, when the encoder has state information in a causal manner. Lower and upper bounds on the random code capacity are developed. A lower bound on the deterministic code capacity is established in the case of a message-averaged input constraint. In the setting where a state constraint is imposed on the jammer, while the user is under no constraints, the random code bounds coincide, and the random code capacity is determined. Furthermore, for this scenario, a generalized non-symmetrizability condition is stated, under which the deterministic code capacity coincides with the random code capacity. A second model considered in our work is the arbitrarily varying degraded broadcast channel with causal side information at the encoder (without constraints). We establish inner and outer bounds on both the random code capacity region and the deterministic code capacity region. The capacity region is then determined for a class of channels satisfying a condition on the mutual informations between the strategy variables and the channel outputs. As an example, we show that the condition holds for the arbitrarily varying binary symmetric broadcast channel, and we find the corresponding capacity region.
[ 1, 0, 1, 0, 0, 0 ]
Title: Radon background in liquid xenon detectors, Abstract: The radioactive daughter isotopes of 222Rn are among the highest-risk contaminants in liquid xenon detectors aiming for a small signal rate. The noble gas permanently emanates from the detector surfaces and mixes with the xenon target. Because of its long half-life, 222Rn is homogeneously distributed in the target, and its subsequent decays can mimic signal events. Since no shielding is possible, this background source can be the dominant one in future large-scale experiments. This article provides an overview of strategies used to mitigate this source of background by means of material selection and on-line radon removal techniques.
[ 0, 1, 0, 0, 0, 0 ]
Title: Minimax Regret Bounds for Reinforcement Learning, Abstract: We consider the problem of provably optimal exploration in reinforcement learning for finite horizon MDPs. We show that an optimistic modification to value iteration achieves a regret bound of $\tilde{O}( \sqrt{HSAT} + H^2S^2A+H\sqrt{T})$ where $H$ is the time horizon, $S$ the number of states, $A$ the number of actions and $T$ the number of time-steps. This result improves over the best previous known bound $\tilde{O}(HS \sqrt{AT})$ achieved by the UCRL2 algorithm of Jaksch et al., 2010. The key significance of our new results is that when $T\geq H^3S^3A$ and $SA\geq H$, it leads to a regret of $\tilde{O}(\sqrt{HSAT})$ that matches the established lower bound of $\Omega(\sqrt{HSAT})$ up to a logarithmic factor. Our analysis contains two key insights. We use careful application of concentration inequalities to the optimal value function as a whole, rather than to the transition probabilities (to improve scaling in $S$), and we define Bernstein-based "exploration bonuses" that use the empirical variance of the estimated values at the next states (to improve scaling in $H$).
[ 1, 0, 0, 1, 0, 0 ]
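A sketch of optimistic value iteration with an exploration bonus for a finite-horizon tabular MDP. For brevity this uses a simple Hoeffding-style bonus; the paper's sharper regret comes from the Bernstein-style, variance-dependent bonus on the values described in the abstract.

```python
# Optimistic value iteration with an exploration bonus (simplified).
import numpy as np

def optimistic_values(counts, rewards_sum, trans_counts, H, delta=0.05):
    """counts[s,a], rewards_sum[s,a], trans_counts[s,a,s'] from data so far."""
    S, A = counts.shape
    n = np.maximum(counts, 1)
    r_hat = rewards_sum / n
    p_hat = trans_counts / n[:, :, None]
    Q = np.zeros((H + 1, S, A))
    V = np.zeros((H + 1, S))
    for h in range(H - 1, -1, -1):
        bonus = H * np.sqrt(np.log(S * A * H / delta) / n)   # Hoeffding-style
        Q[h] = np.minimum(H, r_hat + p_hat @ V[h + 1] + bonus)
        V[h] = Q[h].max(axis=1)
    return Q   # act greedily w.r.t. Q[h] at step h, then update the counts

S, A, H = 5, 2, 10
Q = optimistic_values(np.zeros((S, A)), np.zeros((S, A)),
                      np.zeros((S, A, S)), H)
print(Q[0].max(axis=1))   # fully optimistic (= H) before any data is observed
```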
Title: High Dimensional Structured Superposition Models, Abstract: High dimensional superposition models characterize observations using parameters which can be written as a sum of multiple component parameters, each with its own structure, e.g., sum of low rank and sparse matrices, sum of sparse and rotated sparse vectors, etc. In this paper, we consider general superposition models which allow sum of any number of component parameters, and each component structure can be characterized by any norm. We present a simple estimator for such models, give a geometric condition under which the components can be accurately estimated, characterize sample complexity of the estimator, and give high probability non-asymptotic bounds on the componentwise estimation error. We use tools from empirical processes and generic chaining for the statistical analysis, and our results, which substantially generalize prior work on superposition models, are in terms of Gaussian widths of suitable sets.
[ 1, 0, 0, 1, 0, 0 ]
Title: Crossmatching variable objects with the Gaia data, Abstract: Tens of millions of new variable objects are expected to be identified in over a billion time series from the Gaia mission. Crossmatching known variable sources with those from Gaia is crucial to incorporate current knowledge, understand how these objects appear in the Gaia data, train supervised classifiers to recognise known classes, and validate the results of the Variability Processing and Analysis Coordination Unit (CU7) within the Gaia Data Analysis and Processing Consortium (DPAC). The method employed by CU7 to crossmatch variables for the first Gaia data release includes a binary classifier to take into account positional uncertainties, proper motion, targeted variability signals, and artefacts present in the early calibration of the Gaia data. Crossmatching with a classifier makes it possible to automate all those decisions which are typically made during visual inspection. The classifier can be trained with objects characterized by a variety of attributes to ensure similarity in multiple dimensions (astrometry, photometry, time-series features), with no need for a-priori transformations to compare different photometric bands, or of predictive models of the motion of objects to compare positions. Other advantages as well as some disadvantages of the method are discussed. Implementation steps from the training to the assessment of the crossmatch classifier and selection of results are described.
[ 0, 1, 0, 0, 0, 0 ]
Title: Nonlinear Modal Decoupling Based Power System Transient Stability Analysis, Abstract: Nonlinear modal decoupling (NMD) was recently proposed to nonlinearly transform a multi-oscillator system into a number of decoupled oscillators which together behave the same as the original system in an extended neighborhood of the equilibrium. Each oscillator has just one degree of freedom and hence can easily be analyzed to infer the stability of the original system associated with one electromechanical mode. As a first attempt at applying the NMD methodology to realistic power system models, this paper proposes an NMD-based transient stability analysis approach. For a multi-machine power system, the approach first derives decoupled nonlinear oscillators by a coordinate transformation, and then applies Lyapunov stability analysis to the oscillators to assess the stability of the original system. Nonlinear modal interaction is also considered. The approach can be efficiently applied to a large-scale power grid by conducting NMD for only selected modes. Case studies on a 3-machine 9-bus system and an NPCC 48-machine 140-bus system show the potential of the approach in transient stability analysis for multi-machine systems.
[ 1, 0, 0, 0, 0, 0 ]
Title: KELT-18b: Puffy Planet, Hot Host, Probably Perturbed, Abstract: We report the discovery of KELT-18b, a transiting hot Jupiter in a 2.87d orbit around the bright (V=10.1), hot, F4V star BD+60 1538 (TYC 3865-1173-1). We present follow-up photometry, spectroscopy, and adaptive optics imaging that allow a detailed characterization of the system. Our preferred model fits yield a host stellar temperature of 6670+/-120 K and a mass of 1.524+/-0.069 Msun, situating it as one of only a handful of known transiting planets with hosts that are as hot, massive, and bright. The planet has a mass of 1.18+/-0.11 Mjup, a radius of 1.57+/-0.04 Rjup, and a density of 0.377+/-0.040 g/cm^3, making it one of the most inflated planets known around a hot star. We argue that KELT-18b's high temperature and low surface gravity, which yield an estimated ~600 km atmospheric scale height, combined with its hot, bright host make it an excellent candidate for observations aimed at atmospheric characterization. We also present evidence for a bound stellar companion at a projected separation of ~1100 AU, and speculate that it may have contributed to the strong misalignment we suspect between KELT-18's spin axis and its planet's orbital axis. The inferior conjunction time is 2457542.524998 +/-0.000416 (BJD_TDB) and the orbital period is 2.8717510 +/- 0.0000029 days. We encourage Rossiter-McLaughlin measurements in the near future to confirm the suspected spin-orbit misalignment of this system.
[ 0, 1, 0, 0, 0, 0 ]
Title: BAMBI: An R package for Fitting Bivariate Angular Mixture Models, Abstract: Statistical analyses of directional or angular data have applications in a variety of fields, such as geology, meteorology and bioinformatics. There is substantial literature on descriptive and inferential techniques for univariate angular data, with the bivariate (or more generally, multivariate) cases receiving more attention in recent years. However, there is a lack of software implementing inferential techniques in practice, especially in the bivariate situation. In this article, we introduce *BAMBI*, an R package for analyzing bivariate (and univariate) angular data. We implement random generation, density evaluation, and computation of summary measures for three bivariate (viz., bivariate wrapped normal, von Mises sine and von Mises cosine) and two univariate (viz., univariate wrapped normal and von Mises) angular distributions. The major contribution of BAMBI to statistical computing is in providing Bayesian methods for modeling angular data using finite mixtures of these distributions. We also provide functions for visual and numerical diagnostics and Bayesian inference for the fitted models. In this article, we first provide a brief review of the distributions and techniques used in BAMBI, then describe the capabilities of the package, and finally conclude with demonstrations of mixture model fitting using BAMBI on the two real datasets included in the package, one univariate and one bivariate.
[ 0, 0, 0, 1, 0, 0 ]
Title: Ferrimagnetism in the Spin-1/2 Heisenberg Antiferromagnet on a Distorted Triangular Lattice, Abstract: The ground state of the spin-$1/2$ Heisenberg antiferromagnet on a distorted triangular lattice is studied using a numerical-diagonalization method. The network of interactions is the $\sqrt{3}\times\sqrt{3}$ type; the interactions are continuously controlled between the undistorted triangular lattice and the dice lattice. We find new states between the nonmagnetic 120-degree-structured state of the undistorted triangular case and the up-up-down state of the dice case. The intermediate states show spontaneous magnetizations that are smaller than one third of the saturated magnetization corresponding to the up-up-down state.
[ 0, 1, 0, 0, 0, 0 ]
Title: Riemann-Hilbert problems for the resolved conifold, Abstract: We study the Riemann-Hilbert problems associated to the Donaldson-Thomas theory of the resolved conifold. We give explicit solutions in terms of the Barnes double and triple sine functions. We show that the corresponding tau function is a non-perturbative partition function, in the sense that its asymptotic expansion coincides with the topological string partition function.
[ 0, 0, 1, 0, 0, 0 ]
Title: On the Power of Over-parametrization in Neural Networks with Quadratic Activation, Abstract: We provide new theoretical insights on why over-parametrization is effective in learning neural networks. For a shallow network with $k$ hidden nodes, quadratic activation, and $n$ training data points, we show that as long as $k \ge \sqrt{2n}$, over-parametrization enables local search algorithms to find a \emph{globally} optimal solution for general smooth and convex loss functions. Further, even though the number of parameters may exceed the sample size, we use the theory of Rademacher complexity to show that, with weight decay, the solution also generalizes well if the data is sampled from a regular distribution such as a Gaussian. To prove that when $k\ge \sqrt{2n}$ the loss function has benign landscape properties, we adopt an idea from smoothed analysis, which may have other applications in studying loss surfaces of neural networks.
[ 0, 0, 0, 1, 0, 0 ]
Title: Multi-Label Learning with Label Enhancement, Abstract: The task of multi-label learning is to predict a set of relevant labels for an unseen instance. Traditional multi-label learning algorithms treat each class label as a logical indicator of whether the corresponding label is relevant or irrelevant to the instance, i.e., +1 represents relevant and -1 represents irrelevant. Such a label, represented by -1 or +1, is called a logical label. Logical labels cannot reflect differing label importance. However, for real-world multi-label learning problems, the importance of each possible label is generally different. In real applications, it is difficult to obtain label importance information directly. Thus we need a method to reconstruct the essential label importance from the logical multi-label data. To solve this problem, we assume that each multi-label instance is described by a vector of latent real-valued labels, which reflect the importance of the corresponding labels. Such labels are called numerical labels. The process of reconstructing the numerical labels from the logical multi-label data, utilizing the logical label information and the topological structure of the feature space, is called Label Enhancement. In this paper, we propose a novel multi-label learning framework called LEMLL, i.e., Label Enhanced Multi-Label Learning, which incorporates regression on the numerical labels and label enhancement into a unified framework. Extensive comparative studies validate that the performance of multi-label learning can be improved significantly with label enhancement and that LEMLL can effectively reconstruct latent label importance information from logical multi-label data.
[ 1, 0, 0, 0, 0, 0 ]
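A hypothetical sketch of the label-enhancement idea: start from logical labels in {-1, +1} and smooth them over a k-nearest-neighbour graph in feature space, so that instances with similar features receive similar real-valued (numerical) labels. This is a generic graph-smoothing rendition for intuition, not the LEMLL model; the data, neighbourhood size, and mixing weight are assumptions.

```python
# Label enhancement as kNN-graph label propagation (illustrative).
import numpy as np
from sklearn.neighbors import kneighbors_graph

rng = np.random.default_rng(5)
X = rng.normal(size=(200, 10))                     # features
Y = np.sign(rng.normal(size=(200, 6)))             # logical labels in {-1, +1}

W = kneighbors_graph(X, n_neighbors=8, mode="connectivity").toarray()
W = np.maximum(W, W.T)                             # symmetrize the kNN graph
P = W / W.sum(axis=1, keepdims=True)               # row-stochastic propagation

alpha, U = 0.5, Y.astype(float)
for _ in range(50):                                # propagate over the graph
    U = alpha * (P @ U) + (1 - alpha) * Y          # but stay near logical labels
print(U.min(), U.max())                            # numerical labels in [-1, 1]
```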
Title: Deep Generative Learning via Variational Gradient Flow, Abstract: We propose a general framework to learn deep generative models via \textbf{V}ariational \textbf{Gr}adient Fl\textbf{ow} (VGrow) on probability spaces. The evolving distribution that asymptotically converges to the target distribution is governed by a vector field, which is the negative gradient of the first variation of the $f$-divergence between them. We prove that the evolving distribution coincides with the pushforward distribution through the infinitesimal time composition of residual maps that are perturbations of the identity map along the vector field. The vector field depends on the density ratio of the pushforward distribution and the target distribution, which can be consistently learned from a binary classification problem. Connections of our proposed VGrow method with other popular methods, such as VAE, GAN and flow-based methods, have been established in this framework, gaining new insights into deep generative learning. We also evaluate several commonly used divergences, including the Kullback-Leibler, Jensen-Shannon and Jeffrey divergences, as well as our newly discovered `logD' divergence which serves as the objective function of the logD-trick GAN. Experimental results on benchmark datasets demonstrate that VGrow can generate high-fidelity images in a stable and efficient manner, achieving competitive performance with state-of-the-art GANs.
[ 1, 0, 0, 1, 0, 0 ]
Title: Multiagent Bidirectionally-Coordinated Nets: Emergence of Human-level Coordination in Learning to Play StarCraft Combat Games, Abstract: Many artificial intelligence (AI) applications often require multiple intelligent agents to work in a collaborative effort. Efficient learning for intra-agent communication and coordination is an indispensable step towards general AI. In this paper, we take the StarCraft combat game as a case study, where the task is to coordinate multiple agents as a team to defeat their enemies. To maintain a scalable yet effective communication protocol, we introduce a Multiagent Bidirectionally-Coordinated Network (BiCNet ['bIknet]) with a vectorised extension of the actor-critic formulation. We show that BiCNet can handle different types of combats with arbitrary numbers of AI agents for both sides. Our analysis demonstrates that without any supervision such as human demonstrations or labelled data, BiCNet could learn various types of advanced coordination strategies that have been commonly used by experienced game players. In our experiments, we evaluate our approach against multiple baselines under different scenarios; it shows state-of-the-art performance, and possesses potential values for large-scale real-world applications.
[ 1, 0, 0, 0, 0, 0 ]
Title: Multi-Evidence Filtering and Fusion for Multi-Label Classification, Object Detection and Semantic Segmentation Based on Weakly Supervised Learning, Abstract: Supervised object detection and semantic segmentation require object or even pixel level annotations. When there exist image level labels only, it is challenging for weakly supervised algorithms to achieve accurate predictions. The accuracy achieved by top weakly supervised algorithms is still significantly lower than their fully supervised counterparts. In this paper, we propose a novel weakly supervised curriculum learning pipeline for multi-label object recognition, detection and semantic segmentation. In this pipeline, we first obtain intermediate object localization and pixel labeling results for the training images, and then use such results to train task-specific deep networks in a fully supervised manner. The entire process consists of four stages, including object localization in the training images, filtering and fusing object instances, pixel labeling for the training images, and task-specific network training. To obtain clean object instances in the training images, we propose a novel algorithm for filtering, fusing and classifying object instances collected from multiple solution mechanisms. In this algorithm, we incorporate both metric learning and density-based clustering to filter detected object instances. Experiments show that our weakly supervised pipeline achieves state-of-the-art results in multi-label image classification as well as weakly supervised object detection and very competitive results in weakly supervised semantic segmentation on MS-COCO, PASCAL VOC 2007 and PASCAL VOC 2012.
[ 0, 0, 0, 1, 0, 0 ]
Title: Three-dimensional color code thresholds via statistical-mechanical mapping, Abstract: Three-dimensional (3D) color codes have advantages for fault-tolerant quantum computing, such as protected quantum gates with relatively low overhead and robustness against imperfect measurement of error syndromes. Here we investigate the storage threshold error rates for bit-flip and phase-flip noise in the 3D color code on the body-centered cubic lattice, assuming perfect syndrome measurements. In particular, by exploiting a connection between error correction and statistical mechanics, we estimate the threshold for 1D string-like and 2D sheet-like logical operators to be $p^{(1)}_\mathrm{3DCC} \simeq 1.9\%$ and $p^{(2)}_\mathrm{3DCC} \simeq 27.6\%$. We obtain these results by using parallel tempering Monte Carlo simulations to study the disorder-temperature phase diagrams of two new 3D statistical-mechanical models: the 4- and 6-body random coupling Ising models.
[ 0, 1, 0, 0, 0, 0 ]
Title: Fast and Accurate Sparse Coding of Visual Stimuli with a Simple, Ultra-Low-Energy Spiking Architecture, Abstract: Memristive crossbars have become a popular means for realizing unsupervised and supervised learning techniques. In previous neuromorphic architectures with leaky integrate-and-fire neurons, the crossbar itself has been separated from the neuron capacitors to preserve mathematical rigor. In this work, we sought to simplify the design, creating a fast circuit that consumed significantly lower power at a minimal cost of accuracy. We also showed that connecting the neurons directly to the crossbar resulted in a more efficient sparse coding architecture, and alleviated the need to pre-normalize receptive fields. This work provides derivations for the design of such a network, named the Simple Spiking Locally Competitive Algorithm, or SSLCA, as well as CMOS designs and results on the CIFAR and MNIST datasets. Compared to a non-spiking model which scored 33% on CIFAR-10 with a single-layer classifier, this hardware scored 32% accuracy. When used with a state-of-the-art deep learning classifier, the non-spiking model achieved 82% and our simplified, spiking model achieved 80%, while compressing the input data by 92%. Compared to a previously proposed spiking model, our proposed hardware consumed 99% less energy to do the same work at 21x the throughput. Accuracy held out with online learning to a write variance of 3%, suitable for the often-reported 4-bit resolution required for neuromorphic algorithms; with offline learning to a write variance of 27%; and with read variance to 40%. The proposed architecture's excellent accuracy, throughput, and significantly lower energy usage demonstrate the utility of our innovations.
[ 1, 0, 0, 0, 0, 0 ]
Title: Astronomy of Cholanaikkan tribe of Kerala, Abstract: The Cholanaikkans are a diminishing tribe of India. With a population of fewer than 200 members living in the reserved forests about 80 km from Kozhikode, it is one of the most isolated tribes. A programme of the Government of Kerala brings some of them to Kozhikode once a year. We studied various aspects of the tribe during such a visit in 2016. We report on their science and technology.
[ 0, 1, 0, 0, 0, 0 ]
Title: A Non-linear Approach to Space Dimension Perception by a Naive Agent, Abstract: Developmental Robotics offers a new approach to numerous AI features that are often taken for granted. Traditionally, perception is supposed to be an inherent capacity of the agent. Moreover, it largely relies on models built by the system's designer. A new approach is to consider perception as an experimentally acquired ability that is learned exclusively through the analysis of the agent's sensorimotor flow. Previous works, based on H. Poincaré's intuitions and the sensorimotor contingencies theory, allow a simulated agent to extract the dimension of the geometrical space in which it is immersed without any a priori knowledge. Those results are limited to infinitesimal movement amplitudes of the system. In this paper, a non-linear dimension estimation method is proposed to push back this limitation.
[ 1, 0, 0, 0, 0, 0 ]
Title: Foolbox: A Python toolbox to benchmark the robustness of machine learning models, Abstract: Even today's most advanced machine learning models are easily fooled by almost imperceptible perturbations of their inputs. Foolbox is a new Python package to generate such adversarial perturbations and to quantify and compare the robustness of machine learning models. It is built around the idea that the most comparable robustness measure is the minimum perturbation needed to craft an adversarial example. To this end, Foolbox provides reference implementations of most published adversarial attack methods alongside some new ones, all of which perform internal hyperparameter tuning to find the minimum adversarial perturbation. Additionally, Foolbox interfaces with the most popular deep learning frameworks such as PyTorch, Keras, TensorFlow, Theano and MXNet and allows different adversarial criteria such as targeted misclassification and top-k misclassification as well as different distance measures. The code is licensed under the MIT license and is openly available at this https URL . The most up-to-date documentation can be found at this http URL .
[ 1, 0, 0, 1, 0, 0 ]
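Foolbox is a real package; a minimal usage sketch following its 1.x interface (the one current when this abstract appeared; later releases changed the API), assuming a trained PyTorch `model` and a single `(image, label)` pair already exist:

    import foolbox
    import numpy as np

    # Wrap the trained network; bounds give the valid input range.
    fmodel = foolbox.models.PyTorchModel(model, bounds=(0, 1), num_classes=10)

    # FGSM with Foolbox's internal hyperparameter search for the
    # smallest perturbation that flips the prediction.
    attack = foolbox.attacks.FGSM(fmodel)
    adversarial = attack(image, label)

    print(np.linalg.norm(adversarial - image))   # minimum-perturbation size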
Title: Two-dimensional boron on Pb (110) surface, Abstract: We simulate boron on the Pb(110) surface by using an ab initio evolutionary methodology. Interestingly, two-dimensional (2D) Dirac Pmmn boron can be formed because of good lattice matching. Unexpectedly, by increasing the thickness of the 2D boron, a three-bonded graphene-like structure (P2_1/c boron) was revealed to possess double anisotropic Dirac cones. It is 20 meV/atom lower in energy than the Pmmn structure, indicating the most stable 2D boron with particular Dirac cones. The puckered structure of P2_1/c boron results in the peculiar Dirac cones, as well as substantial mechanical anisotropy. The calculated Young's modulus is 320 GPa·nm along the zigzag direction, which is comparable with that of graphene.
[ 0, 1, 0, 0, 0, 0 ]
Title: Unreasonable Effectiveness of Deep Learning, Abstract: We show how the well-known rules of back propagation arise from a weighted combination of finite automata. By redefining a finite automaton as a predictor, we combine the set of all $k$-state finite automata using a weighted majority algorithm. This aggregated prediction algorithm can be simplified using symmetry, and we prove the equivalence of an algorithm that does this. We demonstrate that this algorithm is equivalent to a form of back propagation acting in a completely connected $k$-node neural network. Thus the use of the weighted majority algorithm allows a bound on the general performance of deep learning approaches to prediction via known results from online statistics. The presented framework opens more detailed questions about network topology; it is a bridge to the well-studied techniques of semigroup theory, which can be applied to answer which specific network topologies are capable of prediction. This informs both the design of artificial networks and the exploration of neuroscience models.
[ 0, 0, 0, 1, 0, 0 ]
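The aggregation step that abstract builds on is the classic weighted majority algorithm; a generic sketch (not the paper's automaton construction), where each expert is a callable returning a binary prediction:

    import numpy as np

    def weighted_majority(experts, stream, beta=0.5):
        """Littlestone-Warmuth style weighted majority: predict by
        weighted vote, multiplicatively demote experts that err.
        experts: list of callables x -> {0, 1}; stream: iterable of (x, y)."""
        w = np.ones(len(experts))
        mistakes = 0
        for x, y in stream:
            votes = np.array([e(x) for e in experts])
            yhat = int(w[votes == 1].sum() >= w[votes == 0].sum())
            mistakes += int(yhat != y)
            w[votes != y] *= beta   # the demotion that yields regret bounds
        return w, mistakes

The multiplicative update is what gives the mistake bound relative to the best expert, which is the "known result from online statistics" the abstract leans on.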
Title: Collaborative Summarization of Topic-Related Videos, Abstract: Large collections of videos are grouped into clusters by a topic keyword, such as Eiffel Tower or Surfing, with many important visual concepts repeating across them. The videos in such a topically close set exert mutual influence on each other, which can be used to summarize one of them by exploiting information from the others in the set. We build on this intuition to develop a novel approach to extract a summary that simultaneously captures both the important particularities arising in the given video and the generalities identified from the set of videos. The topic-related videos provide visual context to identify the important parts of the video being summarized. We achieve this by developing a collaborative sparse optimization method which can be efficiently solved by a half-quadratic minimization algorithm. Our work builds upon the idea of collaborative techniques from information retrieval and natural language processing, which typically use the attributes of other similar objects to predict the attribute of a given object. Experiments on two challenging and diverse datasets well demonstrate the efficacy of our approach over state-of-the-art methods.
[ 1, 0, 0, 0, 0, 0 ]
Title: ELICA: An Automated Tool for Dynamic Extraction of Requirements Relevant Information, Abstract: Requirements elicitation requires extensive knowledge and deep understanding of the problem domain where the final system will be situated. However, in many software development projects, analysts are required to elicit the requirements from an unfamiliar domain, which often causes communication barriers between analysts and stakeholders. In this paper, we propose a requirements ELICitation Aid tool (ELICA) to help analysts better understand the target application domain by dynamic extraction and labeling of requirements-relevant knowledge. To extract the relevant terms, we leverage the flexibility and power of Weighted Finite State Transducers (WFSTs) in dynamic modeling of natural language processing tasks. In addition to the information conveyed through text, ELICA captures and processes non-linguistic information about the intention of speakers such as their confidence level, analytical tone, and emotions. The extracted information is made available to the analysts as a set of labeled snippets with highlighted relevant terms which can also be exported as an artifact of the Requirements Engineering (RE) process. The application and usefulness of ELICA are demonstrated through a case study. This study shows how pre-existing relevant information about the application domain and the information captured during an elicitation meeting, such as the conversation and stakeholders' intentions, can be captured and used to support analysts achieving their tasks.
[ 1, 0, 0, 1, 0, 0 ]
Title: Crystal field excitations and magnons: their roles in oxyselenides Pr2O2M2OSe2 (M = Mn, Fe), Abstract: We present the results of neutron scattering experiments to study the crystal and magnetic structures of the Mott-insulating transition metal oxyselenides Pr2O2M2OSe2 (M = Mn, Fe). The structural role of the non-Kramers Pr3+ ion is investigated and analysis of Pr3+ crystal field excitations performed. Long-range order of Pr3+ moments in Pr2O2Fe2OSe2 can be induced by an applied magnetic field.
[ 0, 1, 0, 0, 0, 0 ]
Title: Analysis of Coupled Scalar Systems by Displacement Convexity, Abstract: Potential functionals have been introduced recently as an important tool for the analysis of coupled scalar systems (e.g. density evolution equations). In this contribution, we investigate interesting properties of this potential. Using the tool of displacement convexity, we show that, under mild assumptions on the system, the potential functional is displacement convex. Furthermore, we give the conditions on the system such that the potential is strictly displacement convex, in which case the minimizer is unique.
[ 1, 0, 1, 0, 0, 0 ]
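For reference, the standard notion invoked in the abstract above, with $\mathcal{F}$ standing in for the potential functional (the paper's exact functional is not reproduced here): a functional is displacement convex if it is convex along Wasserstein geodesics,

\[
\mathcal{F}(\rho_t) \le (1-t)\,\mathcal{F}(\rho_0) + t\,\mathcal{F}(\rho_1),
\qquad
\rho_t = \big((1-t)\,\mathrm{id} + t\,T\big)_{\#}\rho_0 ,\quad t \in [0,1],
\]

where $T$ is the optimal transport map from $\rho_0$ to $\rho_1$. Strict displacement convexity replaces $\le$ by $<$ for $\rho_0 \neq \rho_1$, which is what delivers uniqueness of the minimizer.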
Title: Deterministic subgraph detection in broadcast CONGEST, Abstract: We present simple deterministic algorithms for subgraph finding and enumeration in the broadcast CONGEST model of distributed computation: -- For any constant $k$, detecting $k$-paths and trees on $k$ nodes can be done in $O(1)$ rounds. -- For any constant $k$, detecting $k$-cycles and pseudotrees on $k$ nodes can be done in $O(n)$ rounds. -- On $d$-degenerate graphs, cliques and $4$-cycles can be enumerated in $O(d + \log n)$ rounds, and $5$-cycles in $O(d^2 + \log n)$ rounds. In many cases, these bounds are tight up to logarithmic factors. Moreover, we show that the algorithms for $d$-degenerate graphs can be improved to optimal complexity $O(d/\log n)$ and $O(d^2/\log n)$, respectively, in the supported CONGEST model, which can be seen as an intermediate model between CONGEST and the congested clique.
[ 1, 0, 0, 0, 0, 0 ]
Title: On Graded Lie Algebras of Characteristic Three With Classical Reductive Null Component, Abstract: We consider finite-dimensional irreducible transitive graded Lie algebras $L = \sum_{i=-q}^rL_i$ over algebraically closed fields of characteristic three. We assume that the null component $L_0$ is classical and reductive. The adjoint representation of $L$ on itself induces a representation of the commutator subalgebra $L_0'$ of the null component on the minus-one component $L_{-1}.$ We show that if the depth $q$ of $L$ is greater than one, then this representation must be restricted.
[ 0, 0, 1, 0, 0, 0 ]
Title: Evolutionary sequences for hydrogen-deficient white dwarfs, Abstract: We present a set of full evolutionary sequences for white dwarfs with hydrogen-deficient atmospheres. We take into account the evolutionary history of the progenitor stars, all the relevant energy sources involved in the cooling, element diffusion in the very outer layers, and outer boundary conditions provided by new and detailed non-gray white dwarf model atmospheres for pure helium composition. These model atmospheres are based on the most up-to-date physical inputs. Our calculations extend down to very low effective temperatures, of $\sim 2\,500$~K, provide a homogeneous set of evolutionary cooling tracks that are appropriate for mass and age determinations of old hydrogen-deficient white dwarfs, and represent a clear improvement over previous efforts, which were computed using gray atmospheres.
[ 0, 1, 0, 0, 0, 0 ]
Title: On the uniqueness of complete biconservative surfaces in $\mathbb{R}^3$, Abstract: We study the uniqueness of complete biconservative surfaces in the Euclidean space $\mathbb{R}^3$, and prove that the only complete biconservative regular surfaces in $\mathbb{R}^3$ are either $CMC$ or certain surfaces of revolution. In particular, any compact biconservative regular surface in $\mathbb{R}^3$ is a round sphere.
[ 0, 0, 1, 0, 0, 0 ]
Title: Quantum Annealing Applied to De-Conflicting Optimal Trajectories for Air Traffic Management, Abstract: We present the mapping of a class of simplified air traffic management (ATM) problems (strategic conflict resolution) to quadratic unconstrained boolean optimization (QUBO) problems. The mapping is performed through an original representation of the conflict-resolution problem in terms of a conflict graph, where nodes of the graph represent flights and edges represent a potential conflict between flights. The representation allows a natural decomposition of a real-world instance related to wind-optimal trajectories over the Atlantic ocean into smaller subproblems, which can be discretized and are amenable to being programmed in quantum annealers. In the study, we test the new programming techniques and benchmark the hardness of the instances using both classical solvers and the D-Wave 2X and D-Wave 2000Q quantum chips. The preliminary results show that for reasonable modeling choices the most challenging subproblems which are programmable in the current devices are solved to optimality with 99% probability within a second of annealing time.
[ 1, 0, 0, 0, 0, 0 ]
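A sketch of how such a conflict graph might be encoded as a QUBO, under the hypothetical simplification that each flight picks one of K discrete delay slots; one-hot constraints and conflict penalties become quadratic terms:

    def conflict_qubo(n_flights, K, conflicts, A=2.0, B=1.0):
        """Illustrative QUBO for strategic deconfliction.
        Variables x[i, k] = 1 if flight i takes delay slot k.
        conflicts: dict ((i, k), (j, l)) -> 1 if those choices still conflict.
        Returns Q as a dict of quadratic coefficients (D-Wave-style)."""
        Q = {}
        for i in range(n_flights):
            # A * (sum_k x_ik - 1)^2 enforces exactly one slot per flight;
            # using x^2 = x, this gives -A on the diagonal, +2A off-diagonal.
            for k in range(K):
                Q[((i, k), (i, k))] = Q.get(((i, k), (i, k)), 0.0) - A
                for l in range(k + 1, K):
                    Q[((i, k), (i, l))] = 2 * A
        # B penalizes any pair of slot choices that leaves a conflict.
        for pair, c in conflicts.items():
            if c:
                Q[pair] = Q.get(pair, 0.0) + B
        return Q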
Title: On rumour propagation among sceptics, Abstract: Junior, Machado and Zuluaga (2011) studied a model to understand the spread of a rumour. Their model consists of individuals situated at the integer points of the line $\mathbb{N}$. An individual at the origin $0$ starts a rumour and passes it to all individuals in the interval $[0,R_0]$, where $R_0$ is a non-negative random variable. An individual located at $i$ in this interval receives the rumour and transmits it further among individuals in $[i, i+R_i]$, where $R_0$ and $R_i$ are i.i.d. random variables. The rumour spreads in this manner. An alternate model considers individuals seeking to find the rumour from individuals who have already heard it. To do so, s/he asks individuals to the left of her/him lying in an interval of random size. We study these two models when the individuals are more sceptical and they transmit or accept the rumour only if they receive it from at least two different sources. In stochastic geometry the equivalent of this rumour process is the study of coverage of the space $\mathbb{N}^d$ by random sets. Our study here extends the study of coverage of space and considers the case when each vertex of $\mathbb{N}^d$ is covered by at least two distinct random sets.
[ 0, 0, 1, 1, 0, 0 ]
Title: System Level Framework for Assessing the Accuracy of Neonatal EEG Acquisition, Abstract: Significant research has been conducted in recent years to design low-cost alternatives to the current EEG monitoring systems used in healthcare facilities. Testing such systems on a vulnerable population such as newborns is complicated due to ethical and regulatory considerations that slow down the technical development. This paper presents and validates a method for quantifying the accuracy of neonatal EEG acquisition systems and electrode technologies via clinical data simulations that do not require neonatal participants. The proposed method uses an extensive neonatal EEG database to simulate analogue signals, which are subsequently passed through electrical models of the skin-electrode interface, which are developed using wet and dry EEG electrode designs. The signal losses in the system are quantified at each stage of the acquisition process, separating electrode losses from acquisition-board losses. SNR, correlation and noise values were calculated. The results verify that low-cost EEG acquisition systems are capable of obtaining clinical grade EEG. Although dry electrodes result in a significant increase in the skin-electrode impedance, accurate EEG recordings are still achievable.
[ 0, 0, 0, 1, 0, 0 ]
Title: Extremely fast simulations of heat transfer in fluidized beds, Abstract: Besides their huge technological importance, fluidized beds have attracted a large amount of research because they are perfect playgrounds to investigate highly dynamic particulate flows. Their overall behavior is determined by short-lasting particle collisions and the interaction between solid and gas phase. Modern simulation techniques that combine computational fluid dynamics (CFD) and discrete element methods (DEM) are capable of describing their evolution and provide detailed information on what is happening on the particle scale. However, these approaches are limited by small time steps and large numerical costs, which inhibits the investigation of slower long-term processes like heat transfer or chemical conversion. In a recent study (Lichtenegger and Pirker, 2016), we have introduced recurrence CFD (rCFD) as a way to decouple fast from slow degrees of freedom in systems with recurring patterns: A conventional simulation is carried out to capture such coherent structures. Their re-appearance is characterized with recurrence plots that allow us to extrapolate their evolution far beyond the simulated time. On top of these predicted flow fields, any passive or weakly coupled process can then be investigated at fractions of the original computational costs. Here, we present the application of rCFD to heat transfer in a lab-scale fluidized bed. Initially hot particles are fluidized with cool air and their temperature evolution is recorded. In comparison to conventional CFD-DEM, we observe speed-up factors of about two orders of magnitude at very good accuracy with regard to recent measurements.
[ 0, 1, 0, 0, 0, 0 ]
Title: Exact Tensor Completion from Sparsely Corrupted Observations via Convex Optimization, Abstract: This paper conducts a rigorous analysis for provable estimation of multidimensional arrays, in particular third-order tensors, from a random subset of its corrupted entries. Our study rests heavily on a recently proposed tensor algebraic framework in which one can obtain a tensor singular value decomposition (t-SVD) similar to the SVD for matrices, and define a new notion of tensor rank referred to as the tubal rank. We prove that by simply solving a convex program, which minimizes a weighted combination of the tubal nuclear norm, a convex surrogate for the tubal rank, and the $\ell_1$-norm, one can recover an incoherent tensor exactly with overwhelming probability, provided that its tubal rank is not too large and that the corruptions are reasonably sparse. Interestingly, our result includes the recovery guarantees for the problems of tensor completion (TC) and tensor robust principal component analysis (TRPCA) under the same algebraic setup as special cases. An alternating direction method of multipliers (ADMM) algorithm is presented to solve this optimization problem. Numerical experiments verify our theory and real-world applications demonstrate the effectiveness of our algorithm.
[ 1, 0, 0, 1, 0, 0 ]
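Schematically, the convex program described in the abstract takes the form (notation follows the abstract; the paper's exact weighting may differ):

\[
\min_{\mathcal{L},\,\mathcal{S}} \;\|\mathcal{L}\|_{\mathrm{TNN}} + \lambda\,\|\mathcal{S}\|_{1}
\quad\text{s.t.}\quad
\mathcal{P}_{\Omega}(\mathcal{L}+\mathcal{S}) = \mathcal{P}_{\Omega}(\mathcal{M}),
\]

where $\mathcal{M}$ is the observed tensor, $\Omega$ the set of sampled entries, $\mathcal{P}_{\Omega}$ the corresponding sampling operator, $\|\cdot\|_{\mathrm{TNN}}$ the tubal nuclear norm, and $\mathcal{S}$ absorbs the sparse corruptions.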
Title: On Triangle Inequality Based Approximation Error Estimation, Abstract: The distance between the true and numerical solutions in some metric is considered as the discretization error magnitude. If the ordering of the error magnitudes is known, the triangle inequality enables the estimation of the vicinity of the approximate solution that contains the exact one (exact solution enclosure). The analysis of distances between the numerical solutions enables the ranging of discretization errors, if the solution errors are significantly different. Numerical tests conducted using steady supersonic flows, governed by the two-dimensional Euler equations, demonstrate the properties of the exact solution enclosure. The set of solutions generated by solvers of different orders of approximation is used. The success of this approach depends on the choice of metric.
[ 0, 1, 0, 0, 0, 0 ]
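The core estimate is elementary; a sketch, under the hypothetical assumption that the higher-order solver's error is at most a known fraction eta of the lower-order one's:

    import numpy as np

    def enclosure_radius(u1, u2, eta=0.1):
        """Triangle-inequality enclosure: with e1 = u1 - u_exact and
        e2 = u2 - u_exact,
            ||e1|| <= ||u1 - u2|| + ||e2|| <= ||u1 - u2|| + eta * ||e1||,
        hence ||e1|| <= ||u1 - u2|| / (1 - eta). The exact solution then
        lies within this radius of u1 (eta is an assumed error ratio)."""
        return np.linalg.norm(u1 - u2) / (1.0 - eta)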
Title: Algorithmic Decision Making in the Presence of Unmeasured Confounding, Abstract: On a variety of complex decision-making tasks, from doctors prescribing treatment to judges setting bail, machine learning algorithms have been shown to outperform expert human judgments. One complication, however, is that it is often difficult to anticipate the effects of algorithmic policies prior to deployment, making the decision to adopt them risky. In particular, one generally cannot use historical data to directly observe what would have happened had the actions recommended by the algorithm been taken. One standard strategy is to model potential outcomes for alternative decisions assuming that there are no unmeasured confounders (i.e., to assume ignorability). But if this ignorability assumption is violated, the predicted and actual effects of an algorithmic policy can diverge sharply. In this paper we present a flexible, Bayesian approach to gauge the sensitivity of predicted policy outcomes to unmeasured confounders. We show that this policy evaluation problem is a generalization of estimating heterogeneous treatment effects in observational studies, and so our methods can immediately be applied to that setting. Finally, we show, both theoretically and empirically, that under certain conditions it is possible to construct near-optimal algorithmic policies even when ignorability is violated. We demonstrate the efficacy of our methods on a large dataset of judicial actions, in which one must decide whether defendants awaiting trial should be required to pay bail or can be released without payment.
[ 0, 0, 0, 1, 0, 0 ]
Title: Exception-Based Knowledge Updates, Abstract: Existing methods for dealing with knowledge updates differ greatly depending on the underlying knowledge representation formalism. When Classical Logic is used, updates are typically performed by manipulating the knowledge base on the model-theoretic level. On the opposite side of the spectrum stand the semantics for updating Answer-Set Programs that need to rely on rule syntax. Yet, a unifying perspective that could embrace both these branches of research is of great importance as it enables a deeper understanding of all involved methods and principles and creates room for their cross-fertilisation, ripening and further development. This paper bridges the seemingly irreconcilable approaches to updates. It introduces a novel monotonic characterisation of rules, dubbed RE-models, and shows it to be a more suitable semantic foundation for rule updates than SE-models. Then it proposes a generic scheme for specifying semantic rule update operators, based on the idea of viewing a program as the set of sets of RE-models of its rules; updates are performed by introducing additional interpretations - exceptions - to the sets of RE-models of rules in the original program. The introduced scheme is used to define rule update operators that are closely related to both classical update principles and traditional approaches to rules updates, and serve as a basis for a solution to the long-standing problem of state condensing, showing how they can be equivalently defined as binary operators on some class of logic programs. Finally, the essence of these ideas is extracted to define an abstract framework for exception-based update operators, viewing a knowledge base as the set of sets of models of its elements, which can capture a wide range of both model- and formula-based classical update operators, and thus serves as the first firm formal ground connecting classical and rule updates.
[ 1, 0, 0, 0, 0, 0 ]
Title: Dynamics of observables in rank-based models and performance of functionally generated portfolios, Abstract: In the seminal work [9], several macroscopic market observables have been introduced, in an attempt to find characteristics capturing the diversity of a financial market. Despite the crucial importance of such observables for investment decisions, a concise mathematical description of their dynamics has been missing. We fill this gap in the setting of rank-based models and expect our ideas to extend to other models of large financial markets as well. The results are then used to study the performance of multiplicatively and additively functionally generated portfolios, in particular, over short-term and medium-term horizons.
[ 0, 0, 0, 0, 0, 1 ]
Title: A 3pi Search for Planet Nine at 3.4 microns with WISE and NEOWISE, Abstract: The recent 'Planet Nine' hypothesis has led to many observational and archival searches for this giant planet proposed to orbit the Sun at hundreds of astronomical units. While trans-Neptunian object searches are typically conducted in the optical, models suggest Planet Nine could be self-luminous and potentially bright enough at ~3-5 microns to be detected by the Wide-field Infrared Survey Explorer (WISE). We have previously demonstrated a Planet Nine search methodology based on time-resolved WISE coadds, allowing us to detect moving objects much fainter than would be possible using single-frame extractions. In the present work, we extend our 3.4 micron (W1) search to cover more than three quarters of the sky and incorporate four years of WISE observations spanning a seven year time period. This represents the deepest and widest-area WISE search for Planet Nine to date. We characterize the spatial variation of our survey's sensitivity and rule out the presence of Planet Nine in the parameter space searched at W1 < 16.7 in high Galactic latitude regions (90% completeness).
[ 0, 1, 0, 0, 0, 0 ]
Title: Relative Chern character number and super-connection, Abstract: For two complex vector bundles admitting a homomorphism whose singularity is located in the disjoint union of some odd-dimensional spheres, we give a formula to compute the relative Chern characteristic number of these two complex vector bundles. In particular, for a spin manifold admitting a sphere bundle structure, we give a formula expressing the index of a special twisted Dirac operator.
[ 0, 0, 1, 0, 0, 0 ]
Title: Mental Sampling in Multimodal Representations, Abstract: Both resources in the natural environment and concepts in a semantic space are distributed "patchily", with large gaps in between the patches. To describe people's internal and external foraging behavior, various random walk models have been proposed. In particular, internal foraging has been modeled as sampling: in order to gather relevant information for making a decision, people draw samples from a mental representation using random-walk algorithms such as Markov chain Monte Carlo (MCMC). However, two common empirical observations argue against simple sampling algorithms such as MCMC. First, the spatial structure is often best described by a Lévy flight distribution: the probability of the distance between two successive locations follows a power-law on the distances. Second, the temporal structure of the sampling that humans and other animals produce have long-range, slowly decaying serial correlations characterized as $1/f$-like fluctuations. We propose that mental sampling is not done by simple MCMC, but is instead adapted to multimodal representations and is implemented by Metropolis-coupled Markov chain Monte Carlo (MC$^3$), one of the first algorithms developed for sampling from multimodal distributions. MC$^3$ involves running multiple Markov chains in parallel but with target distributions of different temperatures, and it swaps the states of the chains whenever a better location is found. Heated chains more readily traverse valleys in the probability landscape to propose moves to far-away peaks, while the colder chains make the local steps that explore the current peak or patch. We show that MC$^3$ generates distances between successive samples that follow a Lévy flight distribution and $1/f$-like serial correlations, providing a single mechanistic account of these two puzzling empirical phenomena.
[ 1, 0, 0, 0, 0, 0 ]
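A minimal MC$^3$ (parallel tempering) sketch for a one-dimensional multimodal target, illustrating the two moves described in the abstract above, local Metropolis steps per chain and swaps between neighbouring temperatures (a toy, not the authors' code):

    import numpy as np

    def mc3_sample(log_p, x0, temps=(1.0, 2.0, 4.0), n_steps=5000, step=0.5, seed=0):
        """Metropolis-coupled MCMC: hot chains traverse valleys, the
        cold chain (temps[0] = 1) targets log_p itself."""
        rng = np.random.default_rng(seed)
        chains = np.full(len(temps), float(x0))
        out = []
        for _ in range(n_steps):
            for c, T in enumerate(temps):        # local Metropolis move
                prop = chains[c] + step * rng.normal()
                if np.log(rng.random()) < (log_p(prop) - log_p(chains[c])) / T:
                    chains[c] = prop
            c = rng.integers(len(temps) - 1)     # propose a neighbour swap
            a = (1/temps[c] - 1/temps[c+1]) * (log_p(chains[c+1]) - log_p(chains[c]))
            if np.log(rng.random()) < a:
                chains[c], chains[c+1] = chains[c+1], chains[c]
            out.append(chains[0])
        return np.array(out)

    # Usage: a well-separated bimodal target (unnormalized).
    draws = mc3_sample(lambda x: np.logaddexp(-0.5*(x-4)**2, -0.5*(x+4)**2), 0.0)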
Title: A new method to suppress the bias in polarized intensity, Abstract: Computing polarised intensities from noisy data in Stokes U and Q suffers from a positive bias that should be suppressed. We aim to develop a correction method that, when applied to maps, provides a distribution of polarised intensity that closely follows the signal from the source. We propose a new method to suppress the bias by estimating the polarisation angle of the source signal in a noisy environment with the help of a modified median filter. We then determine the polarised intensity, including the noise, by projection of the observed values of Stokes U and Q onto the direction of this polarisation angle. We show that our new method represents the true signal very well. If the noise distribution in the maps of U and Q is Gaussian, then in the corrected map of polarised intensity it is also Gaussian. Smoothing to larger Gaussian beamsizes, to improve the signal-to-noise ratio, can be done directly with our method in the map of the polarised intensity. Our method also works in the case of non-Gaussian noise distributions. The maps of the corrected polarised intensities and polarisation angles are reliable even in regions with weak signals and provide integrated flux densities and degrees of polarisation without the cumulative effect of the bias, which especially affects faint sources. Features at low intensity levels like 'depolarisation canals' are smoother than in maps produced with the previous methods, which has broader implications, for example on the interpretation of interstellar turbulence.
[ 0, 1, 0, 0, 0, 0 ]
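The projection idea reduces to a few lines; a sketch using a plain median filter as a stand-in for the authors' modified one (the filter size and the use of scipy here are assumptions):

    import numpy as np
    from scipy.ndimage import median_filter

    def corrected_polarised_intensity(Q, U, size=5):
        """Estimate the source polarisation angle chi from smoothed
        Stokes maps, then project the observed (Q, U) onto that
        direction; noise then enters linearly, without positive bias."""
        chi = 0.5 * np.arctan2(median_filter(U, size), median_filter(Q, size))
        return Q * np.cos(2 * chi) + U * np.sin(2 * chi)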
Title: Bootstrapping kernel intensity estimation for nonhomogeneous point processes depending on spatial covariates, Abstract: In the spatial point process context, kernel intensity estimation has been mainly restricted to exploratory analysis due to its lack of consistency. Different methods have been analysed to overcome this problem, and the inclusion of covariates proved to be one possible solution. In this paper we focus on defining a theoretical framework to derive a consistent kernel intensity estimator using covariates, as well as a consistent smooth bootstrap procedure. We define two new data-driven bandwidth selectors specifically designed for our estimator: a rule-of-thumb and a plug-in bandwidth based on our consistent bootstrap method. A simulation study is carried out to assess the performance of our proposals in finite samples. Finally, we describe an application to a real data set consisting of the wildfires in Canada during June 2015, using meteorological information as covariates.
[ 0, 0, 0, 1, 0, 0 ]
Title: Network Representation Learning: A Survey, Abstract: With the widespread use of information technologies, information networks are becoming increasingly popular to capture complex relationships across various disciplines, such as social networks, citation networks, telecommunication networks, and biological networks. Analyzing these networks sheds light on different aspects of social life such as the structure of societies, information diffusion, and communication patterns. In reality, however, the large scale of information networks often makes network analytic tasks computationally expensive or intractable. Network representation learning has been recently proposed as a new learning paradigm to embed network vertices into a low-dimensional vector space, by preserving network topology structure, vertex content, and other side information. This facilitates the original network to be easily handled in the new vector space for further analysis. In this survey, we perform a comprehensive review of the current literature on network representation learning in the data mining and machine learning field. We propose new taxonomies to categorize and summarize the state-of-the-art network representation learning techniques according to the underlying learning mechanisms, the network information intended to preserve, as well as the algorithmic designs and methodologies. We summarize evaluation protocols used for validating network representation learning including published benchmark datasets, evaluation methods, and open source algorithms. We also perform empirical studies to compare the performance of representative algorithms on common datasets, and analyze their computational complexity. Finally, we suggest promising research directions to facilitate future study.
[ 1, 0, 0, 1, 0, 0 ]
Title: Maximum Entropy Flow Networks, Abstract: Maximum entropy modeling is a flexible and popular framework for formulating statistical models given partial knowledge. In this paper, rather than the traditional method of optimizing over the continuous density directly, we learn a smooth and invertible transformation that maps a simple distribution to the desired maximum entropy distribution. Doing so is nontrivial in that the objective being maximized (entropy) is a function of the density itself. By exploiting recent developments in normalizing flow networks, we cast the maximum entropy problem into a finite-dimensional constrained optimization, and solve the problem by combining stochastic optimization with the augmented Lagrangian method. Simulation results demonstrate the effectiveness of our method, and applications to finance and computer vision show the flexibility and accuracy of using maximum entropy flow networks.
[ 0, 0, 0, 1, 0, 0 ]
Title: Visual Speech Language Models, Abstract: Language models (LMs) are very powerful in lipreading systems. Language models built upon the ground-truth utterances of datasets learn the grammar and structure rules of words and sentences (the latter in the case of continuous speech). However, visual co-articulation effects in visual speech signals damage the performance of visual speech LMs because, visually, people do not utter what the language model expects. These models are commonplace, but while higher-order N-gram LMs may improve classification rates, the cost of such models is disproportionate to the common goal of developing more accurate classifiers. We therefore compare which unit would best optimize a lipreading (visual speech) LM in order to observe their limitations. We compare three units: visemes (visual speech units) \cite{lan2010improving}, phonemes (audible speech units), and words.
[ 1, 0, 0, 0, 0, 0 ]
Title: Millisecond Pulsars as Standards: Timing, positioning and communication, Abstract: Millisecond pulsars (MSPs) have a great potential to set standards in timekeeping, positioning and metadata communication.
[ 0, 1, 0, 0, 0, 0 ]
Title: Discerning Dark Energy Models with High-Redshift Standard Candles, Abstract: Following the success of type Ia supernovae in constraining cosmologies at lower redshift $(z\lesssim2)$, effort has been spent determining if a similarly useful standardisable candle can be found at higher redshift. In this work we determine the largest possible magnitude discrepancy between a constant dark energy $\Lambda$CDM cosmology and a cosmology in which the equation of state $w(z)$ of dark energy is a function of redshift for high redshift standard candles $(z\gtrsim2)$. We discuss a number of popular parametrisations of $w(z)$ with two free parameters, $w_z$CDM cosmologies, including the Chevallier-Polarski-Linder and generalisation thereof, $n$CPL, as well as the Jassal-Bagla-Padmanabhan parametrisation. For each of these parametrisations we calculate and find extrema of $\Delta \mu$, the difference between the distance modulus of a $w_z$CDM cosmology and a fiducial $\Lambda$CDM cosmology as a function of redshift, given 68% likelihood constraints on the parameters $P=(\Omega_{m,0}, w_0, w_a)$. The parameters are constrained using cosmic microwave background, baryon acoustic oscillations, and type Ia supernovae data using CosmoMC. We find that none of the tested cosmologies can deviate more than 0.05 mag from the fiducial $\Lambda$CDM cosmology at high redshift, implying that high redshift standard candles will not aid in discerning between a $w_z$CDM cosmology and the fiducial $\Lambda$CDM cosmology. Conversely, this implies that if high redshift standard candles are found to be in disagreement with $\Lambda$CDM at high redshift, then this is a problem not only for $\Lambda$CDM but for the entire family of $w_z$CDM cosmologies.
[ 0, 1, 0, 0, 0, 0 ]
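For concreteness, the CPL parametrisation referred to above and the quantity being extremised are

\[
w(z) = w_0 + w_a\,\frac{z}{1+z},
\qquad
\Delta\mu(z) = 5\log_{10}\frac{d_L^{w_z\mathrm{CDM}}(z)}{d_L^{\Lambda\mathrm{CDM}}(z)},
\]

where $d_L$ is the luminosity distance, so $\Delta\mu$ is exactly the difference of distance moduli between the two cosmologies.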
Title: On stochastic differential equations with arbitrarily slow convergence rates for strong approximation in two space dimensions, Abstract: In the recent article [Jentzen, A., Müller-Gronbach, T., and Yaroslavtseva, L., Commun. Math. Sci., 14(6), 1477--1500, 2016] it has been established that for every arbitrarily slow convergence speed and every natural number $d \in \{4,5,\ldots\}$ there exist $d$-dimensional stochastic differential equations (SDEs) with infinitely often differentiable and globally bounded coefficients such that no approximation method based on finitely many observations of the driving Brownian motion can converge in absolute mean to the solution faster than the given speed of convergence. In this paper we strengthen the above result by proving that this slow convergence phenomena also arises in two ($d=2$) and three ($d=3$) space dimensions.
[ 0, 0, 1, 0, 0, 0 ]
Title: Adaptive Estimation of Nonparametric Geometric Graphs, Abstract: This article studies the recovery of graphons when they are convolution kernels on compact (symmetric) metric spaces. This case is of particular interest since it covers the situation where the probability of an edge depends only on some unknown nonparametric function of the distance between latent points, referred to as Nonparametric Geometric Graphs (NGG). In this setting, almost minimax adaptive estimation of NGG is possible using a spectral procedure combined with a Goldenshluger-Lepski adaptation method. The latent spaces covered by our framework encompass (among others) compact symmetric spaces of rank one, namely real spheres and projective spaces. For the latter, explicit computations of the eigenbasis and of the model complexity can be achieved, leading to quantitative non-asymptotic results. The time complexity of our method scales cubically in the size of the graph and exponentially in the regularity of the graphon. Hence, this paper offers an algorithmically and theoretically efficient procedure to estimate smooth NGG. As a by-product, this paper shows a non-asymptotic concentration result on the spectrum of integral operators defined by symmetric kernels (not necessarily positive).
[ 0, 0, 1, 1, 0, 0 ]
Title: Angular and Temporal Correlation of V2X Channels Across Sub-6 GHz and mmWave Bands, Abstract: 5G millimeter wave (mmWave) technology is envisioned to be an integral part of next-generation vehicle-to-everything (V2X) networks and autonomous vehicles due to its broad bandwidth, wide field of view sensing, and precise localization capabilities. The reliability of mmWave links may be compromised due to difficulties in beam alignment for mobile channels and due to blocking effects between a mmWave transmitter and a receiver. To address such challenges, out-of-band information from sub-6 GHz channels can be utilized for predicting the temporal and angular channel characteristics in mmWave bands, which necessitates a good understanding of how propagation characteristics are coupled across different bands. In this paper, we use ray tracing simulations to characterize the angular and temporal correlation across a wide range of propagation frequencies for V2X channels ranging from 900 MHz up to 73 GHz, for a vehicle maintaining line-of-sight (LOS) and non-LOS (NLOS) beams with a transmitter in an urban environment. Our results shed light on increasing sparsity behavior of propagation channels with increasing frequency and highlight the strong temporal/angular correlation among 5.9 GHz and 28 GHz bands especially for LOS channels.
[ 1, 0, 0, 0, 0, 0 ]
Title: Comparison of Decision Tree Based Classification Strategies to Detect External Chemical Stimuli from Raw and Filtered Plant Electrical Response, Abstract: Plants monitor their surrounding environment and control their physiological functions by producing an electrical response. We recorded electrical signals from different plants by exposing them to Sodium Chloride (NaCl), Ozone (O3) and Sulfuric Acid (H2SO4) under laboratory conditions. After applying pre-processing techniques such as filtering and drift removal, we extracted a few statistical features from the acquired plant electrical signals. Using these features, combined with different classification algorithms, we used a decision tree based multi-class classification strategy to identify the three different external chemical stimuli. Here we present our exploration to obtain the optimum combination of ranked features and classifier that can separate a particular chemical stimulus from the incoming stream of plant electrical signals. The paper also reports an exhaustive comparison of similar feature-based classification using the filtered and the raw plant signals (the latter containing both the high-frequency stochastic part and the low-frequency trends) as two different cases for feature extraction. The work presented in this paper opens up new possibilities for using plant electrical signals to monitor and detect other environmental stimuli apart from NaCl, O3 and H2SO4 in the future.
[ 1, 1, 0, 1, 0, 0 ]
Title: Neutrino Fluxes from a Core-Collapse Supernova in a Model with Three Sterile Neutrinos, Abstract: The characteristics of the gravitational collapse of a supernova and the fluxes of active and sterile neutrinos produced during the formation of its protoneutron core have been calculated numerically. The relative yields of active and sterile neutrinos in core matter with different degrees of neutronization have been calculated for various input parameters and various initial conditions. A significant increase in the fraction of sterile neutrinos produced in superdense core matter at the resonant degree of neutronization has been confirmed. The contributions of sterile neutrinos to the collapse dynamics and the total flux of neutrinos produced during collapse have been shown to be relatively small. The total luminosity of sterile neutrinos is considerably lower than the luminosity of electron neutrinos, but their spectrum is considerably harder at high energies.
[ 0, 1, 0, 0, 0, 0 ]
Title: Modular operads and Batalin-Vilkovisky geometry, Abstract: This is a copy of the article published in IMRN (2007). I describe the noncommutative Batalin-Vilkovisky geometry associated naturally with arbitrary modular operad. The classical limit of this geometry is the noncommutative symplectic geometry of the corresponding tree-level cyclic operad. I show, in particular, that the algebras over the Feynman transform of a twisted modular operad P are in one-to-one correspondence with solutions to quantum master equation of Batalin-Vilkovisky geometry on the affine P-manifolds. As an application I give a construction of characteristic classes with values in the homology of the quotient of Deligne-Mumford moduli spaces. These classes are associated naturally with solutions to the quantum master equation on affine S[t]-manifolds, where S[t] is the twisted modular Det-operad constructed from symmetric groups, which generalizes the cyclic operad of associative algebras.
[ 0, 0, 1, 0, 0, 0 ]
Title: Regularized arrangements of cellular complexes, Abstract: In this paper we propose a novel algorithm to combine two or more cellular complexes, providing a minimal fragmentation of the cells of the resulting complex. We introduce here the idea of arrangement generated by a collection of cellular complexes, producing a cellular decomposition of the embedding space. The algorithm that executes this computation is called \emph{Merge} of complexes. The arrangements of line segments in 2D and polygons in 3D are special cases, as well as the combination of closed triangulated surfaces or meshed models. This algorithm has several important applications, including Boolean and other set operations over large geometric models, the extraction of solid models of biomedical structures at the cellular scale, the detailed geometric modeling of buildings, the combination of 3D meshes, and the repair of graphical models. The algorithm is efficiently implemented using the Linear Algebraic Representation (LAR) of argument complexes, i.e., on sparse representation of binary characteristic matrices of $d$-cell bases, well-suited for implementation in last generation accelerators and GPGPU applications.
[ 1, 0, 0, 0, 0, 0 ]
Title: Duality of deconfined quantum critical point in two-dimensional Dirac semimetals, Abstract: In this paper we discuss the Néel and Kekulé valence bond solid quantum criticality in the graphene Dirac semimetal. Considering the quartic four-fermion interaction $g(\bar{\psi}_i\Gamma_{ij}\psi_j)^2$ that contains spin, valley, and sublattice degrees of freedom in the continuum field theory, we find the microscopic symmetry is spontaneously broken when the coupling $g$ is greater than a critical value $g_c$. The symmetry breaking gaps out the fermion and leads to a semimetal-insulator transition. All possible quartic fermion-bilinear interactions give rise to a uniform critical coupling, which exhibits a multicritical point for various orders and a Landau-forbidden quantum critical point. We also investigate the typical critical point of the Néel to Kekulé valence bond solid transition when the symmetry is broken. The quantum criticality is captured by the Wess-Zumino-Witten term and there exists a mutual duality for the Néel-Kekulé VBS order. We show the emergent spinon in the Néel-Kekulé VBS transition, from which we conclude the phase transition is a deconfined quantum critical point. Additionally, the connection between the index theorem and the zero-energy mode bound by the topological defect in the Kekulé VBS phase is studied to reveal the Néel-Kekulé VBS duality.
[ 0, 1, 0, 0, 0, 0 ]
Title: An Empirical Bayes Approach to Regularization Using Previously Published Models, Abstract: This manuscript proposes a novel empirical Bayes technique for regularizing regression coefficients in predictive models. When predictions from a previously published model are available, this empirical Bayes method provides a natural mathematical framework for shrinking coefficients toward the estimates implied by the body of existing research, rather than toward zero as in traditional L1 and L2 penalization schemes. The method is applied to two different prediction problems. The first involves the construction of a model for predicting whether a single nucleotide polymorphism (SNP) of the KCNQ1 gene will result in dysfunction of the corresponding voltage gated ion channel. The second involves the prediction of preoperative serum creatinine change in patients undergoing cardiac surgery.
[ 0, 0, 0, 1, 0, 0 ]
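The shrinkage target is the only change relative to ridge regression; a minimal sketch of the idea (the paper's empirical Bayes machinery for choosing the penalty is not reproduced, so `lam` here is a free parameter):

    import numpy as np

    def shrink_to_published(X, y, beta_prior, lam=1.0):
        """Solve argmin_b ||y - X b||^2 + lam * ||b - beta_prior||^2.
        Setting beta_prior = 0 recovers ordinary ridge regression; a
        nonzero beta_prior shrinks toward previously published
        coefficients instead of toward zero."""
        d = X.shape[1]
        return np.linalg.solve(X.T @ X + lam * np.eye(d),
                               X.T @ y + lam * beta_prior)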
Title: On minimum distance of locally repairable codes, Abstract: Distributed and cloud storage systems are used to reliably store large-scale data. Erasure codes have recently been proposed and used in real-world distributed and cloud storage systems such as the Google File System, Microsoft Azure Storage, and Facebook HDFS-RAID to enhance reliability. In order to decrease the repair bandwidth and disk I/O, a class of erasure codes called locally repairable codes (LRCs) has been proposed which have small locality compared to other erasure codes. Although LRCs have small locality, they have a lower minimum distance compared to the Singleton bound. Hence, seeking the largest possible minimum distance for LRCs has been the topic of many recent studies. In this paper, we study the largest possible minimum distance of a class of LRCs and evaluate them in terms of achievability. Furthermore, we compare our results with existing bounds in the literature.
[ 1, 0, 0, 0, 0, 0 ]
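The benchmark for "largest possible minimum distance" here is the Singleton-like bound of Gopalan et al.: for an $(n, k)$ code in which every symbol has locality $r$,

\[
d \;\le\; n - k - \left\lceil \frac{k}{r} \right\rceil + 2,
\]

which reduces to the classical Singleton bound $d \le n - k + 1$ when $r \ge k$; the gap between the two quantifies the distance penalty paid for small locality.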
Title: Towards a population synthesis model of self-gravitating disc fragmentation and tidal downsizing II: The effect of fragment-fragment interactions, Abstract: It is likely that most protostellar systems undergo a brief phase where the protostellar disc is self-gravitating. If these discs are prone to fragmentation, then they are able to rapidly form objects that are initially of several Jupiter masses and larger. The fate of these disc fragments (and the fate of planetary bodies formed afterwards via core accretion) depends sensitively not only on the fragment's interaction with the disc, but also on its interactions with neighbouring fragments. We return to and revise our population synthesis model of self-gravitating disc fragmentation and tidal downsizing. Amongst other improvements, the model now directly incorporates fragment-fragment interactions while the disc is still present. We find that fragment-fragment scattering dominates the orbital evolution, even when we enforce rapid migration and inefficient gap formation. Compared to our previous model, we see a small increase in the number of terrestrial-type objects being formed, although their survival under tidal evolution is at best unclear. We also see evidence for disrupted fragments with evolved grain populations - this is circumstantial evidence for the formation of planetesimal belts, a phenomenon not seen in runs where fragment-fragment interactions are ignored. In spite of intense dynamical evolution, our population is dominated by massive giant planets and brown dwarfs at large semimajor axis, which direct imaging surveys should, but only rarely, detect. Finally, disc fragmentation is shown to be an efficient manufacturer of free floating planetary mass objects, and the typical multiplicity of systems formed via gravitational instability will be low.
[ 0, 1, 0, 0, 0, 0 ]
Title: Credit Risk Meets Random Matrices: Coping with Non-Stationary Asset Correlations, Abstract: We review recent progress in modeling credit risk for correlated assets. We start from the Merton model, in which default events and losses are derived from the asset values at maturity. To estimate the time development of the asset values, the stock prices are used, whose correlations have a strong impact on the loss distribution, particularly on its tails. These correlations are non-stationary, which also influences the tails. We account for the asset fluctuations by averaging over an ensemble of random matrices that models the truly existing set of measured correlation matrices. As a most welcome side effect, this approach drastically reduces the parameter dependence of the loss distribution, allowing us to obtain very explicit results which show quantitatively that the heavy tails prevail over diversification benefits even for small correlations. We calibrate our random matrix model with market data and show how it is capable of grasping different market situations. Furthermore, we present numerical simulations for concurrent portfolio risks, i.e., for the joint probability densities of losses for two portfolios. For the convenience of the reader, we give an introduction to the Wishart random matrix model.
[ 0, 0, 0, 0, 0, 1 ]
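A sketch of the ensemble average described above: random matrices fluctuating around a mean correlation matrix $C$ can be drawn as Wishart-type sample correlations, with the effective observation count $N$ controlling the fluctuation strength (a generic construction; the paper's calibration details are not reproduced):

    import numpy as np

    def wishart_ensemble(C, N, n_draws, seed=1):
        """Yield random matrices (W W^T) / N that scatter around C;
        smaller N means stronger fluctuations, mimicking non-stationary
        market correlations. C must be positive definite."""
        rng = np.random.default_rng(seed)
        L = np.linalg.cholesky(C)
        for _ in range(n_draws):
            W = L @ rng.standard_normal((C.shape[0], N))
            yield (W @ W.T) / N

Averaging a loss distribution over such draws is what washes out the parameter dependence the abstract mentions.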
Title: Overcoming data scarcity with transfer learning, Abstract: Despite increasing focus on data publication and discovery in materials science and related fields, the global view of materials data is highly sparse. This sparsity encourages training models on the union of multiple datasets, but simple unions can prove problematic as (ostensibly) equivalent properties may be measured or computed differently depending on the data source. These hidden contextual differences introduce irreducible errors into analyses, fundamentally limiting their accuracy. Transfer learning, where information from one dataset is used to inform a model on another, can be an effective tool for bridging sparse data while preserving the contextual differences in the underlying measurements. Here, we describe and compare three techniques for transfer learning: multi-task, difference, and explicit latent variable architectures. We show that difference architectures are most accurate in the multi-fidelity case of mixed DFT and experimental band gaps, while the multi-task architecture most improves the classification performance for color when trained jointly with band gaps. For activation energies of steps in NO reduction, the explicit latent variable method is not only the most accurate, but also enjoys cancellation of errors in functions that depend on multiple tasks. These results motivate the publication of high quality materials datasets that encode transferable information, independent of industrial or academic interest in the particular labels, and encourage further development and application of transfer learning methods to materials informatics problems.
[ 1, 0, 0, 1, 0, 0 ]