Dataset schema: each record consists of a title (string, 7 to 239 characters), an abstract (string, 7 to 2.76k characters), and six binary int64 topic labels taking values 0 or 1: cs, phy, math, stat, quantitative biology (q-bio), and quantitative finance (q-fin). Each record below gives the title, the abstract, and its labels.
Beyond Shared Hierarchies: Deep Multitask Learning through Soft Layer Ordering
Existing deep multitask learning (MTL) approaches align layers shared between tasks in a parallel ordering. Such an organization significantly constricts the types of shared structure that can be learned. The necessity of parallel ordering for deep MTL is first tested by comparing it with permuted ordering of shared layers. The results indicate that a flexible ordering can enable more effective sharing, thus motivating the development of a soft ordering approach, which learns how shared layers are applied in different ways for different tasks. Deep MTL with soft ordering outperforms parallel ordering methods across a series of domains. These results suggest that the power of deep MTL comes from learning highly general building blocks that can be assembled to meet the demands of each task.
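To make the soft-ordering idea concrete, here is a minimal PyTorch sketch in which learnable per-task, per-depth softmax weights mix the outputs of a pool of shared layers; all names, shapes, and the ReLU choice are our illustrative assumptions, not the paper's code.

```python
import torch
import torch.nn as nn

class SoftOrderNet(nn.Module):
    """Soft ordering sketch: each task applies the shared layers in its
    own learned soft order instead of one fixed parallel stack."""
    def __init__(self, dim, depth, num_tasks):
        super().__init__()
        self.shared = nn.ModuleList([nn.Linear(dim, dim) for _ in range(depth)])
        # logits[t, d, l] = weight of shared layer l at depth d for task t
        self.logits = nn.Parameter(torch.zeros(num_tasks, depth, depth))

    def forward(self, x, task):
        for d in range(len(self.shared)):
            w = torch.softmax(self.logits[task, d], dim=0)  # soft layer choice
            x = torch.relu(sum(w[l] * layer(x) for l, layer in enumerate(self.shared)))
        return x

net = SoftOrderNet(dim=16, depth=3, num_tasks=4)
print(net(torch.randn(8, 16), task=2).shape)  # torch.Size([8, 16])
```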
Labels: cs=1, phy=0, math=0, stat=1, q-bio=0, q-fin=0
Relational Algebra for In-Database Process Mining
The execution logs that are used for process mining in practice are often obtained by querying an operational database and storing the result in a flat file. Consequently, the data processing power of the database system can no longer be used for this information, leading to constrained flexibility in the definition of mining patterns and limited execution performance when mining large logs. Enabling process mining directly on a database - instead of via intermediate storage in a flat file - therefore provides additional flexibility and efficiency. To help facilitate this ideal of in-database process mining, this paper formally defines a database operator that extracts the 'directly follows' relation from an operational database. This operator can be used both to do in-database process mining and to flexibly evaluate process mining related queries, such as: "which employee most frequently changes the 'amount' attribute of a case from one task to the next?". We define the operator using the well-known relational algebra that forms the formal underpinning of relational databases. We formally prove equivalence properties of the operator that are useful for query optimization and present time-complexity properties of the operator. By doing so, this paper formally defines the necessary relational algebraic elements of a 'directly follows' operator, which are required for the implementation of such an operator in a DBMS.
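For intuition, a toy pandas sketch of extracting directly-follows pairs from an event log is shown below; the column names and data are assumptions for illustration, whereas the paper itself defines the operator inside the DBMS in relational algebra.

```python
import pandas as pd

# Toy event log: one row per executed task (hypothetical schema).
log = pd.DataFrame({
    "case_id": [1, 1, 1, 2, 2],
    "task": ["register", "check", "pay", "register", "pay"],
    "timestamp": pd.to_datetime(["2024-01-01", "2024-01-02", "2024-01-03",
                                 "2024-01-01", "2024-01-04"]),
})

log = log.sort_values(["case_id", "timestamp"])
# 'Directly follows': pair each event with the next event of the same case.
df_relation = (log.assign(next_task=log.groupby("case_id")["task"].shift(-1))
                  .dropna(subset=["next_task"]))
print(df_relation[["case_id", "task", "next_task"]])
```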
Labels: cs=1, phy=0, math=0, stat=0, q-bio=0, q-fin=0
Global existence for the nonlinear fractional Schrödinger equation with fractional dissipation
We consider the initial value problem for the fractional nonlinear Schrödinger equation with a fractional dissipation. Global existence and scattering are proved depending on the order of the fractional dissipation.
Labels: cs=0, phy=0, math=1, stat=0, q-bio=0, q-fin=0
Statistical properties of an enstrophy conserving discretisation for the stochastic quasi-geostrophic equation
A framework of variational principles for stochastic fluid dynamics was presented by Holm (2015), and the corresponding stochastic equations were also derived by Cotter et al. (2017). We present a conforming finite element discretisation for the stochastic quasi-geostrophic equation derived from this framework. The discretisation preserves the first two moments of potential vorticity, i.e. the mean potential vorticity and the enstrophy. Following the work of Dubinkina and Frank (2007), who investigated the statistical mechanics of discretisations of the deterministic quasi-geostrophic equation, we investigate the statistical mechanics of our discretisation of the stochastic quasi-geostrophic equation. We compare the statistical properties of our discretisation with the Gibbs distribution under the assumption that these quantities are conserved, finding agreement between the statistics under a wide range of set-ups.
Labels: cs=0, phy=1, math=0, stat=0, q-bio=0, q-fin=0
Conditional Optimal Stopping: A Time-Inconsistent Optimization
Inspired by recent work of P.-L. Lions on conditional optimal control, we introduce a problem of optimal stopping under bounded rationality: the objective is the expected payoff at the time of stopping, conditioned on another event. For instance, an agent may care only about states where she is still alive at the time of stopping, or a company may condition on not being bankrupt. We observe that conditional optimization is time-inconsistent due to the dynamic change of the conditioning probability and develop an equilibrium approach in the spirit of R. H. Strotz' work for sophisticated agents in discrete time. Equilibria are found to be essentially unique in the case of a finite time horizon whereas an infinite horizon gives rise to non-uniqueness and other interesting phenomena. We also introduce a theory which generalizes the classical Snell envelope approach for optimal stopping by considering a pair of processes with Snell-type properties.
Labels: cs=0, phy=0, math=0, stat=0, q-bio=0, q-fin=1
Principles for optimal cooperativity in allosteric materials
Allosteric proteins transmit a mechanical signal induced by binding a ligand. However, understanding the nature of the information transmitted and the architectures optimizing such transmission remains a challenge. Here we show, using an in-silico evolution scheme and theoretical arguments, that architectures optimized to be cooperative, which propagate energy efficiently, qualitatively differ from previously investigated materials optimized to propagate strain. Although we observe a large diversity of functioning cooperative architectures (including shear, hinge and twist designs), they all obey the same principle of displaying a mechanism, i.e. an extended soft mode. We show that its optimal frequency decreases with the spatial extension $L$ of the system as $L^{-d/2}$, where $d$ is the spatial dimension. For these optimal designs, cooperativity decays logarithmically with $L$ for $d=2$ and does not decay for $d=3$. Overall, our approach leads to a natural explanation for several observations in allosteric proteins, and indicates an experimental path to test whether allosteric proteins lie close to optimality.
Labels: cs=0, phy=1, math=0, stat=0, q-bio=0, q-fin=0
Improved electronic structure and magnetic exchange interactions in transition metal oxides
We discuss the application of the Agapito, Curtarolo, and Buongiorno Nardelli (ACBN0) pseudo-hybrid Hubbard density functional to several transition metal oxides. ACBN0 is a fast, accurate and parameter-free alternative to traditional DFT+$U$ and hybrid exact exchange methods. In ACBN0, the Hubbard energy of DFT+$U$ is calculated via the direct evaluation of the local Coulomb and exchange integrals, in which the screening of the bare Coulomb potential is accounted for by a renormalization of the density matrix. We demonstrate the success of the ACBN0 approach for the electronic properties of a series of technologically relevant mono-oxides (MnO, CoO, NiO, FeO, both at equilibrium and under pressure). We also present results on two mixed-valence compounds, Co$_3$O$_4$ and Mn$_3$O$_4$. Our results, obtained at the computational cost of a standard LDA/PBE calculation, are in excellent agreement with hybrid functionals, the GW approximation and experimental measurements.
Labels: cs=0, phy=1, math=0, stat=0, q-bio=0, q-fin=0
Test of SensL SiPM coated with NOL-1 wavelength shifter in liquid xenon
A SensL MicroFC-SMT-60035 6x6 mm$^2$ silicon photomultiplier coated with the NOL-1 wavelength shifter has been tested in liquid xenon to detect the 175-nm scintillation light. For comparison, a Hamamatsu vacuum-ultraviolet-sensitive MPPC VUV3 3x3 mm$^2$ was tested under the same conditions. Photodetection efficiencies of $13.1 \pm 2.5$% and $6.0 \pm 1.0$%, respectively, are obtained.
Labels: cs=0, phy=1, math=0, stat=0, q-bio=0, q-fin=0
Neon2: Finding Local Minima via First-Order Oracles
We propose a reduction for non-convex optimization that can (1) turn a stationary-point-finding algorithm into a local-minimum-finding one, and (2) replace the Hessian-vector product computations with only gradient computations. It works in both the stochastic and the deterministic settings, without hurting the algorithm's performance. As applications, our reduction turns Natasha2 into a first-order method without hurting its performance. It also converts SGD, GD, SCSG, and SVRG into algorithms that find approximate local minima, outperforming some of the best known results.
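Point (2) rests on a classical trick the abstract alludes to: a Hessian-vector product can be approximated by a finite difference of two gradients. A minimal sketch on a toy quadratic (our choice of objective and step $\delta$):

```python
import numpy as np

A = np.array([[2.0, 0.5],
              [0.5, -1.0]])      # symmetric, indefinite toy Hessian

def grad(x):
    return A @ x                 # gradient of f(x) = 0.5 * x^T A x

x = np.array([1.0, -1.0])
v = np.array([0.3, 0.7])
delta = 1e-5
hv = (grad(x + delta * v) - grad(x)) / delta   # Hessian-vector product via gradients only
print(hv, A @ v)                               # the two agree up to O(delta)
```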
Labels: cs=1, phy=0, math=0, stat=1, q-bio=0, q-fin=0
Geometrical Insights for Implicit Generative Modeling
Learning algorithms for implicit generative models can optimize a variety of criteria that measure how the data distribution differs from the implicit model distribution, including the Wasserstein distance, the Energy distance, and the Maximum Mean Discrepancy criterion. A careful look at the geometries induced by these distances on the space of probability measures reveals interesting differences. In particular, we can establish surprising approximate global convergence guarantees for the $1$-Wasserstein distance, even when the parametric generator has a nonconvex parametrization.
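As a point of reference for one of the criteria discussed, below is a sample-based sketch of the Energy distance; we use the biased V-statistic estimator and toy Gaussian data for brevity.

```python
import numpy as np

def energy_distance(x, y):
    """2 E||X-Y|| - E||X-X'|| - E||Y-Y'|| from samples (biased estimator)."""
    pdist = lambda a, b: np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return 2 * pdist(x, y).mean() - pdist(x, x).mean() - pdist(y, y).mean()

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=(256, 2))
y = rng.normal(0.5, 1.0, size=(256, 2))
print(energy_distance(x, y))   # strictly positive when the distributions differ
```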
Labels: cs=1, phy=0, math=0, stat=1, q-bio=0, q-fin=0
Simple Countermeasures to Mitigate the Effect of Pollution Attack in Network Coding Based Peer-to-Peer Live Streaming
Network coding based peer-to-peer streaming represents an effective solution to aggregate user capacities and to increase system throughput in live multimedia streaming. Nonetheless, such systems are vulnerable to pollution attacks where a handful of malicious peers can disrupt the communication by transmitting just a few bogus packets which are then recombined and relayed by unaware honest nodes, further spreading the pollution over the network. Whereas previous research focused on malicious node identification schemes and pollution-resilient coding, in this paper we show pollution countermeasures which make a standard network coding scheme resilient to pollution attacks. Thanks to a simple yet effective analytical model of a reference node collecting packets from malicious and honest neighbors, we demonstrate that i) packets received earlier are less likely to be polluted and ii) short generations increase the likelihood of recovering a clean generation. Therefore, we propose a recombination scheme where nodes draw the packets to be recombined according to their age in the input queue, paired with short generations and a decoding scheme able to detect the reception of polluted packets early in the decoding process. The effectiveness of our approach is experimentally evaluated in a real system we developed and deployed on hundreds to thousands of peers. Experimental evidence shows that, thanks to our simple countermeasures, the effect of a pollution attack is almost canceled and the video quality experienced by the peers is comparable to pre-attack levels.
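A minimal sketch of the age-biased draw described above; age-proportional weights are our illustrative choice, and the paper's exact distribution may differ.

```python
import numpy as np

rng = np.random.default_rng(0)
ages = np.array([9.5, 7.2, 4.0, 1.1, 0.3])   # time in input queue (s), oldest first
probs = ages / ages.sum()                    # older packets are less likely polluted
picked = rng.choice(len(ages), size=3, replace=False, p=probs)
print(picked)                                # indices of packets to recombine
```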
Labels: cs=1, phy=0, math=0, stat=0, q-bio=0, q-fin=0
Small-scale structure and the Lyman-$α$ forest baryon acoustic oscillation feature
The baryon-acoustic oscillation (BAO) feature in the Lyman-$\alpha$ forest is one of the key probes of the cosmic expansion rate at redshifts z~2.5, well before dark energy is believed to have become dynamically significant. A key advantage of the BAO as a standard ruler is that it is a sharp feature and hence is more robust against broadband systematic effects than other cosmological probes. However, if the Lyman-$\alpha$ forest transmission is sensitive to the initial streaming velocity of the baryons relative to the dark matter, then the BAO peak position can be shifted. Here we investigate this sensitivity using a suite of hydrodynamic simulations of small regions of the intergalactic medium with a range of box sizes and physics assumptions; each simulation starts from initial conditions at the kinematic decoupling era (z~1059), undergoes a discrete change from neutral gas to ionized gas thermal evolution at reionization (z~8), and is finally processed into a Lyman-$\alpha$ forest transmitted flux cube. Streaming velocities suppress small-scale structure, leading to less violent relaxation after reionization. The changes in the gas distribution and temperature-density relation at low redshift are more subtle, due to the convergent temperature evolution in the ionized phase. The change in the BAO scale is estimated to be of the order of 0.12% at z=2.5; some of the major uncertainties and avenues for future improvement are discussed. The predicted streaming velocity shift would be a subdominant but not negligible effect (of order $0.26\sigma$) for the upcoming DESI Lyman-$\alpha$ forest survey, and exceeds the cosmic variance floor. It is hoped that this study will motivate additional theoretical work on the magnitude of the BAO shift, both in the Lyman-$\alpha$ forest and in other tracers of large-scale structure.
Labels: cs=0, phy=1, math=0, stat=0, q-bio=0, q-fin=0
Scale-dependent perturbations finally detectable by future galaxy surveys and their contribution to cosmological model selection
By means of the present geometrical and dynamical observational data, it is very hard to establish, from a statistical perspective, a clear preference among the many proposed dynamical dark energy and/or modified gravity alternatives to the $\Lambda$CDM scenario. On the other hand, on scales much smaller than the present Hubble scale, there are possibly detectable differences in the growth of matter perturbations for different modes of the perturbations, even in the context of the $\Lambda$CDM model. Here, we analyze the evolution of dark matter perturbations in the context of $\Lambda$CDM and some dynamical dark energy models involving future cosmological singularities, such as the sudden future singularity and the finite scale factor singularity. We employ the Newtonian gauge formulation for the derivation of the perturbation equations for the growth function, and we abandon both the sub-Hubble approximation and the slowly varying potential assumption. We apply the Fisher matrix approach to three planned galaxy surveys: DESI, Euclid, and WFIRST-2.4. With these surveys, using the dynamical probes alone, we will achieve multiple goals: (1) the improved accuracy in the determination of $f\sigma_{8}$ will make it possible to discriminate between $\Lambda$CDM and the alternative dark energy models even in the scale-independent approach; (2) it will finally be possible to test the validity of the scale-independent approximation, and to quantify the necessity of a scale-dependent approach to the growth of perturbations, in particular using surveys which encompass redshift bins with scales $k<0.005\,h$ Mpc$^{-1}$; (3) the scale dependence itself might add much more discriminating power in general, but further advanced surveys will be needed.
Labels: cs=0, phy=1, math=0, stat=0, q-bio=0, q-fin=0
InfoCatVAE: Representation Learning with Categorical Variational Autoencoders
This paper describes InfoCatVAE, an extension of the variational autoencoder that enables unsupervised disentangled representation learning. InfoCatVAE uses multimodal distributions for the prior and the inference network and then maximizes the evidence lower bound objective (ELBO). We connect the new ELBO derived for our model with a natural soft clustering objective, which explains the robustness of our approach. We then adapt the InfoGAN method to our setting in order to maximize the mutual information between the categorical code and the generated inputs, and obtain an improved model.
Labels: cs=0, phy=0, math=0, stat=1, q-bio=0, q-fin=0
Quadratic twists of abelian varieties and disparity in Selmer ranks
We study the parity of 2-Selmer ranks in the family of quadratic twists of a fixed principally polarised abelian variety over a number field. Specifically, we determine the proportion of twists having odd (resp. even) 2-Selmer rank. This generalises work of Klagsbrun--Mazur--Rubin for elliptic curves and Yu for Jacobians of hyperelliptic curves. Several differences in the statistics arise due to the possibility that the Shafarevich--Tate group (if finite) may have order twice a square. In particular, the statistics for parities of 2-Selmer ranks and 2-infinity Selmer ranks need no longer agree and we describe both.
Labels: cs=0, phy=0, math=1, stat=0, q-bio=0, q-fin=0
From acquaintance to best friend forever: robust and fine-grained inference of social tie strengths
Social networks often provide only a binary perspective on social ties: two individuals are either connected or not. While sometimes external information can be used to infer the strength of social ties, access to such information may be restricted or impractical. Sintos and Tsaparas (KDD 2014) first suggested inferring the strength of social ties from the topology of the network alone, by leveraging the Strong Triadic Closure (STC) property. The STC property states that if person A has strong social ties with persons B and C, B and C must be connected to each other as well (whether with a weak or strong tie). Sintos and Tsaparas exploited this to formulate the inference of the strength of social ties as an NP-hard optimization problem, and proposed two approximation algorithms. We refine and improve upon this landmark paper by developing a sequence of linear relaxations of this problem that can be solved exactly in polynomial time. Usefully, these relaxations infer more fine-grained levels of tie strength (beyond strong and weak), which also makes it possible to avoid arbitrary strong/weak strength assignments when the network topology provides inconclusive evidence. One of the relaxations simultaneously infers the presence of a limited number of STC violations. An extensive theoretical analysis leads to two efficient algorithmic approaches. Finally, our experimental results elucidate the strengths of the proposed approach, and shed new light on the validity of the STC property in practice.
Labels: cs=1, phy=0, math=0, stat=0, q-bio=0, q-fin=0
Conditional bias robust estimation of the total of curve data by sampling in a finite population: an illustration on electricity load curves
For marketing or power grid management purposes, many studies based on the analysis of the total electricity consumption curves of groups of customers are now carried out by electricity companies. Aggregated total or mean load curves are estimated using individual curves measured on a fine time grid and collected according to some sampling design. Due to the skewness of the distribution of electricity consumptions, these samples often contain outlying curves which may have an important impact on the usual estimation procedures. We introduce several robust estimators of the total consumption curve which are not sensitive to such outlying curves. These estimators are based on the conditional bias approach and robust functional methods. We also derive mean square error estimators of these robust estimators, and finally we evaluate and compare the performance of the suggested estimators on Irish electricity data.
Labels: cs=0, phy=0, math=0, stat=1, q-bio=0, q-fin=0
Ulrich bundles on smooth projective varieties of minimal degree
We classify the Ulrich vector bundles of arbitrary rank on smooth projective varieties of minimal degree. In the process, we prove the stability of the sheaves of relative differentials on rational scrolls.
Labels: cs=0, phy=0, math=1, stat=0, q-bio=0, q-fin=0
$k$-shellable simplicial complexes and graphs
In this paper we show that a $k$-shellable simplicial complex is the expansion of a shellable complex. We prove that the face ring of a pure $k$-shellable simplicial complex satisfies the Stanley conjecture. In this way, by applying expansion functor to the face ring of a given pure shellable complex, we construct a large class of rings satisfying the Stanley conjecture. Also, by presenting some characterizations of $k$-shellable graphs, we extend some results due to Castrillón-Cruz, Cruz-Estrada and Van Tuyl-Villareal.
Labels: cs=0, phy=0, math=1, stat=0, q-bio=0, q-fin=0
The Effect of Phasor Measurement Units on the Accuracy of the Network Estimated Variables
The most commonly used weighted least squares state estimator in the power industry is nonlinear and formulated using conventional measurements such as line flow and injection measurements. PMUs (Phasor Measurement Units) are gradually being added to measurement sets to improve the state estimation process. In this paper, the way of incorporating PMU data into the conventional measurement set, as well as a linear formulation of the state estimation using only PMU measured data, are investigated. Six cases are tested while gradually increasing the number of PMUs added to the measurement set, and the effect of the PMUs on the accuracy of the estimated variables is illustrated and compared by applying them to the IEEE 14-bus and 30-bus test systems.
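For reference, with PMU measurements only, the model becomes linear, $z = Hx + e$, and the weighted least squares estimate has the closed form $\hat{x} = (H^T W H)^{-1} H^T W z$. A toy numerical sketch (the matrices below are illustrative, not a real network model):

```python
import numpy as np

H = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, -1.0]])                 # toy measurement matrix
W = np.diag([100.0, 100.0, 50.0])           # weights = inverse error variances
z = np.array([1.02, 0.98, 0.05])            # toy phasor measurements

x_hat = np.linalg.solve(H.T @ W @ H, H.T @ W @ z)   # linear WLS estimate
print(x_hat)
```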
Labels: cs=1, phy=0, math=1, stat=0, q-bio=0, q-fin=0
$ε$-Regularity and Structure of 4-dimensional Shrinking Ricci Solitons
A closed four dimensional manifold cannot possess a non-flat Ricci soliton metric with arbitrarily small $L^2$-norm of the curvature. In this paper, we localize this fact in the case of shrinking Ricci solitons by proving an $\varepsilon$-regularity theorem, thus confirming a conjecture of Cheeger-Tian. As applications, we will also derive structural results concerning the degeneration of the metrics on a family of complete non-compact four dimensional shrinking Ricci solitons without a uniform entropy lower bound. In the appendix, we provide a detailed account of the equivariant good chopping theorem when collapsing with locally bounded curvature happens.
Labels: cs=0, phy=0, math=1, stat=0, q-bio=0, q-fin=0
Cosmological model discrimination with Deep Learning
We demonstrate the potential of Deep Learning methods for measurements of cosmological parameters from density fields, focusing on the extraction of non-Gaussian information. We consider weak lensing mass maps as our dataset. We aim for our method to be able to distinguish between five models, which were chosen to lie along the $\sigma_8$ - $\Omega_m$ degeneracy, and have nearly the same two-point statistics. We design and implement a Deep Convolutional Neural Network (DCNN) which learns the relation between five cosmological models and the mass maps they generate. We develop a new training strategy which ensures the good performance of the network for high levels of noise. We compare the performance of this approach to commonly used non-Gaussian statistics, namely the skewness and kurtosis of the convergence maps. We find that our implementation of the DCNN outperforms the skewness and kurtosis statistics, especially for high noise levels. The network maintains a mean discrimination efficiency greater than $85\%$ even for noise levels corresponding to ground-based lensing observations, while the other statistics perform worse in this setting, achieving an efficiency of less than $70\%$. This demonstrates the ability of CNN-based methods to efficiently break the $\sigma_8$ - $\Omega_m$ degeneracy with weak lensing mass maps alone. We discuss the potential of this method to be applied to the analysis of real weak lensing data and other datasets.
Labels: cs=0, phy=1, math=0, stat=1, q-bio=0, q-fin=0
Deep Memory Networks for Attitude Identification
We consider the task of identifying attitudes towards a given set of entities from text. Conventionally, this task is decomposed into two separate subtasks: target detection that identifies whether each entity is mentioned in the text, either explicitly or implicitly, and polarity classification that classifies the exact sentiment towards an identified entity (the target) into positive, negative, or neutral. Instead, we show that attitude identification can be solved with an end-to-end machine learning architecture, in which the two subtasks are interleaved by a deep memory network. In this way, signals produced in target detection provide clues for polarity classification, and reversely, the predicted polarity provides feedback to the identification of targets. Moreover, the treatments for the set of targets also influence each other -- the learned representations may share the same semantics for some targets but vary for others. The proposed deep memory network, the AttNet, outperforms methods that do not consider the interactions between the subtasks or those among the targets, including conventional machine learning methods and the state-of-the-art deep learning models.
Labels: cs=1, phy=0, math=0, stat=0, q-bio=0, q-fin=0
Discrete flow posteriors for variational inference in discrete dynamical systems
Each training step for a variational autoencoder (VAE) requires us to sample from the approximate posterior, so we usually choose simple (e.g. factorised) approximate posteriors in which sampling is an efficient computation that fully exploits GPU parallelism. However, such simple approximate posteriors are often insufficient, as they eliminate statistical dependencies in the posterior. While it is possible to use normalizing flow approximate posteriors for continuous latents, some problems have discrete latents and strong statistical dependencies. The most natural approach to model these dependencies is an autoregressive distribution, but sampling from such distributions is inherently sequential and thus slow. We develop a fast, parallel sampling procedure for autoregressive distributions based on fixed-point iterations which enables efficient and accurate variational inference in discrete state-space latent variable dynamical systems. To optimize the variational bound, we considered two ways to evaluate probabilities: inserting the relaxed samples directly into the pmf for the discrete distribution, or converting to continuous logistic latent variables and interpreting the K-step fixed-point iterations as a normalizing flow. We found that converting to continuous latent variables gave considerable additional scope for mismatch between the true and approximate posteriors, which resulted in biased inferences; we thus used the former approach. Using our fast sampling procedure, we were able to realize the benefits of correlated posteriors, including accurate uncertainty estimates for one cell, and accurate connectivity estimates for multiple cells, in an order of magnitude less time.
Labels: cs=0, phy=0, math=0, stat=1, q-bio=1, q-fin=0
Audio Super Resolution using Neural Networks
We introduce a new audio processing technique that increases the sampling rate of signals such as speech or music using deep convolutional neural networks. Our model is trained on pairs of low and high-quality audio examples; at test-time, it predicts missing samples within a low-resolution signal in an interpolation process similar to image super-resolution. Our method is simple and does not involve specialized audio processing techniques; in our experiments, it outperforms baselines on standard speech and music benchmarks at upscaling ratios of 2x, 4x, and 6x. The method has practical applications in telephony, compression, and text-to-speech generation; it demonstrates the effectiveness of feed-forward convolutional architectures on an audio generation task.
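A minimal sketch of the feed-forward idea: 1-D convolutions whose output channels are interleaved into extra time samples, in the spirit of sub-pixel upsampling; the tiny architecture below is illustrative, not the paper's exact model.

```python
import torch
import torch.nn as nn

upscale = 4
model = nn.Sequential(
    nn.Conv1d(1, 32, kernel_size=9, padding=4), nn.ReLU(),
    nn.Conv1d(32, upscale, kernel_size=9, padding=4),  # channels become sub-samples
)

x = torch.randn(1, 1, 2000)                  # low-resolution audio batch
y = model(x)                                 # (1, upscale, 2000)
hi = y.permute(0, 2, 1).reshape(1, 1, -1)    # interleave -> (1, 1, 8000)
print(hi.shape)
```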
Labels: cs=1, phy=0, math=0, stat=0, q-bio=0, q-fin=0
Thermoelectric power factor enhancement by spin-polarized currents - a nanowire case study
Thermoelectric (TE) measurements have been performed on the workhorses of today's data storage devices, exhibiting either the giant or the anisotropic magnetoresistance effect (GMR and AMR). The temperature-dependent (50-300 K) and magnetic-field-dependent (up to 1 T) TE power factor (PF) has been determined for several Co-Ni alloy nanowires with varying Co:Ni ratios as well as for Co-Ni/Cu multilayered nanowires with various Cu layer thicknesses, which were all synthesized via a template-assisted electrodeposition process. A systematic investigation of the resistivity, as well as the Seebeck coefficient, is performed for Co-Ni alloy nanowires and Co-Ni/Cu multilayered nanowires. At room temperature, measured TE PFs of up to 3.6 mW K$^{-2}$ m$^{-1}$ for AMR samples and 2.0 mW K$^{-2}$ m$^{-1}$ for GMR nanowires are obtained. Furthermore, the TE PF is found to increase by up to 13.1% for AMR Co-Ni alloy nanowires and by up to 52% for GMR Co-Ni/Cu samples in an externally applied magnetic field. The magnetic nanowires exhibit TE PFs that are of the same order of magnitude as those of Bi-Sb-Se-Te based thermoelectric materials and, additionally, give the opportunity to adjust the TE power output to changing loads and hotspots through external magnetic fields.
Labels: cs=0, phy=1, math=0, stat=0, q-bio=0, q-fin=0
Risk-Sensitive Cooperative Games for Human-Machine Systems
Autonomous systems can substantially enhance a human's efficiency and effectiveness in complex environments. Machines, however, are often unable to observe the preferences of the humans that they serve. Despite the fact that the human's and machine's objectives are aligned, asymmetric information, along with heterogeneous sensitivities to risk by the human and machine, make their joint optimization process a game with strategic interactions. We propose a framework based on risk-sensitive dynamic games; the human seeks to optimize her risk-sensitive criterion according to her true preferences, while the machine seeks to adaptively learn the human's preferences and at the same time provide a good service to the human. We develop a class of performance measures for the proposed framework based on the concept of regret. We then evaluate their dependence on the risk-sensitivity and the degree of uncertainty. We present applications of our framework to self-driving taxis and robo-financial advising.
Labels: cs=1, phy=0, math=0, stat=1, q-bio=0, q-fin=0
A natural framework for isogeometric fluid-structure interaction based on BEM-shell coupling
The interaction between thin structures and incompressible Newtonian fluids is ubiquitous both in nature and in industrial applications. In this paper we present an isogeometric formulation of such problems which exploits a boundary integral formulation of the Stokes equations to model the surrounding flow, and a non-linear Kirchhoff-Love shell theory to model the elastic behaviour of the structure. We propose three different coupling strategies: a monolithic, fully implicit coupling; a staggered, elasticity-driven coupling; and a novel semi-implicit coupling, where the effect of the surrounding flow is incorporated in the non-linear terms of the solid solver through its damping characteristics. The novel semi-implicit approach is then used to demonstrate the power and robustness of our method, which fits ideally in the isogeometric paradigm, by exploiting only the boundary representation (B-Rep) of the thin structure's middle surface.
Labels: cs=0, phy=1, math=1, stat=0, q-bio=0, q-fin=0
Inertial Effects on the Stress Generation of Active Fluids
Suspensions of self-propelled bodies generate a unique mechanical stress owing to their motility that impacts their large-scale collective behavior. For microswimmers suspended in a fluid with negligible particle inertia, we have shown that the virial `swim stress' is a useful quantity to understand the rheology and nonequilibrium behaviors of active soft matter systems. For larger self-propelled organisms like fish, it is unclear how particle inertia impacts their stress generation and collective movement. Here, we analyze the effects of finite particle inertia on the mechanical pressure (or stress) generated by a suspension of self-propelled bodies. We find that swimmers of all scales generate a unique `swim stress' and `Reynolds stress' that impacts their collective motion. We discover that particle inertia plays a similar role as confinement in overdamped active Brownian systems, where the reduced run length of the swimmers decreases the swim stress and affects the phase behavior. Although the swim and Reynolds stresses vary individually with the magnitude of particle inertia, the sum of the two contributions is independent of particle inertia. This points to an important concept when computing stresses in computer simulations of nonequilibrium systems: the Reynolds and the virial stresses must both be calculated to obtain the overall stress generated by a system.
Labels: cs=0, phy=1, math=0, stat=0, q-bio=0, q-fin=0
On Gauge Invariance and Covariant Derivatives in Metric Spaces
In this manuscript, we will discuss the construction of the covariant derivative operator in quantum gravity. We will find that it is appropriate to use affine connections more general than metric-compatible connections in quantum gravity. We will demonstrate this using the canonical quantization procedure. This is valid irrespective of the presence and nature of sources. The standard Palatini formalism, where the metric and affine connections are the independent variables, is not sufficient to construct a source-free theory of gravity with affine connections more general than the metric-compatible Levi-Civita connections. This is also valid for minimally coupled interacting theories where sources couple only with the metric, using exclusively the metric-compatible Levi-Civita connections. We will discuss a potential formalism and possible extensions of the action to introduce nonmetricity in these cases. This is also required to construct a general interacting quantum theory with dynamical general affine connections. We will have to use a modified Ricci tensor to state Einstein's equation in the Palatini formalism. General affine connections can be described by a third-rank tensor with one contravariant and two covariant indices. The antisymmetric part of this tensor in the lower indices gives torsion with a half factor. In the Palatini formalism or its generalizations considered here, the symmetric part of this tensor in the lower indices is finite when torsion is finite. This part can give a massless scalar field in a potential formalism. We will have to extend the local conservation laws when we use general affine connections. General affine connections can become significant in solving cosmological problems.
Labels: cs=0, phy=1, math=0, stat=0, q-bio=0, q-fin=0
A Compressed Sensing Approach for Distribution Matching
In this work, we formulate fixed-length distribution matching as a Bayesian inference problem. Our proposed solution is inspired by the compressed sensing paradigm and sparse superposition (SS) codes. First, we introduce sparsity in the binary source via position modulation (PM). We then present a simple and exact matcher based on Gaussian signal quantization. At the receiver, the dematcher exploits the sparsity in the source and performs low-complexity dematching based on generalized approximate message-passing (GAMP). We show that the GAMP dematcher and spatial coupling lead to asymptotically optimal performance, in the sense that the rate tends to the entropy of the target distribution with vanishing reconstruction error in a proper limit. Furthermore, we assess the performance of the dematcher on practical Hadamard-based operators. A remarkable feature of our proposed solution is the possibility to: i) perform matching at the symbol level (nonbinary); ii) perform joint channel coding and matching.
Labels: cs=0, phy=0, math=0, stat=1, q-bio=0, q-fin=0
A simple descriptor and predictor for the stable structures of two-dimensional surface alloys
Predicting the ground state of alloy systems is challenging due to the large number of possible configurations. We identify an easily computed descriptor for the stability of binary surface alloys, the effective coordination number $\mathscr{E}$. We show that $\mathscr{E}(M)$ correlates well with the enthalpy of mixing, from density functional theory (DFT) calculations on $M_x$Au$_{1-x}$/Ru [$M$ = Mn or Fe]. At each $x$, the most favored structure has the highest [lowest] value of $\mathscr{E}(M)$ if the system is non-magnetic [ferromagnetic]. Importantly, little accuracy is lost upon replacing $\mathscr{E}(M)$ by $\mathscr{E}^*(M)$, which can be quickly computed without performing a DFT calculation, possibly offering a simple alternative to the frequently used cluster expansion method.
Labels: cs=0, phy=1, math=0, stat=0, q-bio=0, q-fin=0
Fractional integrals and Fourier transforms
This paper gives a short survey of some basic results related to estimates of fractional integrals and Fourier transforms. It is closely related to our previous survey papers \cite{K1998} and \cite{K2007}. The main methods used in the paper are based on nonincreasing rearrangements. We give alternative proofs of some results. We note also that the paper is based on the mini-course given by the author at Barcelona University in October 2014.
Labels: cs=0, phy=0, math=1, stat=0, q-bio=0, q-fin=0
Multi-Level Network Embedding with Boosted Low-Rank Matrix Approximation
As opposed to manual feature engineering which is tedious and difficult to scale, network representation learning has attracted a surge of research interests as it automates the process of feature learning on graphs. The learned low-dimensional node vector representation is generalizable and eases the knowledge discovery process on graphs by enabling various off-the-shelf machine learning tools to be directly applied. Recent research has shown that the past decade of network embedding approaches either explicitly factorize a carefully designed matrix to obtain the low-dimensional node vector representation or are closely related to implicit matrix factorization, with the fundamental assumption that the factorized node connectivity matrix is low-rank. Nonetheless, the global low-rank assumption does not necessarily hold especially when the factorized matrix encodes complex node interactions, and the resultant single low-rank embedding matrix is insufficient to capture all the observed connectivity patterns. In this regard, we propose a novel multi-level network embedding framework BoostNE, which can learn multiple network embedding representations of different granularity from coarse to fine without imposing the prevalent global low-rank assumption. The proposed BoostNE method is also in line with the successful gradient boosting method in ensemble learning as multiple weak embeddings lead to a stronger and more effective one. We assess the effectiveness of the proposed BoostNE framework by comparing it with existing state-of-the-art network embedding methods on various datasets, and the experimental results corroborate the superiority of the proposed BoostNE network embedding framework.
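A minimal sketch of the multi-level residual idea, with truncated SVD standing in for the paper's factorizer (BoostNE's actual connectivity matrix construction and factorization may differ):

```python
import numpy as np

def boosted_embeddings(M, rank, levels):
    """Factorize M in stages; each stage embeds the residual of the previous."""
    residual, parts = M.astype(float), []
    for _ in range(levels):
        U, s, Vt = np.linalg.svd(residual, full_matrices=False)
        U, s, Vt = U[:, :rank], s[:rank], Vt[:rank]
        parts.append(U * s)                    # this level's node embedding
        residual = residual - (U * s) @ Vt     # what this level failed to explain
    return np.hstack(parts)                    # coarse-to-fine concatenation

A = np.random.default_rng(1).random((50, 50)) # toy connectivity matrix
print(boosted_embeddings(A, rank=4, levels=3).shape)  # (50, 12)
```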
Labels: cs=1, phy=0, math=0, stat=1, q-bio=0, q-fin=0
Deviation from the dipole-ice model in the new spinel spin-ice candidate, MgEr$_2$Se$_4$
In spin ice research, small variations in structure or interactions drive a multitude of different behaviors, yet the collection of known materials relies heavily on the `227' pyrochlore structure. Here, we present thermodynamic, structural and inelastic neutron scattering data on a new spin-ice material, MgEr$_2$Se$_4$, which contributes to the relatively underexplored family of rare-earth spinel chalcogenides. X-ray and neutron diffraction confirm a normal spinel structure, and place Er$^{3+}$ moments on an ideal pyrochlore sublattice. Measurement of crystal electric field excitations with inelastic neutron scattering confirms that the moments have perfect Ising character, and further identifies the ground state Kramers doublet as having dipole-octupolar form with a significant multipolar character. Heat capacity and magnetic neutron diffuse scattering have ice-like features, but are inconsistent with Monte Carlo simulations of the nearest-neighbor and next-nearest-neighbor dipolar spin-ice (DSI) models. A significant remnant entropy is observed as T$\rightarrow$0 K, but again falls short of the full Pauling expectation for DSI, unless significant disorder is added. We show that these observations are fully in line with what was recently reported for CdEr$_2$Se$_4$, and point to the importance of quantum fluctuations in these materials.
Labels: cs=0, phy=1, math=0, stat=0, q-bio=0, q-fin=0
Generating Nontrivial Melodies for Music as a Service
We present a hybrid neural network and rule-based system that generates pop music. Music produced by pure rule-based systems often sounds mechanical. Music produced by machine learning sounds better, but still lacks hierarchical temporal structure. We restore temporal hierarchy by augmenting machine learning with a temporal production grammar, which generates the music's overall structure and chord progressions. A compatible melody is then generated by a conditional variational recurrent autoencoder. The autoencoder is trained with eight-measure segments from a corpus of 10,000 MIDI files, each of which has had its melody track and chord progressions identified heuristically. The autoencoder maps melody into a multi-dimensional feature space, conditioned by the underlying chord progression. A melody is then generated by feeding a random sample from that space to the autoencoder's decoder, along with the chord progression generated by the grammar. The autoencoder can make musically plausible variations on an existing melody, suitable for recurring motifs. It can also reharmonize a melody to a new chord progression, keeping the rhythm and contour. The generated music compares favorably with that generated by other academic and commercial software designed for the music-as-a-service industry.
Labels: cs=1, phy=0, math=0, stat=0, q-bio=0, q-fin=0
Vision and Challenges for Knowledge Centric Networking (KCN)
In the creation of a smart future information society, the Internet of Things (IoT) and Content Centric Networking (CCN) break two key barriers, for front-end sensing and back-end networking respectively. However, a piece is still missing from the research that dominates current network traffic control and system management: knowledge that penetrates both sensing and networking and glues them together holistically. In this paper, we envision leveraging emerging machine learning or deep learning techniques to create aspects of knowledge that facilitate such designs. In particular, we can extract knowledge from collected data to facilitate reduced data volume, enhanced system intelligence and interactivity, improved service quality, and communication with better controllability and lower cost. We name such a knowledge-oriented traffic control and network management paradigm Knowledge Centric Networking (KCN). This paper presents the KCN rationale, KCN benefits, related works and research opportunities.
Labels: cs=1, phy=0, math=0, stat=0, q-bio=0, q-fin=0
Extracting Geometry from Quantum Spacetime: Obstacles down the road
Any acceptable quantum gravity theory must allow us to recover the classical spacetime in the appropriate limit. Moreover, the spacetime geometrical notions should be intrinsically tied to the behavior of the matter that probes them. We consider some difficulties that would be confronted in attempting such an enterprise. The problems we uncover seem to go beyond the technical level to the point of questioning the overall feasibility of the project. The main issue is related to the fact that, in the quantum theory, it is impossible to assign a trajectory to a physical object, and, on the other hand, according to the basic tenets of the geometrization of gravity, it is precisely the trajectories of free localized objects that define the spacetime geometry. The insights gained in this analysis should be relevant to those interested in the quest for a quantum theory of gravity and might help refocus some of its goals.
Labels: cs=0, phy=1, math=0, stat=0, q-bio=0, q-fin=0
Data-Driven Estimation of Travel Latency Cost Functions via Inverse Optimization in Multi-Class Transportation Networks
We develop a method to estimate travel latency cost functions from data in multi-class transportation networks, which accommodate different types of vehicles with very different characteristics (e.g., cars and trucks). Leveraging our earlier work on inverse variational inequalities, we develop a data-driven approach to estimate the travel latency cost functions. Extensive numerical experiments using benchmark networks, ranging from moderate-sized to large-sized, demonstrate the effectiveness and efficiency of our approach.
Labels: cs=1, phy=0, math=1, stat=0, q-bio=0, q-fin=0
Autoencoder Based Sample Selection for Self-Taught Learning
Self-taught learning is a technique that uses a large number of unlabeled data as source samples to improve the task performance on target samples. Compared with other transfer learning techniques, self-taught learning can be applied to a broader set of scenarios due to the loose restrictions on source data. However, knowledge transferred from source samples that are not sufficiently related to the target domain may negatively influence the target learner, which is referred to as negative transfer. In this paper, we propose a metric for the relevance between a source sample and target samples. To be more specific, both source and target samples are reconstructed through a single-layer autoencoder with a linear relationship between source samples and target samples simultaneously enforced. An $l_{2,1}$-norm sparsity constraint is imposed on the transformation matrix to identify source samples relevant to the target domain. Source domain samples that are deemed relevant are assigned pseudo-labels reflecting their relevance to target domain samples, and are combined with target samples in order to provide an expanded training set for classifier training. Local data structures are also preserved during source sample selection through spectral graph analysis. Promising results in extensive experiments show the advantages of the proposed approach.
Labels: cs=0, phy=0, math=0, stat=1, q-bio=0, q-fin=0
Guiding Chemical Synthesis: Computational Prediction of the Regioselectivity of CH Functionalization
We will develop a computational method (RegioSQM) for predicting the regioselectivity of CH functionalization reactions that can be used, through a simple web interface (regiosqm.org), by synthetic chemists who are not experts in computational chemistry. CH functionalization, i.e. replacing the hydrogen atom in a CH bond with another atom or molecule, is arguably the single most promising technique for increasing the efficiency of chemical synthesis, but there are no generally applicable tools that predict which CH bond is most reactive. RegioSQM uses semiempirical quantum chemistry methods to predict the relative stabilities of reaction intermediates, which correlate with reaction rates, and our goal is to determine which quantum method and intermediate give the best result for each reaction type. Finally, we will experimentally demonstrate how RegioSQM can be used to make the chemical synthesis part of drug discovery more efficient.
Labels: cs=0, phy=1, math=0, stat=0, q-bio=0, q-fin=0
Potential-Function Proofs for First-Order Methods
This note discusses proofs for convergence of first-order methods based on simple potential-function arguments. We cover methods like gradient descent (for both smooth and non-smooth settings), mirror descent, and some accelerated variants.
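To give the flavor of such an argument, here is a standard potential for gradient descent $x_{t+1} = x_t - \frac{1}{L}\nabla f(x_t)$ on an $L$-smooth convex $f$; the constants are a common textbook choice and may differ from the note's exact presentation.

```latex
\Phi_t \;=\; t\,\bigl(f(x_t) - f(x^*)\bigr) \;+\; \tfrac{L}{2}\,\lVert x_t - x^*\rVert^2 .
% Smoothness and convexity imply \Phi_{t+1} \le \Phi_t, so telescoping gives
f(x_T) - f(x^*) \;\le\; \frac{\Phi_0}{T} \;=\; \frac{L\,\lVert x_0 - x^*\rVert^2}{2T}.
```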
Labels: cs=1, phy=0, math=0, stat=0, q-bio=0, q-fin=0
Multidimensional $p$-adic continued fraction algorithms
We give a new class of multidimensional $p$-adic continued fraction algorithms. We propose an algorithm in this class for which we can expect that a multidimensional $p$-adic version of Lagrange's theorem holds.
Labels: cs=0, phy=0, math=1, stat=0, q-bio=0, q-fin=0
Shutting down or powering up a (U)LIRG? Merger components in distinctly different evolutionary states in IRAS 19115-2124 (The Bird)
We present new SINFONI near-infrared integral field unit (IFU) spectroscopy and SALT optical long-slit spectroscopy characterising the history of a nearby merging luminous infrared galaxy, dubbed the Bird (IRAS 19115-2124). The NIR line-ratio maps of the IFU data-cubes and stellar population fitting of the SALT spectra now allow dating of the star formation (SF) over the triple system uncovered from our previous adaptive optics data. The distinct components separate very clearly in a line-ratio diagnostic diagram. An off-nuclear pure starburst dominates the current SF of the Bird with 60-70% of the total, with a 4-7 Myr age, and signs of a fairly constant long-term star formation of the underlying stellar population. The most massive nucleus, in contrast, is quenched with a starburst age of >40 Myr and shows hints of budding AGN activity. The secondary massive nucleus is at an intermediate stage. The two major components have a population of older stars, consistent with a starburst triggered 1 Gyr ago in a first encounter. The simplest explanation of the history is that of a triple merger, where the strongly star-forming component has joined later. We detect multiple gas flows in different phases. The Bird offers an opportunity to witness multiple stages of galaxy evolution in the same system: triggering as well as quenching of SF, and the early appearance of AGN activity. It also serves as a cautionary note on interpretations of observations with lower spatial resolution and/or without infrared data. At high redshift the system would look like a clumpy starburst with crucial pieces of its puzzle hidden, in danger of misinterpretations.
Labels: cs=0, phy=1, math=0, stat=0, q-bio=0, q-fin=0
Asymptotics to all orders of the Hurwitz zeta function
We present several formulae for the large-$t$ asymptotics of the modified Hurwitz zeta function $\zeta_1(x,s)$, $x>0$, $s=\sigma+it$, $0<\sigma\leq 1$, $t>0$, which are valid to all orders. In the case of $x=0$, these formulae reduce to the asymptotic expressions recently obtained for the Riemann zeta function, which include the classical results of Siegel as a particular case.
Labels: cs=0, phy=0, math=1, stat=0, q-bio=0, q-fin=0
Distributed Stochastic Approximation with Local Projections
We propose a distributed version of a stochastic approximation scheme constrained to remain in the intersection of a finite family of convex sets. The projection to the intersection of these sets is also computed in a distributed manner, and a `nonlinear gossip' mechanism is employed to blend the projection iterations with the stochastic approximation using multiple time scales.
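A toy sketch of the three ingredients, a noisy stochastic-approximation step, gossip averaging with neighbors, and a local projection; the gossip matrix, step sizes, and pseudo-gradient below are our illustrative choices, and the actual scheme runs the projection iterations on a separate time scale.

```python
import numpy as np

rng = np.random.default_rng(0)
W = np.array([[0.50, 0.25, 0.25],   # doubly stochastic gossip matrix
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])
x = rng.normal(size=(3, 2))         # one iterate per agent

def project_unit_ball(v):           # each agent's local convex set (toy)
    n = np.linalg.norm(v)
    return v if n <= 1.0 else v / n

target = np.array([1.0, 0.0])
for t in range(1, 2001):
    g = x - target + 0.1 * rng.normal(size=x.shape)   # noisy pseudo-gradient
    x = W @ (x - (1.0 / t) * g)                       # SA step blended by gossip
    x = np.apply_along_axis(project_unit_ball, 1, x)  # local projection
print(x)   # agents roughly agree near the constrained optimum (1, 0)
```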
Labels: cs=1, phy=0, math=0, stat=0, q-bio=0, q-fin=0
Expected Policy Gradients
We propose expected policy gradients (EPG), which unify stochastic policy gradients (SPG) and deterministic policy gradients (DPG) for reinforcement learning. Inspired by expected sarsa, EPG integrates across the action when estimating the gradient, instead of relying only on the action in the sampled trajectory. We establish a new general policy gradient theorem, of which the stochastic and deterministic policy gradient theorems are special cases. We also prove that EPG reduces the variance of the gradient estimates without requiring deterministic policies and, for the Gaussian case, with no computational overhead. Finally, we show that it is optimal in a certain sense to explore with a Gaussian policy such that the covariance is proportional to the exponential of the scaled Hessian of the critic with respect to the actions. We present empirical results confirming that this new form of exploration substantially outperforms DPG with the Ornstein-Uhlenbeck heuristic in four challenging MuJoCo domains.
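A one-dimensional numerical sketch of the EPG idea: integrate the Q-weighted score over the action distribution rather than evaluating it at one sampled action; the quadratic critic is a toy stand-in.

```python
import numpy as np

mu, sigma = 0.3, 0.5
Q = lambda a: -(a - 1.0) ** 2                     # toy critic

a = np.linspace(mu - 6 * sigma, mu + 6 * sigma, 2001)
pdf = np.exp(-0.5 * ((a - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
score = (a - mu) / sigma ** 2                     # d/dmu of log pi(a | mu, sigma)

# EPG estimate of the policy gradient w.r.t. mu: E_a[score * Q(a)], by quadrature.
grad_mu = (pdf * score * Q(a)).sum() * (a[1] - a[0])
print(grad_mu)   # analytic value is -2 * (mu - 1) = 1.4; a single-sample SPG
                 # estimate of the same quantity has much higher variance
```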
Labels: cs=1, phy=0, math=0, stat=1, q-bio=0, q-fin=0
A new Hysteretic Nonlinear Energy Sink (HNES)
The behavior of a new Hysteretic Nonlinear Energy Sink (HNES) coupled to a linear primary oscillator is investigated in shock mitigation. Apart from a small mass and the nonlinear elastic spring of the Duffing oscillator, the HNES also comprises a purely hysteretic spring and a linear elastic spring of potentially negative stiffness, connected in parallel. The Bouc-Wen model is used to describe the force produced by both the purely hysteretic and the linear elastic springs. Coupling the primary oscillator with the HNES yields three nonlinear equations of motion, in terms of the two displacements and the dimensionless hysteretic variable, which are integrated numerically using the analog equation method. The performance of the HNES is examined by quantifying the percentage of the initially induced energy in the primary system that is passively transferred and dissipated by the HNES. Remarkable results are achieved for a wide range of initial input energies. The great performance of the HNES is most evident when the linear spring stiffness takes on negative values.
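For concreteness, a common form of the Bouc-Wen law coupled to a single-degree-of-freedom oscillator can be integrated as below; the parameters are illustrative only, and the paper's HNES additionally involves the Duffing spring, the negative-stiffness element and the coupled primary mass.

```python
import numpy as np
from scipy.integrate import solve_ivp

m, c, k, a = 1.0, 0.05, 1.0, 0.5        # mass, damping, stiffness, post-yield ratio
A, beta, gamma, n = 1.0, 0.5, 0.5, 1.0  # Bouc-Wen parameters

def rhs(t, y):
    u, v, z = y                          # displacement, velocity, hysteretic variable
    dz = A * v - beta * abs(v) * abs(z) ** (n - 1) * z - gamma * v * abs(z) ** n
    dv = (-c * v - a * k * u - (1 - a) * k * z) / m   # free vibration after a shock
    return [v, dv, dz]

sol = solve_ivp(rhs, (0.0, 50.0), [0.0, 1.0, 0.0], max_step=0.01)  # velocity kick
print(sol.y[0, -1])                      # residual displacement at t = 50
```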
Labels: cs=0, phy=1, math=0, stat=0, q-bio=0, q-fin=0
Ultra-Low Noise Amplifier Design for Magnetic Resonance Imaging systems
This paper demonstrates the design and development of an ultra-low-noise amplifier (LNA) that should potentially increase the sensitivity of existing Magnetic Resonance Imaging (MRI) systems. The LNA design is fabricated and characterized, including the matching and input high-power protection circuits. The expected improvement in SNR of the LNA in comparison to room-temperature operation is estimated. The cascode amplifier topology is chosen for the high-performance low-noise amplifier design and for fabrication. The fabricated PCB layout of the cascode LNA is tested using a spectrum analyzer and a vector network analyzer. At room temperature, the fabricated cascode LNA has the following performance: the operating frequency is 32 MHz, the noise figure is 0.45 dB at a source impedance of 50 $\Omega$, the gain is 11.6 dB, the output return loss is 21.1 dB, the input return loss is 0.12 dB, and the amplifier is unconditionally stable up to 6 GHz. The goal of the research is achieved in that the cascode LNA improves the SNR.
Labels: cs=0, phy=1, math=0, stat=0, q-bio=0, q-fin=0
Virtual Astronaut for Scientific Visualization - A Prototype for Santa Maria Crater on Mars
To support scientific visualization of multiple-mission data from Mars, the Virtual Astronaut (VA) creates an interactive virtual 3D environment built on the Unity3D Game Engine. A prototype study was conducted based on orbital and Opportunity Rover data covering Santa Maria Crater in Meridiani Planum on Mars. The VA at Santa Maria provides dynamic visual representations of the imaging, compositional, and mineralogical information. The VA lets one navigate through the scene and provides geomorphic and geologic contexts for the rover operations. User interactions include visualization of in-situ observations, feature measurement, and animation control of rover drives. This paper covers our approach to and implementation of the VA system. A brief summary of the prototype system functions and user feedback is also covered. Based on external review and comments by the science community, the prototype at Santa Maria has proven the VA to be an effective tool for virtual geovisual analysis.
Labels: cs=1, phy=1, math=0, stat=0, q-bio=0, q-fin=0
Self-Supervised Generalisation with Meta Auxiliary Learning
Learning with auxiliary tasks has been shown to improve the generalisation of a primary task. However, this comes at the cost of manually-labelling additional tasks which may, or may not, be useful for the primary task. We propose a new method which automatically learns labels for an auxiliary task, such that any supervised learning task can be improved without requiring access to additional data. The approach is to train two neural networks: a label-generation network to predict the auxiliary labels, and a multi-task network to train the primary task alongside the auxiliary task. The loss for the label-generation network incorporates the multi-task network's performance, and so this interaction between the two networks can be seen as a form of meta learning. We show that our proposed method, Meta AuXiliary Learning (MAXL), outperforms single-task learning on 7 image datasets by a significant margin, without requiring additional auxiliary labels. We also show that MAXL outperforms several other baselines for generating auxiliary labels, and is even competitive when compared with human-defined auxiliary labels. The self-supervised nature of our method leads to a promising new direction towards automated generalisation. The source code is available at \url{this https URL}.
Labels: cs=1, phy=0, math=0, stat=1, q-bio=0, q-fin=0
Measuring High-Energy Spectra with HAWC
The High-Altitude Water-Cherenkov (HAWC) experiment is a TeV $\gamma$-ray observatory located 4100 m above sea level on the Sierra Negra mountain in Puebla, Mexico. The detector consists of 300 water-filled tanks, each instrumented with 4 photomultiplier tubes that utilize the water-Cherenkov technique to detect atmospheric air showers produced by cosmic $\gamma$ rays. Construction of HAWC was completed in March of 2015. The experiment's wide instantaneous field of view (2 sr) and high duty cycle (>95%) make it a powerful survey instrument sensitive to pulsars, supernova remnants, and other $\gamma$-ray sources. The mechanisms of particle acceleration at these sources can be studied by analyzing their high-energy spectra. To this end, we have developed an event-by-event energy-reconstruction algorithm using an artificial neural network to estimate energies of primary $\gamma$ rays at HAWC. We will present the details of this technique and its performance as well as the current progress toward using it to measure energy spectra of $\gamma$-ray sources.
Labels: cs=0, phy=1, math=0, stat=0, q-bio=0, q-fin=0
A Study on Arbitrarily Varying Channels with Causal Side Information at the Encoder
In this work, we study two models of arbitrarily varying channels with side information available at the encoder in a causal manner. First, we study the arbitrarily varying channel (AVC) with input and state constraints, where the encoder has state information in a causal manner. Lower and upper bounds on the random code capacity are developed. A lower bound on the deterministic code capacity is established in the case of a message-averaged input constraint. In the setting where a state constraint is imposed on the jammer, while the user is under no constraints, the random code bounds coincide, and the random code capacity is determined. Furthermore, for this scenario, a generalized non-symmetrizability condition is stated, under which the deterministic code capacity coincides with the random code capacity. A second model considered in our work is the arbitrarily varying degraded broadcast channel with causal side information at the encoder (without constraints). We establish inner and outer bounds on both the random code capacity region and the deterministic code capacity region. The capacity region is then determined for a class of channels satisfying a condition on the mutual informations between the strategy variables and the channel outputs. As an example, we show that the condition holds for the arbitrarily varying binary symmetric broadcast channel, and we find the corresponding capacity region.
1
0
1
0
0
0
On the Three Properties of Stationary Populations and knotting with Non-Stationary Populations
We propose three properties that are related to the stationary population identity (SPI) of population biology by connecting it with stationary populations and non-stationary populations that are approaching stationarity. These properties provide deeper insights into cohort formation in real-world populations and the duration for which stationary and non-stationary conditions hold. The new concepts are based on the time gap between the occurrence of stationary and non-stationary populations within the SPI framework, which we refer to as Oscillatory SPI and the Amplitude of SPI.
0
0
0
0
1
0
Generating and designing DNA with deep generative models
We propose generative neural network methods to generate DNA sequences and tune them to have desired properties. We present three approaches: creating synthetic DNA sequences using a generative adversarial network; a DNA-based variant of the activation maximization ("deep dream") design method; and a joint procedure which combines these two approaches together. We show that these tools capture important structures of the data and, when applied to designing probes for protein binding microarrays, allow us to generate new sequences whose properties are estimated to be superior to those found in the training data. We believe that these results open the door for applying deep generative models to advance genomics research.
1
0
0
1
0
0
Radon background in liquid xenon detectors
The radioactive daughter isotopes of 222Rn are among the highest-risk contaminants in liquid xenon detectors aiming for a small signal rate. The noble gas is permanently emanated from the detector surfaces and mixes with the xenon target. Because of its long half-life, 222Rn is homogeneously distributed in the target and its subsequent decays can mimic signal events. Since no shielding is possible, this background source can be the dominant one in future large scale experiments. This article provides an overview of strategies used to mitigate this source of background by means of material selection and on-line radon removal techniques.
0
1
0
0
0
0
Minimax Regret Bounds for Reinforcement Learning
We consider the problem of provably optimal exploration in reinforcement learning for finite horizon MDPs. We show that an optimistic modification to value iteration achieves a regret bound of $\tilde{O}( \sqrt{HSAT} + H^2S^2A+H\sqrt{T})$ where $H$ is the time horizon, $S$ the number of states, $A$ the number of actions and $T$ the number of time-steps. This result improves over the best previously known bound $\tilde{O}(HS \sqrt{AT})$ achieved by the UCRL2 algorithm of Jaksch et al., 2010. The key significance of our new results is that when $T\geq H^3S^3A$ and $SA\geq H$, it leads to a regret of $\tilde{O}(\sqrt{HSAT})$ that matches the established lower bound of $\Omega(\sqrt{HSAT})$ up to a logarithmic factor. Our analysis contains two key insights. We use careful application of concentration inequalities to the optimal value function as a whole, rather than to the transition probabilities (to improve scaling in $S$), and we define Bernstein-based "exploration bonuses" that use the empirical variance of the estimated values at the next states (to improve scaling in $H$).
1
0
0
1
0
0
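The key algorithmic ingredient named in the abstract above is a variance-aware exploration bonus inside value iteration. The sketch below shows that shape on a tabular finite-horizon MDP; the constants, the log factor, and the clipping at H are illustrative assumptions, not the paper's exact procedure.

# Optimistic value iteration with an empirical-Bernstein-style bonus
# (illustrative constants; not the paper's exact algorithm).
import numpy as np

def optimistic_vi(P_hat, R_hat, N, H, delta=0.05):
    """P_hat: empirical transitions, shape (S, A, S); R_hat: rewards (S, A);
    N: visit counts (S, A). Returns optimistic values V[h] for h = 0..H."""
    S, A = R_hat.shape
    V = np.zeros((H + 1, S))
    L = np.log(S * A * H / delta)
    n = np.maximum(N, 1)
    for h in reversed(range(H)):
        EV = P_hat @ V[h + 1]                      # (S, A) expected next value
        var = P_hat @ (V[h + 1] ** 2) - EV ** 2    # variance of the next value
        bonus = np.sqrt(2 * var * L / n) + 7 * H * L / (3 * n)
        V[h] = np.minimum(R_hat + EV + bonus, H).max(axis=1)
    return V

Applying the concentration argument to the value of the next state, rather than to each transition probability, is what removes the extra $\sqrt{S}$ factor the abstract mentions.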
Asymptotic Theory for the Maximum of an Increasing Sequence of Parametric Functions
\cite{HillMotegi2017} present a new general asymptotic theory for the maximum of a random array $\{\mathcal{X}_{n}(i)$ $:$ $1$ $\leq $ $i$ $\leq $ $\mathcal{L}\}_{n\geq 1}$, where each $\mathcal{X}_{n}(i)$ is assumed to converge in probability as $n$ $\rightarrow $ $\infty $. The array dimension $\mathcal{L}$ is allowed to increase with the sample size $n$. Existing extreme value theory arguments focus on observed data $\mathcal{X}_{n}(i)$, and require a well-defined limit law for $\max_{1\leq i\leq \mathcal{L}}|\mathcal{X}_{n}(i)|$ by restricting dependence across $i$. The high dimensional central limit theory literature presumes approximability by a Gaussian law, and also restricts attention to observed data. \cite{HillMotegi2017} do not require $\max_{1\leq i\leq \mathcal{L}_{n}}|\mathcal{X}_{n}(i)|$ to have a well-defined limit nor to be approximable by a Gaussian random variable, and make no assumptions about dependence across $i$. We apply the theory to filtered data when the variable of interest $\mathcal{X}_{n}(i,\theta _{0})$ is not observed, but its sample counterpart $\mathcal{X}_{n}(i,\hat{\theta}_{n})$ is observed where $\hat{\theta}_{n}$ estimates $\theta _{0}$. The main results are illustrated by looking at unit root tests for a high dimensional random variable, and a residuals white noise test.
0
0
1
1
0
0
Resilient Active Information Gathering with Mobile Robots
Applications of safety, security, and rescue in robotics, such as multi-robot target tracking, involve the execution of information acquisition tasks by teams of mobile robots. However, in failure-prone or adversarial environments, robots get attacked, their communication channels get jammed, and their sensors may fail, resulting in the withdrawal of robots from the collective task, and consequently the inability of the remaining active robots to coordinate with each other. As a result, traditional design paradigms become insufficient and, in contrast, resilient designs against system-wide failures and attacks become important. In general, resilient design problems are hard, and even though they often involve objective functions that are monotone or submodular, scalable approximation algorithms for their solution have been hitherto unknown. In this paper, we provide the first algorithm, enabling the following capabilities: minimal communication, i.e., the algorithm is executed by the robots based only on minimal communication between them; system-wide resiliency, i.e., the algorithm is valid for any number of denial-of-service attacks and failures; and provable approximation performance, i.e., the algorithm ensures for all monotone (and not necessarily submodular) objective functions a solution that is finitely close to the optimal. We quantify our algorithm's approximation performance using a notion of curvature for monotone set functions. We support our theoretical analyses with simulated and real-world experiments, by considering an active information gathering scenario, namely, multi-robot target tracking.
1
0
0
1
0
0
Optical properties of a four-layer waveguiding nanocomposite structure in near-IR regime
The theoretical study of the optical properties of TE and TM modes in a four-layer structure composed of the magneto-optical yttrium iron garnet guiding layer on a dielectric substrate covered by a planar nanocomposite guiding multilayer is presented. The dispersion equation is obtained taking into account the bigyrotropic properties of yttrium iron garnet, and an original algorithm for guided-mode identification is proposed. The dispersion spectra are analyzed and the energy flux distributions across the structure are calculated. A fourfold difference between the partial power fluxes within the waveguide layers is achieved in the wavelength range of 200 nm.
0
1
0
0
0
0
High Dimensional Structured Superposition Models
High dimensional superposition models characterize observations using parameters which can be written as a sum of multiple component parameters, each with its own structure, e.g., sum of low rank and sparse matrices, sum of sparse and rotated sparse vectors, etc. In this paper, we consider general superposition models which allow sum of any number of component parameters, and each component structure can be characterized by any norm. We present a simple estimator for such models, give a geometric condition under which the components can be accurately estimated, characterize sample complexity of the estimator, and give high probability non-asymptotic bounds on the componentwise estimation error. We use tools from empirical processes and generic chaining for the statistical analysis, and our results, which substantially generalize prior work on superposition models, are in terms of Gaussian widths of suitable sets.
1
0
0
1
0
0
Source Forager: A Search Engine for Similar Source Code
Developers spend a significant amount of time searching for code: e.g., to understand how to complete, correct, or adapt their own code for a new context. Unfortunately, the state of the art in code search has not evolved much beyond text search over tokenized source. Code has much richer structure and semantics than normal text, and this property can be exploited to specialize the code-search process for better querying, searching, and ranking of code-search results. We present a new code-search engine named Source Forager. Given a query in the form of a C/C++ function, Source Forager searches a pre-populated code database for similar C/C++ functions. Source Forager preprocesses the database to extract a variety of simple code features that capture different aspects of code. A search returns the $k$ functions in the database that are most similar to the query, based on the various extracted code features. We tested the usefulness of Source Forager using a variety of code-search queries from two domains. Our experiments show that the ranked results returned by Source Forager are accurate, and that query-relevant functions can be reliably retrieved even when searching through a large code database that contains very few query-relevant functions. We believe that Source Forager is a first step towards much-needed tools that provide a better code-search experience.
1
0
0
0
0
0
Crossmatching variable objects with the Gaia data
Tens of millions of new variable objects are expected to be identified in over a billion time series from the Gaia mission. Crossmatching known variable sources with those from Gaia is crucial to incorporate current knowledge, understand how these objects appear in the Gaia data, train supervised classifiers to recognise known classes, and validate the results of the Variability Processing and Analysis Coordination Unit (CU7) within the Gaia Data Analysis and Processing Consortium (DPAC). The method employed by CU7 to crossmatch variables for the first Gaia data release includes a binary classifier to take into account positional uncertainties, proper motion, targeted variability signals, and artefacts present in the early calibration of the Gaia data. Crossmatching with a classifier makes it possible to automate all those decisions which are typically made during visual inspection. The classifier can be trained with objects characterized by a variety of attributes to ensure similarity in multiple dimensions (astrometry, photometry, time-series features), with no need for a priori transformations to compare different photometric bands, or for predictive models of the motion of objects to compare positions. Other advantages as well as some disadvantages of the method are discussed. Implementation steps from the training to the assessment of the crossmatch classifier and selection of results are described.
0
1
0
0
0
0
A New Test of Multivariate Nonlinear Causality
The multivariate nonlinear Granger causality developed by Bai et al. (2010) plays an important role in detecting the dynamic interrelationships between two groups of variables. Following the idea of the Hiemstra-Jones (HJ) test proposed by Hiemstra and Jones (1994), they attempt to establish a central limit theorem (CLT) of their test statistic by applying the asymptotic property of the multivariate $U$-statistic. However, Bai et al. (2016) revisit the HJ test and find that the test statistic given by HJ is NOT a function of $U$-statistics, which implies that neither the CLT proposed by Hiemstra and Jones (1994) nor the one extended by Bai et al. (2010) is valid for statistical inference. In this paper, we re-estimate the probabilities and reestablish the CLT of the new test statistic. Numerical simulation shows that our new estimates are consistent and our new test exhibits decent size and power.
0
0
0
1
0
0
Nonlinear dynamics of polar regions in paraelectric phase of (Ba1-x,Srx)TiO3 ceramics
The dynamic dielectric nonlinearity of barium strontium titanate (Ba1-x,Srx)TiO3 ceramics is investigated in their paraelectric phase. With the goal of contributing to the identification of the mechanisms that govern the dielectric nonlinearity in this family, we analyze the amplitude and the phase angles of the first and the third harmonics of polarization. Our study shows that an interpretation of the field-dependent polarization in paraelectric (Ba1-x,Srx)TiO3 ceramics in terms of Rayleigh-type dynamics is inadequate for our samples and that their nonlinear response rather resembles that observed in the canonical relaxor Pb(Mg1/3Nb2/3)O3.
0
1
0
0
0
0
Nonlinear Modal Decoupling Based Power System Transient Stability Analysis
Nonlinear modal decoupling (NMD) was recently proposed to nonlinearly transform a multi-oscillator system into a number of decoupled oscillators which together behave the same as the original system in an extended neighborhood of the equilibrium. Each oscillator has just one degree of freedom and hence can easily be analyzed to infer the stability of the original system associated with one electromechanical mode. As a first attempt at applying the NMD methodology to realistic power system models, this paper proposes an NMD-based transient stability analysis approach. For a multi-machine power system, the approach first derives decoupled nonlinear oscillators by a coordinates transformation, and then applies Lyapunov stability analysis to the oscillators to assess the stability of the original system. Nonlinear modal interaction is also considered. The approach can be efficiently applied to a large-scale power grid by conducting NMD regarding only selected modes. Case studies on a 3-machine 9-bus system and an NPCC 48-machine 140-bus system show the potential of the approach in transient stability analysis for multi-machine systems.
1
0
0
0
0
0
KELT-18b: Puffy Planet, Hot Host, Probably Perturbed
We report the discovery of KELT-18b, a transiting hot Jupiter in a 2.87d orbit around the bright (V=10.1), hot, F4V star BD+60 1538 (TYC 3865-1173-1). We present follow-up photometry, spectroscopy, and adaptive optics imaging that allow a detailed characterization of the system. Our preferred model fits yield a host stellar temperature of 6670+/-120 K and a mass of 1.524+/-0.069 Msun, situating it as one of only a handful of known transiting planets with hosts that are as hot, massive, and bright. The planet has a mass of 1.18+/-0.11 Mjup, a radius of 1.57+/-0.04 Rjup, and a density of 0.377+/-0.040 g/cm^3, making it one of the most inflated planets known around a hot star. We argue that KELT-18b's high temperature and low surface gravity, which yield an estimated ~600 km atmospheric scale height, combined with its hot, bright host make it an excellent candidate for observations aimed at atmospheric characterization. We also present evidence for a bound stellar companion at a projected separation of ~1100 AU, and speculate that it may have contributed to the strong misalignment we suspect between KELT-18's spin axis and its planet's orbital axis. The inferior conjunction time is 2457542.524998 +/-0.000416 (BJD_TDB) and the orbital period is 2.8717510 +/- 0.0000029 days. We encourage Rossiter-McLaughlin measurements in the near future to confirm the suspected spin-orbit misalignment of this system.
0
1
0
0
0
0
BAMBI: An R package for Fitting Bivariate Angular Mixture Models
Statistical analyses of directional or angular data have applications in a variety of fields, such as geology, meteorology and bioinformatics. There is substantial literature on descriptive and inferential techniques for univariate angular data, with the bivariate (or more generally, multivariate) cases receiving more attention in recent years. However, there is a lack of software implementing inferential techniques in practice, especially in the bivariate situation. In this article, we introduce BAMBI, an R package for analyzing bivariate (and univariate) angular data. We implement random generation, density evaluation, and computation of summary measures for three bivariate (viz., bivariate wrapped normal, von Mises sine and von Mises cosine) and two univariate (viz., univariate wrapped normal and von Mises) angular distributions. The major contribution of BAMBI to statistical computing is in providing Bayesian methods for modeling angular data using finite mixtures of these distributions. We also provide functions for visual and numerical diagnostics and Bayesian inference for the fitted models. In this article, we first provide a brief review of the distributions and techniques used in BAMBI, then describe the capabilities of the package, and finally conclude with demonstrations of mixture model fitting using BAMBI on the two real datasets included in the package, one univariate and one bivariate.
0
0
0
1
0
0
Finite Sample Analysis of Two-Timescale Stochastic Approximation with Applications to Reinforcement Learning
Two-timescale Stochastic Approximation (SA) algorithms are widely used in Reinforcement Learning (RL). Their iterates have two parts that are updated using distinct stepsizes. In this work, we develop a novel recipe for their finite sample analysis. Using this, we provide a concentration bound, which is the first such result for a two-timescale SA. The type of bound we obtain is known as `lock-in probability'. We also introduce a new projection scheme, in which the time between successive projections increases exponentially. This scheme allows one to elegantly transform a lock-in probability into a convergence rate result for projected two-timescale SA. From this latter result, we then extract key insights on stepsize selection. As an application, we finally obtain convergence rates for the projected two-timescale RL algorithms GTD(0), GTD2, and TDC.
1
0
0
0
0
0
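As a concrete instance of the two-timescale iterates discussed in the abstract above, here is a minimal TDC update with linear features, in which the correction weights w move on the faster timescale. The stepsize exponents are illustrative assumptions rather than the schedule analyzed in the paper, and the projection scheme is not shown.

# Minimal TDC with linear function approximation: theta (slow) estimates
# the value function, w (fast) estimates a correction term.
import numpy as np

def tdc(trajectory, num_features, gamma=0.99, a=0.5, b=1.0):
    """trajectory: iterable of (phi, reward, phi_next) feature transitions."""
    theta = np.zeros(num_features)
    w = np.zeros(num_features)
    for t, (phi, r, phi_next) in enumerate(trajectory, start=1):
        alpha = a / t              # slow stepsize for theta
        beta = b / t ** 0.6        # faster stepsize for w (beta/alpha -> inf)
        delta = r + gamma * phi_next @ theta - phi @ theta
        theta = theta + alpha * (delta * phi - gamma * (phi @ w) * phi_next)
        w = w + beta * (delta - phi @ w) * phi
    return theta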
Existence and uniqueness of solutions to Y-systems and TBA equations
We consider Y-system functional equations of the form $$ Y_n(x+i)Y_n(x-i)=\prod_{m=1}^N (1+Y_m(x))^{G_{nm}}$$ and the corresponding nonlinear integral equations of the Thermodynamic Bethe Ansatz. We prove an existence and uniqueness result for solutions of these equations, subject to appropriate conditions on the analytical properties of the $Y_n$, in particular the absence of zeros in a strip around the real axis. The matrix $G_{nm}$ must have non-negative real entries, and be irreducible and diagonalisable over $\mathbb{R}$ with spectral radius less than 2. This includes the adjacency matrices of finite Dynkin diagrams, but covers much more as we do not require $G_{nm}$ to be integers. Our results specialise to the constant Y-system, proving existence and uniqueness of a strictly positive solution in that case.
0
1
0
0
0
0
Normalization of Neural Networks using Analytic Variance Propagation
We address the problem of estimating statistics of hidden units in a neural network using a method of analytic moment propagation. These statistics are useful for approximate whitening of the inputs in front of saturating non-linearities such as a sigmoid function. This is important for initialization of training and for reducing the accumulated scale and bias dependencies (compensating covariate shift), which presumably eases the learning. In batch normalization, which is currently a very widely applied technique, sample estimates of statistics of hidden units over a batch are used. The proposed estimation uses an analytic propagation of mean and variance of the training set through the network. The result depends on the network structure and its current weights but not on the specific batch input. The estimates are suitable for initialization and normalization, efficient to compute and independent of the batch size. Experimental verification supports these claims well. However, the method does not share the generalization properties of BN, to which our experiments give some additional insight.
0
0
0
1
0
0
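To make the moment-propagation idea above concrete, the sketch below pushes a mean and variance through one affine layer and one nonlinearity under the usual independent-Gaussian assumption. Closed forms are shown for ReLU, where they are exact under that assumption; the paper targets saturating nonlinearities such as the sigmoid, for which analogous approximations are needed, so treat this as an illustration of the mechanism rather than the paper's exact rules.

# Analytic propagation of mean and variance through a layer, assuming
# independent Gaussian pre-activations (illustration only).
import numpy as np
from scipy.stats import norm

def propagate_affine(mu, var, W, b):
    # E[Wx + b] = W mu + b;  Var[Wx + b] = (W**2) var under independence.
    return W @ mu + b, (W ** 2) @ var

def propagate_relu(mu, var):
    # Exact first two moments of relu(x) for x ~ N(mu, var), var > 0.
    sigma = np.sqrt(var)
    z = mu / sigma
    m = mu * norm.cdf(z) + sigma * norm.pdf(z)
    s2 = (mu ** 2 + var) * norm.cdf(z) + mu * sigma * norm.pdf(z) - m ** 2
    return m, s2

Because the result depends only on the weights and the training-set input statistics, the same numbers can serve for initialization and normalization at any batch size, which is the property the abstract emphasizes.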
Ferrimagnetism in the Spin-1/2 Heisenberg Antiferromagnet on a Distorted Triangular Lattice
The ground state of the spin-$1/2$ Heisenberg antiferromagnet on a distorted triangular lattice is studied using a numerical-diagonalization method. The network of interactions is the $\sqrt{3}\times\sqrt{3}$ type; the interactions are continuously controlled between the undistorted triangular lattice and the dice lattice. We find new states between the nonmagnetic 120-degree-structured state of the undistorted triangular case and the up-up-down state of the dice case. The intermediate states show spontaneous magnetizations that are smaller than one third of the saturated magnetization corresponding to the up-up-down state.
0
1
0
0
0
0
Delta sets for symmetric numerical semigroups with embedding dimension three
This work extends the results known for the Delta sets of non-symmetric numerical semigroups with embedding dimension three to the symmetric case. Thus, we have a fast algorithm to compute the Delta set of any embedding dimension three numerical semigroup. Also, as a consequence of these results, the sets that can be realized as Delta sets of numerical semigroups of embedding dimension three are fully characterized.
0
0
1
0
0
0
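For readers unfamiliar with Delta sets: for an element $n$ of the semigroup, the Delta set of $n$ collects the gaps between consecutive factorization lengths, and the Delta set of the semigroup is the union over all $n$. A naive Python check for embedding dimension three follows; it is only a brute-force companion for verifying a fast algorithm on small inputs, and the cutoff up_to is a heuristic assumption (the union is known to stabilize, but the stabilization point is not computed here).

# Brute-force Delta set of a numerical semigroup <a, b, c> (naive check).
def length_set(n, gens):
    a, b, c = gens
    lengths = set()
    for x in range(n // a + 1):
        for y in range((n - a * x) // b + 1):
            rem = n - a * x - b * y
            if rem % c == 0:
                lengths.add(x + y + rem // c)
    return sorted(lengths)

def delta_set(gens, up_to=500):
    deltas = set()
    for n in range(up_to):
        L = length_set(n, gens)
        deltas.update(L[i + 1] - L[i] for i in range(len(L) - 1))
    return sorted(deltas)

# e.g. delta_set((7, 9, 11))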
Riemann-Hilbert problems for the resolved conifold
We study the Riemann-Hilbert problems associated to the Donaldson-Thomas theory of the resolved conifold. We give explicit solutions in terms of the Barnes double and triple sine functions. We show that the corresponding tau function is a non-perturbative partition function, in the sense that its asymptotic expansion coincides with the topological string partition function.
0
0
1
0
0
0
On the Power of Over-parametrization in Neural Networks with Quadratic Activation
We provide new theoretical insights on why over-parametrization is effective in learning neural networks. For a $k$ hidden node shallow network with quadratic activation and $n$ training data points, we show that as long as $ k \ge \sqrt{2n}$, over-parametrization enables local search algorithms to find a \emph{globally} optimal solution for general smooth and convex loss functions. Further, even though the number of parameters may exceed the sample size, we use the theory of Rademacher complexity to show that, with weight decay, the solution also generalizes well if the data is sampled from a regular distribution such as Gaussian. To prove that the loss function has benign landscape properties when $k\ge \sqrt{2n}$, we adopt an idea from smoothed analysis, which may have other applications in studying loss surfaces of neural networks.
0
0
0
1
0
0
Multi-Label Learning with Label Enhancement
The task of multi-label learning is to predict a set of relevant labels for the unseen instance. Traditional multi-label learning algorithms treat each class label as a logical indicator of whether the corresponding label is relevant or irrelevant to the instance, i.e., +1 represents relevant to the instance and -1 represents irrelevant to the instance. Such a label, represented by -1 or +1, is called a logical label. Logical labels cannot reflect differences in label importance. However, for real-world multi-label learning problems, the importance of each possible label is generally different. In real applications, it is difficult to obtain the label importance information directly. Thus we need a method to reconstruct the essential label importance from the logical multi-label data. To solve this problem, we assume that each multi-label instance is described by a vector of latent real-valued labels, which can reflect the importance of the corresponding labels. Such labels are called numerical labels. The process of reconstructing the numerical labels from the logical multi-label data, by utilizing the logical label information and the topological structure in the feature space, is called Label Enhancement. In this paper, we propose a novel multi-label learning framework called LEMLL, i.e., Label Enhanced Multi-Label Learning, which incorporates regression of the numerical labels and label enhancement into a unified framework. Extensive comparative studies validate that the performance of multi-label learning can be improved significantly with label enhancement and that LEMLL can effectively reconstruct latent label importance information from logical multi-label data.
1
0
0
0
0
0
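Label enhancement as described above must turn {-1, +1} logical labels into graded numerical ones using the feature-space topology. One simple realization of that idea, not the LEMLL algorithm itself, is to smooth the logical labels over a k-nearest-neighbour graph; the sketch below solves the resulting regularized least-squares problem, with k and lam as illustrative parameters.

# Graph-smoothing label enhancement: a simple stand-in, not LEMLL itself.
import numpy as np
from sklearn.neighbors import kneighbors_graph

def enhance_labels(X, Y_logical, k=10, lam=1.0):
    """X: (n, d) features; Y_logical: (n, L) entries in {-1, +1}.
    Minimizes ||Y - Y_logical||^2 + lam * tr(Y^T Lap Y)."""
    W = kneighbors_graph(X, k, mode='connectivity', include_self=False)
    W = 0.5 * (W + W.T).toarray()               # symmetrize
    Lap = np.diag(W.sum(axis=1)) - W            # graph Laplacian
    return np.linalg.solve(np.eye(len(X)) + lam * Lap, Y_logical)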
Unsure When to Stop? Ask Your Semantic Neighbors
In iterative supervised learning algorithms it is common to reach a point in the search where no further induction seems to be possible with the available data. If the search is continued beyond this point, the risk of overfitting increases significantly. Following the recent developments in inductive semantic stochastic methods, this paper studies the feasibility of using information gathered from the semantic neighborhood to decide when to stop the search. Two semantic stopping criteria are proposed and experimentally assessed in Geometric Semantic Genetic Programming (GSGP) and in the Semantic Learning Machine (SLM) algorithm (the equivalent algorithm for neural networks). The experiments are performed on real-world high-dimensional regression datasets. The results show that the proposed semantic stopping criteria are able to detect stopping points that result in a competitive generalization for both GSGP and SLM. This approach also yields computationally efficient algorithms as it allows the evolution of neural networks in less than 3 seconds on average, and of GP trees in at most 10 seconds. The usage of the proposed semantic stopping criteria in conjunction with the computation of optimal mutation/learning steps also results in small trees and neural networks.
1
0
0
1
0
0
Deep Generative Learning via Variational Gradient Flow
We propose a general framework to learn deep generative models via \textbf{V}ariational \textbf{Gr}adient Fl\textbf{ow} (VGrow) on probability spaces. The evolving distribution that asymptotically converges to the target distribution is governed by a vector field, which is the negative gradient of the first variation of the $f$-divergence between them. We prove that the evolving distribution coincides with the pushforward distribution through the infinitesimal time composition of residual maps that are perturbations of the identity map along the vector field. The vector field depends on the density ratio of the pushforward distribution and the target distribution, which can be consistently learned from a binary classification problem. Connections of our proposed VGrow method with other popular methods, such as VAE, GAN and flow-based methods, have been established in this framework, gaining new insights into deep generative learning. We also evaluated several commonly used divergences, including Kullback-Leibler, Jensen-Shannon and Jeffrey divergences, as well as our newly discovered `logD' divergence, which serves as the objective function of the logD-trick GAN. Experimental results on benchmark datasets demonstrate that VGrow can generate high-fidelity images in a stable and efficient manner, achieving competitive performance with state-of-the-art GANs.
1
0
0
1
0
0
Warming trend in cold season of the Yangtze River Delta and its correlation with Siberian high
Based on the meteorological data from 1960 to 2010, we investigated the temperature variation in the Yangtze River Delta (YRD) by using Mann-Kendall nonparametric test and explored the correlation between the temperature in the cold season and the Siberian high intensity (SHI) by using correlation analysis method. The main results are that (a) the temperature in YRD increased remarkably during the study period, (b) the warming trend in the cold season made the higher contribution to annual warming, and (c) there was a significant negative correlation between the temperature in the cold season and the SHI.
0
0
0
1
0
0
Modeling and Quantifying the Forces Driving Online Video Popularity Evolution
Video popularity is an essential reference for optimizing resource allocation and video recommendation in online video services. However, there is still no convincing model that can accurately depict a video's popularity evolution. In this paper, we propose a dynamic popularity model by modeling the video information diffusion process driven by various forms of recommendation. Through fitting the model with real traces collected from a practical system, we can quantify the strengths of the recommendation forces. Such quantification can lead to characterizing video popularity patterns, user behaviors and recommendation strategies, which is illustrated by a case study of TV episodes.
1
0
0
0
0
0
Multiagent Bidirectionally-Coordinated Nets: Emergence of Human-level Coordination in Learning to Play StarCraft Combat Games
Many artificial intelligence (AI) applications often require multiple intelligent agents to work in a collaborative effort. Efficient learning for intra-agent communication and coordination is an indispensable step towards general AI. In this paper, we take the StarCraft combat game as a case study, where the task is to coordinate multiple agents as a team to defeat their enemies. To maintain a scalable yet effective communication protocol, we introduce a Multiagent Bidirectionally-Coordinated Network (BiCNet ['bIknet]) with a vectorised extension of the actor-critic formulation. We show that BiCNet can handle different types of combats with arbitrary numbers of AI agents for both sides. Our analysis demonstrates that without any supervision such as human demonstrations or labelled data, BiCNet could learn various types of advanced coordination strategies that have been commonly used by experienced game players. In our experiments, we evaluate our approach against multiple baselines under different scenarios; it shows state-of-the-art performance, and possesses potential value for large-scale real-world applications.
1
0
0
0
0
0
Measurement of the Lorentz-FitzGerald Body Contraction
A complete foundational discussion of acceleration in the context of Special Relativity is presented. Acceleration allows the measurement of the Lorentz-FitzGerald body contraction it creates. It is argued that in the back scattering of a probing laser beam from a relativistic flying electron cloud mirror generated by an ultra-intense laser pulse, a first measurement of a Lorentz-FitzGerald body contraction is feasible.
0
1
0
0
0
0
Information Directed Sampling for Stochastic Bandits with Graph Feedback
We consider stochastic multi-armed bandit problems with graph feedback, where the decision maker is allowed to observe the neighboring actions of the chosen action. We allow the graph structure to vary with time and consider both deterministic and Erdős-Rényi random graph models. For such a graph feedback model, we first present a novel analysis of Thompson sampling that leads to a tighter performance bound than existing work. Next, we propose new Information Directed Sampling based policies that are graph-aware in their decision making. Under the deterministic graph case, we establish a Bayesian regret bound for the proposed policies that scales with the clique cover number of the graph instead of the number of actions. Under the random graph case, we provide a Bayesian regret bound for the proposed policies that scales with the ratio of the number of actions over the expected number of observations per iteration. To the best of our knowledge, this is the first analytical result for stochastic bandits with random graph feedback. Finally, using numerical evaluations, we demonstrate that our proposed IDS policies outperform existing approaches, including adaptations of the upper confidence bound, $\epsilon$-greedy and Exp3 algorithms.
1
0
0
1
0
0
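Generic information-directed sampling, which the abstract above adapts to graph feedback, picks the action minimizing the ratio of squared expected regret to information gain. A toy version computed from posterior samples is sketched below; the variance-based gain is a crude stand-in, since the paper's graph-aware policies also credit an action with the information revealed about its neighbours.

# Toy IDS step from M posterior draws of K arm means (generic recipe;
# the paper's graph-aware information gain is more refined).
import numpy as np

def ids_action(posterior_samples):
    """posterior_samples: array of shape (M, K)."""
    mu = posterior_samples.mean(axis=0)
    best = posterior_samples.max(axis=1).mean()    # E[max_a theta_a]
    gap = best - mu                                # expected regret per arm
    gain = posterior_samples.var(axis=0) + 1e-12   # crude information proxy
    return int(np.argmin(gap ** 2 / gain))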
Multi-Evidence Filtering and Fusion for Multi-Label Classification, Object Detection and Semantic Segmentation Based on Weakly Supervised Learning
Supervised object detection and semantic segmentation require object or even pixel level annotations. When there exist image level labels only, it is challenging for weakly supervised algorithms to achieve accurate predictions. The accuracy achieved by top weakly supervised algorithms is still significantly lower than their fully supervised counterparts. In this paper, we propose a novel weakly supervised curriculum learning pipeline for multi-label object recognition, detection and semantic segmentation. In this pipeline, we first obtain intermediate object localization and pixel labeling results for the training images, and then use such results to train task-specific deep networks in a fully supervised manner. The entire process consists of four stages, including object localization in the training images, filtering and fusing object instances, pixel labeling for the training images, and task-specific network training. To obtain clean object instances in the training images, we propose a novel algorithm for filtering, fusing and classifying object instances collected from multiple solution mechanisms. In this algorithm, we incorporate both metric learning and density-based clustering to filter detected object instances. Experiments show that our weakly supervised pipeline achieves state-of-the-art results in multi-label image classification as well as weakly supervised object detection and very competitive results in weakly supervised semantic segmentation on MS-COCO, PASCAL VOC 2007 and PASCAL VOC 2012.
0
0
0
1
0
0
Hausdorff operators on holomorphic Hardy spaces and applications
The aim of this paper is to characterize the nonnegative functions $\varphi$ defined on $(0,\infty)$ for which the Hausdorff operator $$\mathscr H_\varphi f(z)= \int_0^\infty f\left(\frac{z}{t}\right)\frac{\varphi(t)}{t}dt$$ is bounded on the Hardy spaces of the upper half-plane $\mathcal H_a^p(\mathbb C_+)$, $p\in[1,\infty]$. The corresponding operator norms and their applications are also given.
0
0
1
0
0
0
Three-dimensional color code thresholds via statistical-mechanical mapping
Three-dimensional (3D) color codes have advantages for fault-tolerant quantum computing, such as protected quantum gates with relatively low overhead and robustness against imperfect measurement of error syndromes. Here we investigate the storage threshold error rates for bit-flip and phase-flip noise in the 3D color code on the body-centered cubic lattice, assuming perfect syndrome measurements. In particular, by exploiting a connection between error correction and statistical mechanics, we estimate the threshold for 1D string-like and 2D sheet-like logical operators to be $p^{(1)}_\mathrm{3DCC} \simeq 1.9\%$ and $p^{(2)}_\mathrm{3DCC} \simeq 27.6\%$. We obtain these results by using parallel tempering Monte Carlo simulations to study the disorder-temperature phase diagrams of two new 3D statistical-mechanical models: the 4- and 6-body random coupling Ising models.
0
1
0
0
0
0
Does Your Phone Know Your Touch?
This paper explores supervised techniques for continuous anomaly detection from biometric touch screen data. A capacitive sensor array mimicking a touch screen was used to collect touch and swipe gestures from participants. The gestures are recorded over fixed segments of time, with position and force measured for each gesture. Support Vector Machine, Logistic Regression, and Gaussian mixture models were tested to learn individual touch patterns. Test results showed true negative and true positive rates of over 95% for all gesture types, with logistic regression models far outperforming the other methods. A more expansive and varied data collection over longer periods of time is needed to determine pragmatic usage of these results.
0
0
0
1
0
0
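Since logistic regression is reported above as the clear winner, a minimal version of such a verifier is easy to sketch with scikit-learn. The feature layout and the synthetic data below are assumptions standing in for the per-gesture position and force statistics collected in the study.

# Minimal genuine-vs-impostor gesture verifier (synthetic stand-in data).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 6))      # stand-in for [x, y, force, dx, dy, duration]
y = rng.integers(0, 2, size=400)   # 1 = genuine user, 0 = impostor

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
print(f"TNR = {tn / (tn + fp):.2f}, TPR = {tp / (tp + fn):.2f}")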
Nucleus: A Pilot Project
Early in 2016, an environmental scan was conducted by the Research Library Data Working Group for three purposes: 1.) Perform a survey of the data management landscape at Los Alamos National Laboratory in order to identify local gaps in data management services. 2.) Conduct an environmental scan of external institutions to benchmark budgets, infrastructure, and personnel dedicated to data management. 3.) Draft a research data infrastructure model that aligns with the current workflow and classification restrictions at Los Alamos National Laboratory. This report is a summary of those activities and the draft for a pilot data management project.
1
0
0
0
0
0
Non Volatile MoS$_{2}$ Field Effect Transistors Directly Gated By Single Crystalline Epitaxial Ferroelectric
We demonstrate non-volatile, n-type, back-gated MoS$_{2}$ transistors, placed directly on an epitaxially grown, single crystalline PbZr$_{0.2}$Ti$_{0.8}$O$_{3}$ (PZT) ferroelectric. The transistors show decent ON current (19 ${\mu}A/{\mu}$m), a high on-off ratio (10$^{7}$), and a subthreshold swing of SS ~ 92 mV/dec with a 100 nm thick PZT layer as the back gate oxide. Importantly, the ferroelectric polarization can directly control the channel charge, showing a clear anti-clockwise hysteresis. We have self-consistently confirmed the switching of the ferroelectric and the corresponding change in channel current from a direct time-dependent measurement. Our results demonstrate that it is possible to obtain transistor operation directly on polar surfaces and therefore it should be possible to integrate 2D electronics with single crystalline functional oxides.
0
1
0
0
0
0
Fast and Accurate Sparse Coding of Visual Stimuli with a Simple, Ultra-Low-Energy Spiking Architecture
Memristive crossbars have become a popular means for realizing unsupervised and supervised learning techniques. In previous neuromorphic architectures with leaky integrate-and-fire neurons, the crossbar itself has been separated from the neuron capacitors to preserve mathematical rigor. In this work, we sought to simplify the design, creating a fast circuit that consumed significantly lower power at a minimal cost of accuracy. We also showed that connecting the neurons directly to the crossbar resulted in a more efficient sparse coding architecture, and alleviated the need to pre-normalize receptive fields. This work provides derivations for the design of such a network, named the Simple Spiking Locally Competitive Algorithm, or SSLCA, as well as CMOS designs and results on the CIFAR and MNIST datasets. Compared to a non-spiking model which scored 33% on CIFAR-10 with a single-layer classifier, this hardware scored 32% accuracy. When used with a state-of-the-art deep learning classifier, the non-spiking model achieved 82% and our simplified, spiking model achieved 80%, while compressing the input data by 92%. Compared to a previously proposed spiking model, our proposed hardware consumed 99% less energy to do the same work at 21x the throughput. Accuracy held out with online learning to a write variance of 3%, suitable for the often-reported 4-bit resolution required for neuromorphic algorithms; with offline learning to a write variance of 27%; and with read variance to 40%. The proposed architecture's excellent accuracy, throughput, and significantly lower energy usage demonstrate the utility of our innovations.
1
0
0
0
0
0
Astronomy of Cholanaikkan tribe of Kerala
Cholanaikkans are a diminishing tribe of India. With a population of fewer than 200 members living in the reserved forests about 80 km from Kozhikode, it is one of the most isolated tribes. A programme of the Government of Kerala brings some of them to Kozhikode once a year. We studied various aspects of the tribe during such a visit in 2016. We report on their science and technology.
0
1
0
0
0
0
Integral Equations and Machine Learning
As both light transport simulation and reinforcement learning are ruled by the same Fredholm integral equation of the second kind, reinforcement learning techniques may be used for photorealistic image synthesis: Efficiency may be dramatically improved by guiding light transport paths by an approximate solution of the integral equation that is learned during rendering. In the light of the recent advances in reinforcement learning for playing games, we investigate the representation of an approximate solution of an integral equation by artificial neural networks and derive a loss function for that purpose. The resulting Monte Carlo and quasi-Monte Carlo methods train neural networks with standard information instead of linear information and naturally are able to generate an arbitrary number of training samples. The methods are demonstrated for applications in light transport simulation.
1
0
0
0
0
0
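The correspondence the abstract above rests on can be written out explicitly: both the light transport equation and the Bellman expectation equation are Fredholm integral equations of the second kind, i.e. fixed points of an affine integral operator.

$$ f(x) = g(x) + \int K(x,y)\, f(y)\, \mathrm{d}y, \qquad Q^{\pi}(s,a) = r(s,a) + \gamma \int p(s' \mid s,a)\, Q^{\pi}\bigl(s', \pi(s')\bigr)\, \mathrm{d}s'. $$

Reading the radiance along a path as a value and the scattering kernel as a transition density is what lets value-learning machinery guide light transport paths.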
Experiments on bright field and dark field high energy electron imaging with thick target material
Using a high energy electron beam for the imaging of high density matter with both high spatial-temporal and areal density resolution under extreme states of temperature and pressure is one of the critical challenges in high energy density physics. When a charged particle beam passes through an opaque target, the beam will be scattered with a distribution that depends on the thickness of the material. By collecting the scattered beam either near or off axis, so-called bright field or dark field images can be obtained. Here we report on an electron radiography experiment using 45 MeV electrons from an S-band photo-injector, where scattered electrons, after interacting with a sample, are collected and imaged by a quadrupole imaging system. We achieved a spatial resolution of a few micrometers (about 4 micrometers) and a thickness resolution of about 10 micrometers for a silicon target 300-600 micrometers thick. With the addition of dark field images, captured by selecting electrons with large scattering angles, we show that more useful information about external details such as outlines, boundaries and defects can be obtained.
0
1
0
0
0
0
A Non-linear Approach to Space Dimension Perception by a Naive Agent
Developmental Robotics offers a new approach to numerous AI features that are often taken for granted. Traditionally, perception is supposed to be an inherent capacity of the agent. Moreover, it largely relies on models built by the system's designer. A new approach is to consider perception as an experimentally acquired ability that is learned exclusively through the analysis of the agent's sensorimotor flow. Previous works, based on H. Poincaré's intuitions and the sensorimotor contingencies theory, allow a simulated agent to extract the dimension of the geometrical space in which it is immersed without any a priori knowledge. Those results are limited to infinitesimal movement amplitudes of the system. In this paper, a non-linear dimension estimation method is proposed to push back this limitation.
1
0
0
0
0
0
Foolbox: A Python toolbox to benchmark the robustness of machine learning models
Even today's most advanced machine learning models are easily fooled by almost imperceptible perturbations of their inputs. Foolbox is a new Python package to generate such adversarial perturbations and to quantify and compare the robustness of machine learning models. It is built around the idea that the most comparable robustness measure is the minimum perturbation needed to craft an adversarial example. To this end, Foolbox provides reference implementations of most published adversarial attack methods alongside some new ones, all of which perform internal hyperparameter tuning to find the minimum adversarial perturbation. Additionally, Foolbox interfaces with most popular deep learning frameworks such as PyTorch, Keras, TensorFlow, Theano and MXNet and allows different adversarial criteria such as targeted misclassification and top-k misclassification as well as different distance measures. The code is licensed under the MIT license and is openly available at this https URL. The most up-to-date documentation can be found at this http URL.
1
0
0
1
0
0
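A usage sketch in the spirit of the Foolbox 1.x API of that era follows; the module and function names match the package's early documentation as best recalled here, but signatures changed in later releases, so treat it as illustrative rather than authoritative.

# Finding a minimal adversarial perturbation with Foolbox (1.x-era API).
import foolbox
import keras
import numpy as np
from keras.applications.resnet50 import ResNet50

keras.backend.set_learning_phase(0)
kmodel = ResNet50(weights='imagenet')            # any trained classifier
fmodel = foolbox.models.KerasModel(kmodel, bounds=(0, 255))

image, label = foolbox.utils.imagenet_example()

attack = foolbox.attacks.FGSM(fmodel)            # internally tunes step size
adversarial = attack(image, label)
print("L2 norm of minimal perturbation found:",
      np.linalg.norm(adversarial - image))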
Two-dimensional boron on Pb (110) surface
We simulate boron on a Pb(110) surface by using ab initio evolutionary methodology. Interestingly, the two-dimensional (2D) Dirac Pmmn boron can be formed because of good lattice matching. Unexpectedly, by increasing the thickness of 2D boron, a three-bonded graphene-like structure (P2_1/c boron) was revealed to possess double anisotropic Dirac cones. It is 20 meV/atom lower in energy than the Pmmn structure, indicating the most stable 2D boron with particular Dirac cones. The puckered structure of P2_1/c boron results in the peculiar Dirac cones, as well as substantial mechanical anisotropy. The calculated Young's modulus is 320 GPa·nm along the zigzag direction, which is comparable with that of graphene.
0
1
0
0
0
0
SOLAR: Deep Structured Latent Representations for Model-Based Reinforcement Learning
Model-based reinforcement learning (RL) methods can be broadly categorized as global model methods, which depend on learning models that provide sensible predictions in a wide range of states, or local model methods, which iteratively refit simple models that are used for policy improvement. While predicting future states that will result from the current actions is difficult, local model methods only attempt to understand system dynamics in the neighborhood of the current policy, making it possible to produce local improvements without ever learning to predict accurately far into the future. The main idea in this paper is that we can learn representations that make it easy to retrospectively infer simple dynamics given the data from the current policy, thus enabling local models to be used for policy learning in complex systems. To that end, we focus on learning representations with probabilistic graphical model (PGM) structure, which allows us to devise an efficient local model method that infers dynamics from real-world rollouts with the PGM as a global prior. We compare our method to other model-based and model-free RL methods on a suite of robotics tasks, including manipulation tasks on a real Sawyer robotic arm directly from camera images. Videos of our results are available at this https URL
1
0
0
1
0
0
Robust and Fast Decoding of High-Capacity Color QR Codes for Mobile Applications
The use of color in QR codes brings extra data capacity, but also inflicts tremendous challenges on the decoding process due to chromatic distortion, cross-channel color interference and illumination variation. Particularly, we further discover a new type of chromatic distortion in high-density color QR codes, cross-module color interference, caused by the high density which also makes the geometric distortion correction more challenging. To address these problems, we propose two approaches, namely, LSVM-CMI and QDA-CMI, which jointly model these different types of chromatic distortion. Extended from SVM and QDA, respectively, both LSVM-CMI and QDA-CMI optimize over a particular objective function to learn a color classifier. Furthermore, a robust geometric transformation method and several pipeline refinements are proposed to boost the decoding performance for mobile applications. We put forth and implement a framework for high-capacity color QR codes equipped with our methods, called HiQ. To evaluate the performance of HiQ, we collect a challenging large-scale color QR code dataset, CUHK-CQRC, which consists of 5390 high-density color QR code samples. The comparison with the baseline method [2] on CUHK-CQRC shows that HiQ at least outperforms [2] by 188% in decoding success rate and 60% in bit error rate. Our implementation of HiQ in iOS and Android also demonstrates the effectiveness of our framework in real-world applications.
1
0
0
0
0
0
The quest for H$_3^+$ at Neptune: deep burn observations with NASA IRTF iSHELL
Emission from the molecular ion H$_3^+$ is a powerful diagnostic of the upper atmosphere of Jupiter, Saturn, and Uranus, but it remains undetected at Neptune. In search of this emission, we present near-infrared spectral observations of Neptune between 3.93 and 4.00 $\mu$m taken with the newly commissioned iSHELL instrument on the NASA Infrared Telescope Facility in Hawaii, obtained 17-20 August 2017. We spent 15.4 h integrating across the disk of the planet, yet were unable to unambiguously identify any H$_3^+$ line emissions. Assuming a temperature of 550 K, we derive an upper limit on the column integrated density of $1.0^{+1.2}_{-0.8}\times10^{13}$ m$^{-2}$, which is an improvement of 30\% on the best previous observational constraint. This result means that models are over-estimating the density by at least a factor of 5, highlighting the need for renewed modelling efforts. A potential solution is strong vertical mixing of polyatomic neutral species from Neptune's upper stratosphere to the thermosphere, reacting with H$_3^+$ and thus greatly reducing the column integrated H$_3^+$ densities. This upper limit also provides constraints on future attempts at detecting H$_3^+$ using the James Webb Space Telescope.
0
1
0
0
0
0
Unreasonable Effectiveness of Deep Learning
We show how the well-known rules of back propagation arise from a weighted combination of finite automata. By redefining a finite automaton as a predictor, we combine the set of all $k$-state finite automata using a weighted majority algorithm. This aggregated prediction algorithm can be simplified using symmetry, and we prove the equivalence of the simplified algorithm. We demonstrate that this algorithm is equivalent to a form of back propagation acting in a completely connected $k$-node neural network. Thus the use of the weighted majority algorithm allows a bound on the general performance of deep learning approaches to prediction via known results from online statistics. The presented framework opens more detailed questions about network topology; it is a bridge to the well-studied techniques of semigroup theory, and applying these techniques can answer what specific network topologies are capable of predicting. This informs both the design of artificial networks and the exploration of neuroscience models.
0
0
0
1
0
0
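The primitive underlying the construction above is the classic weighted majority algorithm: each expert (here, a finite automaton run as a predictor) keeps a weight that is multiplicatively penalized whenever it errs. A minimal sketch, with experts abstracted as callables over the outcome history:

# Weighted majority over a pool of binary predictors.
import numpy as np

def weighted_majority(experts, stream, beta=0.5):
    """experts: callables mapping the outcome history to a 0/1 prediction;
    stream: iterable of true outcomes in {0, 1}. Returns the mistake count."""
    w = np.ones(len(experts))
    history, mistakes = [], 0
    for outcome in stream:
        votes = np.array([e(history) for e in experts])
        prediction = int(w @ votes >= w.sum() / 2)      # weighted vote
        mistakes += int(prediction != outcome)
        w *= np.where(votes == outcome, 1.0, beta)      # penalize errors
        history.append(outcome)
    return mistakes

The standard mistake bound for this scheme, logarithmic in the number of experts, is what translates into the performance bound claimed for the aggregated network.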