Dataset schema (one record per paper: a title, an abstract, and six binary topic flags):

    column                        type    values
    title                         string  7 to 239 characters
    abstract                      string  7 to 2.76k characters
    cs                            int64   0 or 1
    phy                           int64   0 or 1
    math                          int64   0 or 1
    stat                          int64   0 or 1
    quantitative biology (q-bio)  int64   0 or 1
    quantitative finance (q-fin)  int64   0 or 1
Duality and Universal Transport in a Mixed-Dimension Electrodynamics
We consider a theory of a two-component Dirac fermion localized on a (2+1)-dimensional brane coupled to a (3+1)-dimensional bulk. Using the fermionic particle-vortex duality, we show that the theory has a strong-weak duality that maps the coupling $e$ to $\tilde e=(8\pi)/e$. We explore the theory at $e^2=8\pi$, where it is self-dual. The electrical conductivity of the theory is a constant independent of frequency. When the system is at finite density and magnetic field at filling factor $\nu=\frac12$, the longitudinal and Hall conductivities satisfy a semicircle law, and the ratio of the longitudinal and Hall thermoelectric coefficients is completely determined by the Hall angle. The thermal Hall conductivity is directly related to the thermoelectric coefficients.
Labels: cs=0, phy=1, math=0, stat=0, q-bio=0, q-fin=0
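The self-dual point quoted in the abstract above follows in one line from the stated strong-weak map:

```latex
\tilde e = \frac{8\pi}{e}, \qquad \tilde e = e \;\Longrightarrow\; e^{2} = 8\pi .
```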
Beyond similarity assessment: Selecting the optimal model for sequence alignment via the Factorized Asymptotic Bayesian algorithm
Pair Hidden Markov Models (PHMMs) are probabilistic models used for pairwise sequence alignment, a quintessential problem in bioinformatics. PHMMs include three types of hidden states: match, insertion and deletion. Most previous studies have used one or two hidden states for each PHMM state type. However, few studies have examined the number of states suitable for representing sequence data or improving alignment accuracy. We developed a novel method to select superior models (including the number of hidden states) for PHMMs. Our method selects models with the highest posterior probability using the Factorized Information Criterion (FIC), which is widely utilised in model selection for probabilistic models with hidden variables. Our simulations indicated that this method has excellent model selection capabilities with slightly improved alignment accuracy. We applied our method to DNA datasets from 5 and 28 species, ultimately selecting more complex models than those used in previous studies.
Labels: cs=0, phy=0, math=0, stat=1, q-bio=0, q-fin=0
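The FIC-based selection above is specific to PHMMs; the overall select-the-best-penalized-score loop, though, is easy to sketch. Below is a minimal illustration using ordinary Gaussian HMMs from hmmlearn and BIC as stand-ins for PHMMs and FIC; the data, candidate range, and parameter count are illustrative assumptions, not the paper's method.

```python
# Model selection over the number of hidden states: fit each candidate,
# score it with a penalized likelihood, keep the best. BIC stands in
# for the FIC criterion used in the paper.
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 1))            # toy observation sequence

def bic(model, X):
    k = model.n_components
    n_params = k * (k - 1) + 2 * k       # rough count: transitions + Gaussians
    return -2 * model.score(X) + n_params * np.log(len(X))

scores = {k: bic(GaussianHMM(n_components=k, n_iter=50,
                             random_state=0).fit(X), X)
          for k in range(1, 6)}          # candidate numbers of hidden states
print(min(scores, key=scores.get), scores)
```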
Experimental evidence for Glycolaldehyde and Ethylene Glycol formation by surface hydrogenation of CO molecules under dense molecular cloud conditions
This study focuses on the formation of two molecules of astrobiological importance - glycolaldehyde (HC(O)CH2OH) and ethylene glycol (H2C(OH)CH2OH) - by surface hydrogenation of CO molecules. Our experiments aim at simulating the CO freeze-out stage in interstellar dark cloud regions, well before thermal and energetic processing become dominant. It is shown that, along with the formation of H2CO and CH3OH - two well-established products of CO hydrogenation - molecules with more than one carbon atom also form. The key step in this process is believed to be the recombination of two HCO radicals followed by the formation of a C-C bond. The experimentally established reaction pathways are implemented into a continuous-time random-walk Monte Carlo model, previously used to model the formation of CH3OH on astrochemical time-scales, to study their impact on the solid-state abundances of glycolaldehyde and ethylene glycol in dense interstellar clouds.
Labels: cs=0, phy=1, math=0, stat=0, q-bio=0, q-fin=0
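The continuous-time random-walk Monte Carlo model cited above is, at its core, a Gillespie-style kinetic Monte Carlo loop: draw an exponential waiting time from the total reaction propensity, then pick one reaction in proportion to its rate. The sketch below runs that loop for a two-reaction caricature of the chemistry (H + CO -> HCO, then two HCO radicals forming the C-C bonded product); all rate coefficients and populations are hypothetical.

```python
# Toy Gillespie kinetic Monte Carlo for CO surface hydrogenation.
import numpy as np

rng = np.random.default_rng(1)
counts = {"H": 2000, "CO": 1000, "HCO": 0, "CC": 0}
k_hyd, k_rec = 1.0e-3, 5.0e-4            # hypothetical rate coefficients

t = 0.0
while True:
    a1 = k_hyd * counts["H"] * counts["CO"]                 # H + CO -> HCO
    a2 = k_rec * counts["HCO"] * (counts["HCO"] - 1) / 2    # HCO + HCO -> CC
    a0 = a1 + a2
    if a0 == 0:
        break
    t += rng.exponential(1.0 / a0)       # exponential waiting time
    if rng.random() < a1 / a0:
        counts["H"] -= 1; counts["CO"] -= 1; counts["HCO"] += 1
    else:
        counts["HCO"] -= 2; counts["CC"] += 1               # C-C bond formed
print(round(t, 1), counts)
```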
New Methods of Enhancing Prediction Accuracy in Linear Models with Missing Data
In this paper, prediction for linear systems with missing information is investigated. New methods are introduced to improve the Mean Squared Error (MSE) on the test set in comparison to state-of-the-art methods, through appropriate tuning of the bias-variance trade-off. First, the proposed Soft Weighted Prediction (SWP) algorithm and its efficacy are demonstrated and compared to previous works for non-missing scenarios. The algorithm is then modified and optimized for missing scenarios. It is shown that controlled over-fitting by the suggested algorithms improves prediction accuracy in various cases. Simulation results confirm that our heuristics enhance prediction accuracy.
Labels: cs=1, phy=0, math=0, stat=1, q-bio=0, q-fin=0
Revisiting Imidazolium Based Ionic Liquids: Effect of the Conformation Bias of the [NTf$_{2}$] Anion Studied By Molecular Dynamics Simulations
We study ionic liquids composed of 1-alkyl-3-methylimidazolium cations and bis(trifluoromethyl-sulfonyl)imide anions ([C$_n$MIm][NTf$_2$]) with varying chain-length $n\!=\!2, 4, 6, 8$ by using molecular dynamics simulations. We show that a reparametrization of the dihedral potentials as well as the charges of the [NTf$_2$] anion leads to an improvement of the force field model introduced by Köddermann {\em et al.} [ChemPhysChem, \textbf{8}, 2464 (2007)] (KPL force field). A crucial advantage of the new parameter set is that the minimum energy conformations of the anion ({\em trans} and {\em gauche}), as deduced from {\em ab initio} calculations and {\sc Raman} experiments, are now both well represented by our model. In addition, the results for [C$_n$MIm][NTf$_2$] show that this modification leads to an even better agreement between experiment and molecular dynamics simulation, as demonstrated for densities, diffusion coefficients, vaporization enthalpies, reorientational correlation times, and viscosities. Even though we focused on a better representation of the anion conformation, the alkyl chain-length dependence of the cation also agrees more closely with experiment. We strongly encourage the use of the new NGKPL force field for the [NTf$_2$] anion instead of the earlier KPL parameter set for computer simulations aiming to describe the thermodynamics, dynamics, and structure of imidazolium-based ionic liquids.
Labels: cs=0, phy=1, math=0, stat=0, q-bio=0, q-fin=0
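Dihedral reparametrizations like the one above typically act on a standard cosine-series torsion potential. The snippet evaluates that generic functional form; the coefficients are placeholders, not the NGKPL values.

```python
# Generic cosine-series dihedral potential:
# V(phi) = sum_n 0.5 * V_n * (1 + cos(n*phi - gamma_n))
import numpy as np

def dihedral_potential(phi, V, gamma):
    n = np.arange(1, len(V) + 1)
    return np.sum(0.5 * np.asarray(V) * (1 + np.cos(n * phi - np.asarray(gamma))))

V, gamma = [2.0, 0.5, 1.0], [0.0, np.pi, 0.0]   # placeholder parameters (kJ/mol, rad)
for deg in (-180, -90, 0, 90, 180):
    print(deg, round(dihedral_potential(np.radians(deg), V, gamma), 3))
```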
Tick: a Python library for statistical learning, with a particular emphasis on time-dependent modelling
Tick is a statistical learning library for Python~3, with a particular emphasis on time-dependent models, such as point processes, and tools for generalized linear models and survival analysis. The core of the library is an optimization module providing model computational classes, solvers and proximal operators for regularization. Tick relies on a C++ implementation and state-of-the-art optimization algorithms to provide very fast computations in a single node multi-core setting. Source code and documentation can be downloaded from this https URL
Labels: cs=0, phy=0, math=0, stat=1, q-bio=0, q-fin=0
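For a flavor of the point-process workflow the library targets, here is a sketch that simulates a two-node Hawkes process and refits it. Class names and signatures follow the tick documentation as best recalled here and should be treated as assumptions rather than authoritative API.

```python
# Simulate a Hawkes process with exponential kernels, then refit it.
import numpy as np
from tick.hawkes import SimuHawkesExpKernels, HawkesExpKern

adjacency = np.array([[0.2, 0.1], [0.0, 0.3]])   # excitation weights
decays = np.full((2, 2), 3.0)
baseline = np.array([0.5, 0.4])

sim = SimuHawkesExpKernels(adjacency=adjacency, decays=decays,
                           baseline=baseline, end_time=1000, seed=42)
sim.simulate()

learner = HawkesExpKern(decays=3.0)               # least-squares goodness-of-fit
learner.fit(sim.timestamps)
print(learner.adjacency)                          # should approximate `adjacency`
```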
An energy method for rough partial differential equations
We present a well-posedness and stability result for a class of nondegenerate linear parabolic equations driven by rough paths. More precisely, we introduce a notion of weak solution that satisfies an intrinsic formulation of the equation in a suitable Sobolev space of negative order. Weak solutions are then shown to satisfy the corresponding energy estimates which are deduced directly from the equation. Existence is obtained by showing compactness of a suitable sequence of approximate solutions whereas uniqueness relies on a doubling of variables argument and a careful analysis of the passage to the diagonal. Our result is optimal in the sense that the assumptions on the deterministic part of the equation as well as the initial condition are the same as in the classical PDE theory.
Labels: cs=0, phy=0, math=1, stat=0, q-bio=0, q-fin=0
Sparse Inverse Covariance Estimation for Chordal Structures
In this paper, we consider the Graphical Lasso (GL), a popular optimization problem for learning the sparse representations of high-dimensional datasets, which is well-known to be computationally expensive for large-scale problems. Recently, we have shown that the sparsity pattern of the optimal solution of GL is equivalent to the one obtained from simply thresholding the sample covariance matrix, for sparse graphs under different conditions. We have also derived a closed-form solution that is optimal when the thresholded sample covariance matrix has an acyclic structure. As a major generalization of the previous result, in this paper we derive a closed-form solution for the GL for graphs with chordal structures. We show that the GL and thresholding equivalence conditions can be significantly simplified and are expected to hold for high-dimensional problems if the thresholded sample covariance matrix has a chordal structure. We then show that the GL and thresholding equivalence is enough to reduce the GL to a maximum determinant matrix completion problem, and we derive a recursive closed-form solution for the GL when the thresholded sample covariance matrix has a chordal structure. For large-scale problems with up to 450 million variables, the proposed method can solve the GL problem in less than 2 minutes, while state-of-the-art methods converge in more than 2 hours.
Labels: cs=0, phy=0, math=0, stat=1, q-bio=0, q-fin=0
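The thresholding step described in the abstract above is easy to make concrete: threshold the sample covariance and test whether the resulting sparsity pattern is chordal, the case in which the closed-form solution applies. The data and threshold below are illustrative.

```python
# Threshold a sample covariance and test the pattern for chordality.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))             # toy data, 8 variables
S = np.cov(X, rowvar=False)               # sample covariance
tau = 0.15                                # threshold (problem-dependent)

A = (np.abs(S) > tau) & ~np.eye(8, dtype=bool)
G = nx.from_numpy_array(A.astype(int))
print("chordal sparsity pattern:", nx.is_chordal(G))
```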
Orthogonal free quantum group factors are strongly 1-bounded
We prove that the orthogonal free quantum group factors $\mathcal{L}(\mathbb{F}O_N)$ are strongly $1$-bounded in the sense of Jung. In particular, they are not isomorphic to free group factors. This result is obtained by establishing a spectral regularity result for the edge reversing operator on the quantum Cayley tree associated to $\mathbb{F}O_N$, and combining this result with a recent free entropy dimension rank theorem of Jung and Shlyakhtenko.
Labels: cs=0, phy=0, math=1, stat=0, q-bio=0, q-fin=0
Enhanced spin ordering temperature in ultrathin FeTe films grown on a topological insulator
We studied the temperature dependence of the diagonal double-stripe spin order in one and two unit cell thick layers of FeTe grown on the topological insulator Bi$_2$Te$_3$ via spin-polarized scanning tunneling microscopy. The spin order persists up to temperatures which are higher than the transition temperature reported for bulk Fe$_{1+y}$Te with the lowest possible excess Fe content $y$. The enhanced spin order stability is assigned to a strongly decreased $y$ with respect to the lowest values achievable in bulk crystal growth, and to effects due to the interface between the FeTe and the topological insulator. The result is relevant for understanding the recent observation of a coexistence of superconducting correlations and spin order in this system.
Labels: cs=0, phy=1, math=0, stat=0, q-bio=0, q-fin=0
High Order Hierarchical Divergence-free Constrained Transport $H(div)$ Finite Element Method for Magnetic Induction Equation
In this paper, we will use the interior functions of a hierarchical basis for high order $BDM_p$ elements to enforce the divergence-free condition of a magnetic field $B$ approximated by the H(div) $BDM_p$ basis. The resulting constrained finite element method can be used to solve the magnetic induction equation in the MHD system. The proposed procedure is based on the fact that the scalar $(p-1)$-th order polynomial space on each element can be decomposed as an orthogonal sum of the subspace defined by the divergence of the interior functions of the $p$-th order $BDM_p$ basis and the constant function. Therefore, the interior functions can be used to remove element-wise all higher order terms except the constant in the divergence error of the finite element solution of the $B$-field. The constant terms from each element can then be easily corrected using a first order H(div) basis globally. Numerical results for a 3-D magnetic induction equation show the effectiveness of the proposed method in enforcing the divergence-free condition of the magnetic field.
Labels: cs=0, phy=0, math=1, stat=0, q-bio=0, q-fin=0
REMOTEGATE: Incentive-Compatible Remote Configuration of Security Gateways
Imagine that a malicious hacker is trying to attack a server over the Internet and the server wants to block the attack packets as close to their point of origin as possible. However, the security gateway ahead of the source of attack is untrusted. How can the server block the attack packets through this gateway? In this paper, we introduce REMOTEGATE, a trustworthy mechanism for allowing any party (server) on the Internet to configure a security gateway owned by a second party, at a certain agreed-upon reward that the former pays to the latter for its service. We take an interactive, incentive-compatible approach, for the case when both the server and the gateway are rational, to devise a protocol that will allow the server to help the security gateway generate and deploy a policy rule that filters the attack packets before they reach the server. The server will reward the gateway only when the latter can successfully verify that it has generated and deployed the correct rule for the issue. This mechanism will enable an Internet-scale approach to improving security and privacy, backed by digital payment incentives.
Labels: cs=1, phy=0, math=0, stat=0, q-bio=0, q-fin=0
Distributed Event-Triggered Control for Global Consensus of Multi-Agent Systems with Input Saturation
We consider the global consensus problem for multi-agent systems with input saturation over digraphs. Under a mild connectivity condition that the underlying digraph has a directed spanning tree, we use Lyapunov methods to show that the widely used distributed consensus protocol, which solves the consensus problem for the case without input saturation constraints, also solves the global consensus problem for the case with input saturation constraints. In order to reduce the overall need of communication and system updates, we then propose a distributed event-triggered control law. Global consensus is still realized and Zeno behavior is excluded. Numerical simulations are provided to illustrate the effectiveness of the theoretical results.
Labels: cs=0, phy=0, math=1, stat=0, q-bio=0, q-fin=0
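A toy version of the event-triggered protocol in the abstract above: each agent rebroadcasts its state only when it has drifted from the last broadcast value beyond a threshold, and the consensus input is saturated. The gains and thresholds are illustrative, not the paper's certified conditions.

```python
# Saturated consensus over a directed cycle with event-triggered broadcasts.
import numpy as np

rng = np.random.default_rng(0)
n, dt, steps = 5, 0.01, 2000
L = np.array([[ 1, -1,  0,  0,  0],       # Laplacian of a directed cycle
              [ 0,  1, -1,  0,  0],       # (contains a directed spanning tree)
              [ 0,  0,  1, -1,  0],
              [ 0,  0,  0,  1, -1],
              [-1,  0,  0,  0,  1]], dtype=float)
sat = lambda u, u_max=1.0: np.clip(u, -u_max, u_max)

x = rng.uniform(-5, 5, n)
x_hat = x.copy()                          # last broadcast states
for _ in range(steps):
    event = np.abs(x - x_hat) > 0.05      # trigger rule (threshold assumed)
    x_hat[event] = x[event]               # event: rebroadcast current state
    x += dt * sat(-L @ x_hat)             # saturated consensus protocol
print(np.round(x, 2))                     # states end up (nearly) equal
```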
Autocommuting probability of a finite group relative to its subgroups
Let $H \subseteq K$ be two subgroups of a finite group $G$ and Aut$(K)$ the automorphism group of $K$. The autocommuting probability of $G$ relative to its subgroups $H$ and $K$, denoted by ${\rm Pr}(H, {\rm Aut}(K))$, is the probability that the autocommutator of a randomly chosen pair of elements, one from $H$ and the other from Aut$(K)$, is equal to the identity element of $G$. In this paper, we study ${\rm Pr}(H, {\rm Aut}(K))$ through a generalization.
Labels: cs=0, phy=0, math=1, stat=0, q-bio=0, q-fin=0
Total variation regularization with variable Lebesgue prior
This work proposes the variable exponent Lebesgue modular as a replacement for the 1-norm in total variation (TV) regularization. It allows the exponent to vary with spatial location and thus enables users to locally select whether to preserve edges or smooth intensity variations. In contrast to earlier work using TV-like methods with variable exponents, the exponent function is here computed offline as a fixed parameter of the final optimization problem, resulting in a convex goal functional. The obtained formulas for the convex conjugate and the proximal operators are simple in structure and can be evaluated very efficiently, an important property for practical usability. Numerical results with variable $L^p$ TV prior in denoising and tomography problems on synthetic data compare favorably to total generalized variation (TGV) and TV.
Labels: cs=0, phy=0, math=1, stat=0, q-bio=0, q-fin=0
Radio observations confirm young stellar populations in local analogues to $z\sim5$ Lyman break galaxies
We present radio observations at 1.5 GHz of 32 local objects selected to reproduce the physical properties of $z\sim5$ star-forming galaxies. We also report non-detections of five such sources in the sub-millimetre. We find a radio-derived star formation rate which is typically half that derived from H$\alpha$ emission for the same objects. These observations support previous indications that we are observing galaxies with a young dominant stellar population, which has not yet established a strong supernova-driven synchrotron continuum. We stress caution when applying star formation rate calibrations to stellar populations younger than 100 Myr. We calibrate the conversions for younger galaxies, which are dominated by a thermal radio emission component. We improve the size constraints for these sources, compared to previous unresolved ground-based optical observations. Their physical size limits indicate very high star formation rate surface densities, several orders of magnitude higher than the local galaxy population. In typical nearby galaxies, this would imply the presence of galaxy-wide winds. Given the young stellar populations, it is unclear whether a mechanism exists in our sources that can deposit sufficient kinetic energy into the interstellar medium to drive such outflows.
Labels: cs=0, phy=1, math=0, stat=0, q-bio=0, q-fin=0
Sparse Deep Neural Network Exact Solutions
Deep neural networks (DNNs) have emerged as key enablers of machine learning. Applying larger DNNs to more diverse applications is an important challenge. The computations performed during DNN training and inference are dominated by operations on the weight matrices describing the DNN. As DNNs incorporate more layers and more neurons per layer, these weight matrices may be required to be sparse because of memory limitations. Sparse DNNs are one possible approach, but the underlying theory is in the early stages of development and presents a number of challenges, including determining the accuracy of inference and selecting nonzero weights for training. Associative array algebra has been developed by the big data community to combine and extend database, matrix, and graph/network concepts for use in large, sparse data problems. Applying this mathematics to DNNs simplifies the formulation of DNN mathematics and reveals that DNNs are linear over oscillating semirings. This work uses associative array DNNs to construct exact solutions and corresponding perturbation models to the rectified linear unit (ReLU) DNN equations that can be used to construct test vectors for sparse DNN implementations over various precisions. These solutions can be used for DNN verification, theoretical explorations of DNN properties, and as a starting point for the challenge of sparse training.
Labels: cs=0, phy=0, math=0, stat=1, q-bio=0, q-fin=0
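The core computation discussed above, ReLU inference with sparse weight matrices, fits in a few lines; the random weights and bias here are purely illustrative, not the paper's exact solutions.

```python
# ReLU DNN forward pass y_{l+1} = ReLU(W_l y_l + b) with sparse W_l.
import numpy as np
import scipy.sparse as sp

rng = np.random.default_rng(0)
layers = [sp.random(64, 64, density=0.05, random_state=i, format="csr")
          for i in range(4)]              # sparse weight matrices
b = -0.05                                 # constant bias (illustrative)
relu = lambda z: np.maximum(z, 0)

y = rng.random(64)
for W in layers:
    y = relu(W @ y + b)
print(np.count_nonzero(y), "active neurons at the output")
```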
Variation formulas for an extended Gompf invariant
In 1998, R. Gompf defined a homotopy invariant $\theta_G$ of oriented 2-plane fields in 3-manifolds. This invariant is defined for oriented 2-plane fields $\xi$ in a closed oriented 3-manifold $M$ when the first Chern class $c_1(\xi)$ is a torsion element of $H^2(M;\mathbb{Z})$. In this article, we define an extension of the Gompf invariant for all compact oriented 3-manifolds with boundary and we study its iterated variations under Lagrangian-preserving surgeries. It follows that the extended Gompf invariant is a degree two invariant with respect to a suitable finite type invariant theory.
Labels: cs=0, phy=0, math=1, stat=0, q-bio=0, q-fin=0
A SAT+CAS Approach to Finding Good Matrices: New Examples and Counterexamples
We enumerate all circulant good matrices with odd orders divisible by 3 up to order 70. As a consequence of this we find a previously overlooked set of good matrices of order 27 and a new set of good matrices of order 57. We also find that circulant good matrices do not exist in the orders 51, 63, and 69, thereby finding three new counterexamples to the conjecture that such matrices exist in all odd orders. Additionally, we prove a new relationship between the entries of good matrices and exploit this relationship in our enumeration algorithm. Our method applies the SAT+CAS paradigm of combining computer algebra functionality with modern SAT solvers to efficiently search large spaces which are specified by both algebraic and logical constraints.
Labels: cs=1, phy=0, math=0, stat=0, q-bio=0, q-fin=0
An Exploration of Mimic Architectures for Residual Network Based Spectral Mapping
Spectral mapping uses a deep neural network (DNN) to map directly from noisy speech to clean speech. Our previous study found that the performance of spectral mapping improves greatly when using helpful cues from an acoustic model trained on clean speech. The mapper network learns to mimic the input favored by the spectral classifier and cleans the features accordingly. In this study, we explore two new innovations: we replace a DNN-based spectral mapper with a residual network that is more attuned to the goal of predicting clean speech. We also examine how integrating long term context in the mimic criterion (via wide-residual biLSTM networks) affects the performance of spectral mapping compared to DNNs. Our goal is to derive a model that can be used as a preprocessor for any recognition system; the features derived from our model are passed through the standard Kaldi ASR pipeline and achieve a WER of 9.3%, which is the lowest recorded word error rate for the CHiME-2 dataset using only feature adaptation.
Labels: cs=1, phy=0, math=0, stat=0, q-bio=0, q-fin=0
Deep Neural Networks to Enable Real-time Multimessenger Astrophysics
Gravitational wave astronomy has set in motion a scientific revolution. To further enhance the science reach of this emergent field, there is a pressing need to increase the depth and speed of the gravitational wave algorithms that have enabled these groundbreaking discoveries. To contribute to this effort, we introduce Deep Filtering, a new highly scalable method for end-to-end time-series signal processing, based on a system of two deep convolutional neural networks, which we designed for classification and regression to rapidly detect and estimate parameters of signals in highly noisy time-series data streams. We demonstrate a novel training scheme with gradually increasing noise levels, and a transfer learning procedure between the two networks. We showcase the application of this method for the detection and parameter estimation of gravitational waves from binary black hole mergers. Our results indicate that Deep Filtering significantly outperforms conventional machine learning techniques and achieves performance similar to matched-filtering while being several orders of magnitude faster, thus allowing real-time processing of raw big data with minimal resources. More importantly, Deep Filtering extends the range of gravitational wave signals that can be detected with ground-based gravitational wave detectors. This framework leverages recent advances in artificial intelligence algorithms and emerging hardware architectures, such as deep-learning-optimized GPUs, to facilitate real-time searches of gravitational wave sources and their electromagnetic and astro-particle counterparts.
Labels: cs=1, phy=1, math=0, stat=0, q-bio=0, q-fin=0
Contribution of cellular automata to the understanding of corrosion phenomena
We present a stochastic cellular automata (CA) modelling approach to corrosion based on spatially separated electrochemical half-reactions, diffusion, acido-basic neutralization in solution, and the passive properties of the oxide layers. Starting from different initial conditions, a single framework allows one to describe generalised corrosion, localised corrosion, reactive and passive surfaces, including occluded corrosion phenomena as well. Spontaneous spatial separation of anodic and cathodic zones is associated with bare metal and passivated metal on the surface. This separation is also related to local acidification of the solution. This spontaneous change is associated with a much faster corrosion rate. Material morphology is closely related to corrosion kinetics, which can be used for technological applications.
Labels: cs=0, phy=1, math=0, stat=0, q-bio=0, q-fin=0
Involutive bordered Floer homology
We give a bordered extension of involutive HF-hat and use it to give an algorithm to compute involutive HF-hat for general 3-manifolds. We also explain how the mapping class group action on HF-hat can be computed using bordered Floer homology. As applications, we prove that involutive HF-hat satisfies a surgery exact triangle and compute HFI-hat of the branched double covers of all 10-crossing knots.
Labels: cs=0, phy=0, math=1, stat=0, q-bio=0, q-fin=0
Orbital Evolution, Activity, and Mass Loss of Comet C/1995 O1 (Hale-Bopp). I. Close Encounter with Jupiter in Third Millennium BCE and Effects of Outgassing on the Comet's Motion and Physical Properties
This comprehensive study of comet C/1995 O1 focuses first on investigating its orbital motion over a period of 17.6 yr (1993-2010). The comet is suggested to have approached Jupiter to 0.005 AU on -2251 November 7, in general conformity with Marsden's (1999) proposal of a Jovian encounter nearly 4300 yr ago. The variations of sizable nongravitational effects with heliocentric distance correlate with the evolution of outgassing, asymmetric relative to perihelion. The future orbital period will shorten to ~1000 yr because of orbital-cascade resonance effects. We find that the sublimation curves of parent molecules are fitted with the type of a law used for the nongravitational acceleration, determine their orbit-integrated mass loss, and conclude that the share of water ice was at most 57%, and possibly less than 50%, of the total outgassed mass. Even though organic parent molecules (many still unidentified) had very low abundances relative to water individually, their high molar mass and sheer number made them, summarily, important potential mass contributors to the total production of gas. The mass loss of dust per orbit exceeded that of water ice by a factor of ~12, a dust loading high enough to imply a major role for heavy organic molecules of low volatility in accelerating the minuscule dust particles in the expanding halos to terminal velocities as high as 0.7 km s^{-1}. In Part II, the comet's nucleus will be modeled as a compact cluster of massive fragments to conform to the integrated nongravitational effect.
Labels: cs=0, phy=1, math=0, stat=0, q-bio=0, q-fin=0
A Note on Property Testing Sum of Squares and Multivariate Polynomial Interpolation
In this paper, we investigate property testing whether or not a degree d multivariate polynomial is a sum of squares or is far from a sum of squares. We show that if we require that the property tester always accepts YES instances and uses random samples, $n^{\Omega(d)}$ samples are required, which is not much fewer than it would take to completely determine the polynomial. To prove this lower bound, we show that with high probability, multivariate polynomial interpolation matches arbitrary values on random points and the resulting polynomial has small norm. We then consider a particular polynomial which is non-negative yet not a sum of squares and use pseudo-expectation values to prove it is far from being a sum of squares.
Labels: cs=1, phy=0, math=0, stat=0, q-bio=0, q-fin=0
Closed-form mathematical expressions for the exponentiated Cauchy-Rayleigh distribution
The Cauchy-Rayleigh (CR) distribution has been successfully used to describe asymmetric and heavy-tail events from radar imagery. Employing such a model to describe lifetime data may then seem attractive, but some drawbacks arise: its probability density function does not cover non-modal behavior, and the CR hazard rate function (hrf) assumes only one form. To overcome this difficulty, we introduce an extended CR model, called the exponentiated Cauchy-Rayleigh (ECR) distribution. This model has two parameters and an hrf with decreasing, decreasing-increasing-decreasing and upside-down bathtub forms. In this paper, several closed-form mathematical expressions for the ECR model are derived: median, mode, probability weighted, log-, incomplete and order statistic moments and the Fisher information matrix. We propose three estimation procedures for the ECR parameters: maximum likelihood (ML), bias-corrected ML and percentile-based methods. A simulation study is done to assess the performance of the estimators. An application to survival times of heart problem patients illustrates the usefulness of the ECR model. Results point out that the ECR distribution may outperform classical lifetime models, such as the gamma, Birnbaum-Saunders, Weibull and log-normal laws, for heavy-tail data.
Labels: cs=0, phy=0, math=1, stat=1, q-bio=0, q-fin=0
HTEM data improve 3D modelling of aquifers in Paris Basin, France
In the Paris Basin, we evaluate how HTEM data complement the usual borehole, geological and deep seismic data used for modelling aquifer geometries. With these traditional data, depths between ca. 50 and 300m are often relatively ill-constrained, as most boreholes sample only the first tens of meters of the subsurface and petroleum seismic is blind shallower than ca. 300m. We have fully reprocessed and re-inverted 540km of flight lines of a SkyTEM survey of 2009, acquired on a 40x12km zone with 400m line spacing. The resistivity model is first "calibrated" with respect to ca. 50 boreholes available on the study area. Overall, the correlation between EM resistivity models and the hydrogeological horizons clearly shows that the geological units in which the aquifers are developed almost systematically correspond to a relative increase of resistivity, whatever the "background" resistivity environment and the lithology of the aquifer. In the 3D Geomodeller software, this allows interpreting 11 aquifer/aquitard layers along the flight lines and then jointly interpolating them in 3D along with the borehole data. The resulting model displays 3D aquifer geometries consistent with the SIGES "reference" regional hydrogeological model and improves on it in between the boreholes and in the 50-300m depth range.
Labels: cs=0, phy=1, math=0, stat=0, q-bio=0, q-fin=0
Implementing GraphQL as a Query Language for Deductive Databases in SWI-Prolog Using DCGs, Quasi Quotations, and Dicts
The methods to access large relational databases in a distributed system are well established: the relational query language SQL often serves as a language for data access and manipulation, and in addition public interfaces are exposed using communication protocols like REST. Similarly to REST, GraphQL is the query protocol of an application layer developed by Facebook. It provides a unified interface between the client and the server for data fetching and manipulation. Using GraphQL's type system, it is possible to specify data handling of various sources and to combine, e.g., relational with NoSQL databases. In contrast to REST, GraphQL provides a single API endpoint and supports flexible queries over linked data. GraphQL can also be used as an interface for deductive databases. In this paper, we give an introduction of GraphQL and a comparison to REST. Using language features recently added to SWI-Prolog 7, we have developed the Prolog library GraphQL.pl, which implements the GraphQL type system and query syntax as a domain-specific language with the help of definite clause grammars (DCG), quasi quotations, and dicts. Using our library, the type system created for a deductive database can be validated, while the query system provides a unified interface for data access and introspection.
Labels: cs=1, phy=0, math=0, stat=0, q-bio=0, q-fin=0
Social Network based Short-Term Stock Trading System
This paper proposes a novel adaptive algorithm for the automated short-term trading of financial instruments. The algorithm adopts a semantic sentiment analysis technique to inspect Twitter posts and to use them to predict the behaviour of the stock market. Indeed, the algorithm is specifically developed to take advantage of both the sentiment and the past values of a certain financial instrument in order to choose the best investment decision. This allows the algorithm to ensure the maximization of the obtainable profits by trading on the stock market. We have conducted an investment simulation and compared the performance of our proposal with a well-known benchmark (DJTATO index) and with the optimal results, in which an investor knows in advance the future price of a product. The results show that our approach outperforms the benchmark and achieves a performance score close to the optimal result.
Labels: cs=1, phy=0, math=0, stat=0, q-bio=0, q-fin=1
New Determinant Expressions of the Multi-indexed Orthogonal Polynomials in Discrete Quantum Mechanics
The multi-indexed orthogonal polynomials (the Meixner, little $q$-Jacobi (Laguerre), ($q$-)Racah, Wilson, Askey-Wilson types) satisfying second order difference equations were constructed in discrete quantum mechanics. They are polynomials in the sinusoidal coordinates $\eta(x)$ ($x$ is the coordinate of quantum system) and expressed in terms of the Casorati determinants whose matrix elements are functions of $x$ at various points. By using shape invariance properties, we derive various equivalent determinant expressions, especially those whose matrix elements are functions of the same point $x$. Except for the ($q$-)Racah case, they can be expressed in terms of $\eta$ only, without explicit $x$-dependence.
Labels: cs=0, phy=1, math=1, stat=0, q-bio=0, q-fin=0
Rethinking generalization requires revisiting old ideas: statistical mechanics approaches and complex learning behavior
We describe an approach to understand the peculiar and counterintuitive generalization properties of deep neural networks. The approach involves going beyond worst-case theoretical capacity control frameworks that have been popular in machine learning in recent years to revisit old ideas in the statistical mechanics of neural networks. Within this approach, we present a prototypical Very Simple Deep Learning (VSDL) model, whose behavior is controlled by two control parameters, one describing an effective amount of data, or load, on the network (that decreases when noise is added to the input), and one with an effective temperature interpretation (that increases when algorithms are early stopped). Using this model, we describe how a very simple application of ideas from the statistical mechanics theory of generalization provides a strong qualitative description of recently-observed empirical results regarding the inability of deep neural networks not to overfit training data, discontinuous learning and sharp transitions in the generalization properties of learning algorithms, etc.
Labels: cs=1, phy=0, math=0, stat=1, q-bio=0, q-fin=0
Towards an Understanding of the Effects of Augmented Reality Games on Disaster Management
Location-based augmented reality games have entered the mainstream with the nearly overnight success of Niantic's Pokémon Go. Unlike traditional video games, the fact that players of such games carry out actions in the external, physical world to accomplish in-game objectives means that the large-scale adoption of such games motivates people, en masse, to do things and go places they otherwise would not have, in unprecedented ways. The social implications of such mass-mobilisation of individual players are, in general, difficult to anticipate or characterise, even for the short term. In this work, we focus on disaster relief, and the short- and long-term implications that a proliferation of AR games like Pokémon Go may have in disaster-prone regions of the world. We take a distributed cognition approach and focus on one natural disaster-prone region of New Zealand, the city of Wellington.
Labels: cs=1, phy=0, math=0, stat=0, q-bio=0, q-fin=0
Integrating a Global Induction Mechanism into a Sequent Calculus
Most interesting proofs in mathematics contain an inductive argument which requires an extension of the LK-calculus to formalize. The most commonly used calculi for induction contain a separate rule or axiom which reduces the valid proof theoretic properties of the calculus. To the best of our knowledge, there are no such calculi which allow cut-elimination to a normal form with the subformula property, i.e. every formula occurring in the proof is a subformula of the end sequent. Proof schemata are a variant of LK-proofs able to simulate induction by linking proofs together. There exists a schematic normal form which has comparable proof theoretic behaviour to normal forms with the subformula property. However, a calculus for the construction of proof schemata does not exist. In this paper, we introduce a calculus for proof schemata and prove soundness and completeness with respect to a fragment of the inductive arguments formalizable in Peano arithmetic.
Labels: cs=1, phy=0, math=0, stat=0, q-bio=0, q-fin=0
An analytic resolution to the competition between Lyman-Werner radiation and metal winds in direct collapse black hole hosts
A near pristine atomic cooling halo close to a star forming galaxy offers a natural pathway for forming massive direct collapse black hole (DCBH) seeds which could be the progenitors of the $z>6$ redshift quasars. The close proximity of the haloes enables a sufficient Lyman-Werner flux to effectively dissociate H$_2$ in the core of the atomic cooling halo. A mild background may also be required to delay star formation in the atomic cooling halo, often attributed to distant background galaxies. In this letter we investigate the impact of metal enrichment from both the background galaxies and the close star forming galaxy under extremely unfavourable conditions such as instantaneous metal mixing. We find that within the time window of DCBH formation, the level of enrichment never exceeds the critical threshold (Z$_{cr} \sim 1 \times 10^{-5} \ \rm Z_{\odot})$, and attains a maximum metallicity of Z $\sim 2 \times 10^{-6} \ \rm Z_{\odot}$. As the system evolves, the metallicity eventually exceeds the critical threshold, long after the DCBH has formed.
Labels: cs=0, phy=1, math=0, stat=0, q-bio=0, q-fin=0
Driving Interactive Graph Exploration Using 0-Dimensional Persistent Homology Features
Graphs are commonly used to encode relationships among entities, yet their abstractness makes them incredibly difficult to analyze. Node-link diagrams are a popular method for drawing graphs. Classical techniques for node-link diagrams include various layout methods that rely on derived information to position points, which often lack interactive exploration functionalities, and force-directed layouts, which ignore global structures of the graph. This paper addresses the graph drawing challenge by leveraging topological features of a graph as derived information for interactive graph drawing. We first discuss extracting topological features from a graph using persistent homology. We then introduce an interactive persistence barcode to study the substructures of a force-directed graph layout; in particular, we add contracting and repulsing forces guided by the 0-dimensional persistent homology features. Finally, we demonstrate the utility of our approach across three datasets.
Labels: cs=1, phy=0, math=0, stat=0, q-bio=0, q-fin=0
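The 0-dimensional persistence used above has a compact algorithmic core: process edges in order of increasing weight with union-find; every merge of two components kills one bar. A minimal self-contained version:

```python
# 0-dimensional persistence of a weighted graph via Kruskal + union-find.
def zero_dim_persistence(n_nodes, edges):
    """edges: (weight, u, v) triples; returns the finite bar death times."""
    parent = list(range(n_nodes))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path halving
            a = parent[a]
        return a
    deaths = []
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:                        # two components merge:
            parent[ru] = rv                 # one 0-dim class dies at w
            deaths.append(w)
    return deaths

print(zero_dim_persistence(4, [(0.3, 0, 1), (0.5, 1, 2), (0.9, 0, 2), (1.2, 2, 3)]))
# -> [0.3, 0.5, 1.2]; the 0.9 edge closes a cycle, so no 0-dim event
```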
Identification of Conduit Countries and Community Structures in the Withholding Tax Networks
Due to economic globalization, each country's economic law, including tax laws and tax treaties, has been forced to work as a single network. However, each jurisdiction (country or region) has not made its economic law under the assumption that its law functions as an element of one network, and this has brought unexpected results. We argue that these results are exactly international tax avoidance. To contribute to the solution of international tax avoidance, we tried to investigate which part of the network is vulnerable. Specifically, focusing on treaty shopping, which is one of the international tax avoidance methods, we attempted to identify which jurisdictions are likely to be used for treaty shopping from tax liabilities, and the relationship between jurisdictions which are likely to be used for treaty shopping and others. For that purpose, based on withholding tax rates imposed on dividends, interest, and royalties by jurisdictions, we produced weighted multiple directed graphs, computed the centralities and detected the communities. As a result, we clarified the jurisdictions that are likely to be used for treaty shopping and pointed out that there are community structures. The results of this study suggested that few jurisdictions need to introduce more regulations for prevention of treaty abuse worldwide.
Labels: cs=0, phy=0, math=0, stat=0, q-bio=0, q-fin=1
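The pipeline in the abstract above (weighted directed graphs, centralities, community detection) can be sketched with networkx; the jurisdictions and rates below are invented for illustration only.

```python
# Toy withholding-tax network: centrality + communities.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

G = nx.DiGraph()
# edge weight = withholding tax rate on dividends from u to v (toy values)
G.add_weighted_edges_from([("NL", "US", 0.05), ("US", "NL", 0.05),
                           ("JP", "NL", 0.05), ("JP", "US", 0.10),
                           ("KY", "JP", 0.20), ("NL", "KY", 0.00)])

print(nx.pagerank(G, weight="weight"))              # one possible centrality
comms = greedy_modularity_communities(G.to_undirected(), weight="weight")
print([sorted(c) for c in comms])
```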
A comprehensive study of batch construction strategies for recurrent neural networks in MXNet
In this work we compare different batch construction methods for mini-batch training of recurrent neural networks. While popular implementations like TensorFlow and MXNet suggest a bucketing approach to improve the parallelization capabilities of the recurrent training process, we propose a simple ordering strategy that arranges the training sequences in a stochastic alternatingly sorted way. We compare our method to sequence bucketing as well as various other batch construction strategies on the CHiME-4 noisy speech recognition corpus. The experiments show that our alternated sorting approach is able to compete both in training time and recognition performance while being conceptually simpler to implement.
Labels: cs=1, phy=0, math=0, stat=1, q-bio=0, q-fin=0
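One plausible reading of the "stochastic alternatingly sorted" ordering above: sort sequences by length with random jitter (batches stay roughly homogeneous, so there is little padding, yet their composition varies per epoch) and alternate the sort direction between epochs. This interpretation is ours; the paper's exact recipe may differ.

```python
# Stochastic, alternately sorted batch construction (one interpretation).
import random

def make_batches(seqs, batch_size, epoch, jitter=4, seed=0):
    rnd = random.Random(seed + epoch)
    keyed = sorted(seqs,
                   key=lambda s: len(s) + rnd.uniform(-jitter, jitter),
                   reverse=(epoch % 2 == 1))        # alternate direction
    return [keyed[i:i + batch_size] for i in range(0, len(keyed), batch_size)]

seqs = [[0] * n for n in (3, 17, 5, 12, 8, 21, 4, 9)]
for batch in make_batches(seqs, 3, epoch=0):
    print([len(s) for s in batch])                  # similar lengths per batch
```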
On a class of shift-invariant subspaces of the Drury-Arveson space
In the Drury-Arveson space, we consider the subspace of functions whose Taylor coefficients are supported in the complement of a set $Y\subset\mathbb{N}^d$ with the property that $Y+e_j\subset Y$ for all $j=1,\dots,d$. This is an easy example of a shift-invariant subspace, which can be considered as a RKHS in its own right, with a kernel that can be explicitly calculated. Moreover, every such space can be seen as an intersection of kernels of Hankel operators, whose symbols can be explicitly calculated as well. Finally, this is the right space on which Drury's inequality can be optimally adapted to a sub-family of the commuting and contractive operators originally considered by Drury.
Labels: cs=0, phy=0, math=1, stat=0, q-bio=0, q-fin=0
Airborne gamma-ray spectroscopy for modeling cosmic radiation and effective dose in the lower atmosphere
In this paper we present the results of a $\sim$5 hour airborne gamma-ray survey carried out over the Tyrrhenian sea in which the height range (77-3066) m has been investigated. Gamma-ray spectroscopy measurements have been performed by using the AGRS_16L detector, a module of four 4L NaI(Tl) crystals. The experimental setup was mounted on the Radgyro, a prototype aircraft designed for multisensorial acquisitions in the field of proximal remote sensing. By acquiring high-statistics spectra over the sea (i.e. in the absence of signals having geological origin) and by spanning a wide range of altitudes, it has been possible to split the measured count rate into a constant aircraft component and a cosmic component exponentially increasing with increasing height. Monitoring the count rate of purely cosmic origin in the >3 MeV energy region allowed us to infer the background count rates in the $^{40}$K, $^{214}$Bi and $^{208}$Tl photopeaks, which need to be subtracted in processing airborne gamma-ray data in order to estimate the potassium, uranium and thorium abundances in the ground. Moreover, a calibration procedure has been carried out by implementing the CARI-6P and EXPACS dosimetry tools, according to which the annual cosmic effective dose to the human population has been linearly related to the measured cosmic count rates.
Labels: cs=0, phy=1, math=0, stat=0, q-bio=0, q-fin=0
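The count-rate decomposition described above, a constant aircraft term plus an exponentially height-dependent cosmic term, is a one-line nonlinear fit; the data below are synthetic.

```python
# Fit rate(h) = A + B * exp(h / lam) over the survey's height range.
import numpy as np
from scipy.optimize import curve_fit

def model(h, A, B, lam):
    return A + B * np.exp(h / lam)        # aircraft + cosmic components

h = np.linspace(77, 3066, 30)             # heights (m), as in the survey
rng = np.random.default_rng(0)
counts = rng.poisson(model(h, 40.0, 10.0, 1500.0)).astype(float)

popt, _ = curve_fit(model, h, counts, p0=(30.0, 5.0, 1000.0))
print("A, B, lam =", np.round(popt, 1))   # recovers ~(40, 10, 1500)
```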
Search for axions in streaming dark matter
A new search strategy for the detection of the elusive dark matter (DM) axion is proposed. The idea is based on streaming DM axions, whose flux might get temporally enormously enhanced due to gravitational lensing. This can happen if the Sun or some planet (including the Moon) is found along the direction of a DM stream propagating towards the Earth location. The experimental requirements to the axion haloscope are a wide-band performance combined with a fast axion rest mass scanning mode, which are feasible. Once both conditions have been implemented in a haloscope, the axion search can continue parasitically almost as before. Interestingly, some new DM axion detectors are operating wide-band by default. In order not to miss the actually unpredictable timing of a potential short duration signal, a network of co-ordinated axion antennae is required, preferentially distributed world-wide. The reasoning presented here for the axions applies to some degree also to any other DM candidates like the WIMPs.
Labels: cs=0, phy=1, math=0, stat=0, q-bio=0, q-fin=0
Faster integer and polynomial multiplication using cyclotomic coefficient rings
We present an algorithm that computes the product of two $n$-bit integers in $O(n\log n\,(4\sqrt{2})^{\log^* n})$ bit operations. Previously, the best known bound was $O(n\log n\,6^{\log^* n})$. We also prove that for a fixed prime $p$, polynomials in $\mathbb{F}_p[X]$ of degree $n$ may be multiplied in $O(n\log n\,4^{\log^* n})$ bit operations; the previous best bound was $O(n\log n\,8^{\log^* n})$.
Labels: cs=1, phy=0, math=0, stat=0, q-bio=0, q-fin=0
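The improvement sits in the bases of the $(\cdot)^{\log^* n}$ factors quoted above:

```latex
4\sqrt{2} \approx 5.657 < 6, \qquad 4 < 8,
```

so both new bounds strictly improve their predecessors.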
Reward Maximization Under Uncertainty: Leveraging Side-Observations on Networks
We study the stochastic multi-armed bandit (MAB) problem in the presence of side-observations across actions that occur as a result of an underlying network structure. In our model, a bipartite graph captures the relationship between actions and a common set of unknowns such that choosing an action reveals observations for the unknowns that it is connected to. This models a common scenario in online social networks where users respond to their friends' activity, thus providing side information about each other's preferences. Our contributions are as follows: 1) We derive an asymptotic lower bound (with respect to time) as a function of the bipartite network structure on the regret of any uniformly good policy that achieves the maximum long-term average reward. 2) We propose two policies - a randomized policy; and a policy based on the well-known upper confidence bound (UCB) policies - both of which explore each action at a rate that is a function of its network position. We show, under mild assumptions, that these policies achieve the asymptotic lower bound on the regret up to a multiplicative factor, independent of the network structure. Finally, we use numerical examples on a real-world social network and a routing example network to demonstrate the benefits obtained by our policies over other existing policies.
Labels: cs=1, phy=0, math=0, stat=1, q-bio=0, q-fin=0
Verifying Safety of Functional Programs with Rosette/Unbound
The goal of unbounded program verification is to discover an inductive invariant that safely over-approximates all possible program behaviors. Functional languages featuring higher-order and recursive functions are becoming more popular due to the domain-specific needs of big data analytics, web, and security. We present Rosette/Unbound, the first program verifier for Racket, exploiting an automated constrained Horn clause solver on its backend. One of the key features of Rosette/Unbound is the ability to synchronize recursive computations over the same inputs, allowing it to verify programs that iterate over unbounded data streams multiple times. Rosette/Unbound is successfully evaluated on a set of non-trivial recursive and higher-order functional programs.
Labels: cs=1, phy=0, math=0, stat=0, q-bio=0, q-fin=0
Extended Formulations for Polytopes of Regular Matroids
We present a simple proof of the fact that the base (and independence) polytope of a rank $n$ regular matroid over $m$ elements has an extension complexity $O(mn)$.
Labels: cs=1, phy=0, math=1, stat=0, q-bio=0, q-fin=0
Multiscale Change-point Segmentation: Beyond Step Functions
Modern multiscale type segmentation methods are known to detect multiple change-points with high statistical accuracy, while allowing for fast computation. Underpinning theory has been developed mainly for models that assume the signal to be a piecewise constant function. In this paper this will be extended to certain function classes beyond such step functions in a nonparametric regression setting, revealing certain multiscale segmentation methods to be robust to deviations from such piecewise constant functions. Our main finding is the adaptation over such function classes for a universal thresholding, which includes bounded variation functions, and (piecewise) Hölder functions of smoothness order $ 0 < \alpha \le1$ as special cases. From this we derive statistical guarantees on feature detection in terms of jumps and modes. Another key finding is that these multiscale segmentation methods perform nearly (up to a log-factor) as well as the oracle piecewise constant segmentation estimator (with known jump locations), and the best piecewise constant approximants of the (unknown) true signal. Theoretical findings are examined by various numerical simulations.
Labels: cs=0, phy=0, math=1, stat=1, q-bio=0, q-fin=0
Data Motif-based Proxy Benchmarks for Big Data and AI Workloads
For the architecture community, reasonable simulation time is a strong requirement in addition to performance data accuracy. However, emerging big data and AI workloads are huge at the binary size level and prohibitively expensive to run on cycle-accurate simulators. The concept of a data motif, identified as a class of units of computation performed on initial or intermediate data, is the first step towards building proxy benchmarks to mimic real-world big data and AI workloads. However, there is no practical way to construct a proxy benchmark based on data motifs to help simulation-based research. In this paper, we embark on a study to bridge the gap between data motifs and a practical proxy benchmark. We propose a data motif-based proxy benchmark generating methodology based on machine learning, which combines data motifs with different weights to mimic big data and AI workloads. Furthermore, we implement various data motifs using light-weight stacks and apply the methodology to five real-world workloads to construct a suite of proxy benchmarks, considering the data types, patterns, and distributions. The evaluation results show that our proxy benchmarks shorten the execution time by a factor of several hundred on real systems while maintaining average system and micro-architecture performance data accuracy above 90%, even when changing the input data sets or cluster configurations. Moreover, the generated proxy benchmarks reflect consistent performance trends across different architectures. To facilitate the community, we will release the proxy benchmarks on the project homepage this http URL.
Labels: cs=1, phy=0, math=0, stat=0, q-bio=0, q-fin=0
The neighborhood lattice for encoding partial correlations in a Hilbert space
Neighborhood regression has been a successful approach in graphical and structural equation modeling, with applications to learning undirected and directed graphical models. We extend these ideas by defining and studying an algebraic structure called the neighborhood lattice based on a generalized notion of neighborhood regression. We show that this algebraic structure has the potential to provide an economic encoding of all conditional independence statements in a Gaussian distribution (or conditional uncorrelatedness in general), even in the cases where no graphical model exists that could "perfectly" encode all such statements. We study the computational complexity of computing these structures and show that under a sparsity assumption, they can be computed in polynomial time, even in the absence of the assumption of perfectness to a graph. On the other hand, assuming perfectness, we show how these neighborhood lattices may be "graphically" computed using the separation properties of the so-called partial correlation graph. We also draw connections with directed acyclic graphical models and Bayesian networks. We derive these results using an abstract generalization of partial uncorrelatedness, called partial orthogonality, which allows us to use algebraic properties of projection operators on Hilbert spaces to significantly simplify and extend existing ideas and arguments. Consequently, our results apply to a wide range of random objects and data structures, such as random vectors, data matrices, and functions.
Labels: cs=1, phy=0, math=1, stat=1, q-bio=0, q-fin=0
The 2-adic complexity of a class of binary sequences with almost optimal autocorrelation
Pseudo-random sequences with good statistical properties, such as low autocorrelation, high linear complexity and large 2-adic complexity, have been applied in stream ciphers. In general, it is difficult to give both the linear complexity and the 2-adic complexity of a periodic binary sequence. Cai and Ding \cite{Cai Ying} gave a class of sequences with almost optimal autocorrelation by constructing almost difference sets. Wang \cite{Wang Qi} proved that one type of those sequences by Cai and Ding has large linear complexity. Sun et al. \cite{Sun Yuhua} showed that another type of sequences by Cai and Ding also has large linear complexity. Additionally, Sun et al. also generalized the construction by Cai and Ding using a $d$-form function with the difference-balanced property. In this paper, we first give the detailed autocorrelation distribution of the sequences generalized from Cai and Ding \cite{Cai Ying} by Sun et al. \cite{Sun Yuhua}. Then, inspired by the method of Hu \cite{Hu Honggang}, we analyse their 2-adic complexity and give a lower bound on the 2-adic complexity of these sequences. Our results show that the 2-adic complexity of these sequences is at least $N-\mathrm{log}_2\sqrt{N+1}$ and that it reaches $N-1$ in many cases, which is large enough to resist the rational approximation algorithm (RAA) for feedback with carry shift registers (FCSRs).
Labels: cs=1, phy=0, math=1, stat=0, q-bio=0, q-fin=0
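A quick numerical reading of the lower bound quoted above: $N-\log_2\sqrt{N+1}$ is already very close to the period $N$.

```python
# Evaluate the 2-adic complexity lower bound N - log2(sqrt(N+1)).
import math

for N in (63, 1023, 4095):                     # example periods
    print(N, N - math.log2(math.sqrt(N + 1)))  # e.g. 63 -> 60.0
```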
Nesterov's Smoothing Technique and Minimizing Differences of Convex Functions for Hierarchical Clustering
A bilevel hierarchical clustering model is commonly used in designing optimal multicast networks. In this paper, we consider two different formulations of the bilevel hierarchical clustering problem, a discrete optimization problem which can be shown to be NP-hard. Our approach is to reformulate the problem as a continuous optimization problem by making some relaxations on the discreteness conditions. Then Nesterov's smoothing technique and a numerical algorithm for minimizing differences of convex functions called the DCA are applied to cope with the nonsmoothness and nonconvexity of the problem. Numerical examples are provided to illustrate our method.
Labels: cs=0, phy=0, math=1, stat=0, q-bio=0, q-fin=0
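The DCA mechanics referred to above are easy to see on a one-dimensional toy: for $f(x)=x^4-x^2$ with the DC split $g(x)=x^4$, $h(x)=x^2$ (both convex), each DCA step solves $\min_x g(x) - h'(x_k)\,x$, i.e. $4x^3 = 2x_k$. This illustrates the iteration only, not the paper's clustering model.

```python
# DCA on f(x) = x^4 - x^2, split as g - h with g(x) = x^4, h(x) = x^2.
x = 2.0
for _ in range(40):
    x = (x / 2.0) ** (1.0 / 3.0)          # closed-form subproblem solution
print(x)                                   # -> 0.7071... = 1/sqrt(2), stationary
```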
Minimal solutions to generalized Lambda-semiflows and gradient flows in metric spaces
Generalized Lambda-semiflows are an abstraction of semiflows with non-periodic solutions, for which there may be more than one solution corresponding to given initial data. A select class of solutions to generalized Lambda-semiflows is introduced. It is proved that such minimal solutions are unique for given ranges and generate all other solutions by time reparametrization. Special qualities of minimal solutions are shown. The concept of minimal solutions is applied to gradient flows in metric spaces and generalized semiflows. Generalized semiflows have been introduced by Ball.
Labels: cs=0, phy=0, math=1, stat=0, q-bio=0, q-fin=0
Newton-Type Methods for Non-Convex Optimization Under Inexact Hessian Information
We consider variants of trust-region and cubic regularization methods for non-convex optimization, in which the Hessian matrix is approximated. Under mild conditions on the inexact Hessian, and using approximate solutions of the corresponding sub-problems, we provide iteration complexity bounds to achieve $ \epsilon $-approximate second-order optimality which have been shown to be tight. Our Hessian approximation conditions constitute a major relaxation over the existing ones in the literature. Consequently, we are able to show that such mild conditions allow for the construction of the approximate Hessian through various random sampling methods. In this light, we consider the canonical problem of finite-sum minimization, provide appropriate uniform and non-uniform sub-sampling strategies to construct such Hessian approximations, and obtain optimal iteration complexity for the corresponding sub-sampled trust-region and cubic regularization methods.
Labels: cs=1, phy=0, math=0, stat=1, q-bio=0, q-fin=0
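The sub-sampling idea above can be sketched on a convex toy problem: take the full gradient, estimate the Hessian from a uniform sub-sample of the finite sum, and damp the linear solve in place of an explicit trust region. The sizes and damping constant are illustrative assumptions.

```python
# Sub-sampled Newton-type step for f(x) = (1/n) * ||A x||^2 (minimizer x = 0).
import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 5
A = rng.normal(size=(n, d))

grad = lambda x: 2 * A.T @ (A @ x) / n     # full gradient

def hess_sub(m=100):                       # Hessian from a random sub-sample
    idx = rng.choice(n, size=m, replace=False)
    return 2 * A[idx].T @ A[idx] / m

x = rng.normal(size=d)
for _ in range(10):
    H = hess_sub() + 0.1 * np.eye(d)       # damping plays the trust-region role
    x -= np.linalg.solve(H, grad(x))
print(np.linalg.norm(x))                   # -> near 0
```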
$G^1$-smooth splines on quad meshes with 4-split macro-patch elements
We analyze the space of differentiable functions on a quad-mesh $\mathcal{M}$, which are composed of 4-split spline macro-patch elements on each quadrangular face. We describe explicit transition maps across shared edges that satisfy conditions which ensure that the space of differentiable functions is ample on a quad-mesh of arbitrary topology. These transition maps define a finite dimensional vector space of $G^{1}$ spline functions of bi-degree $\le (k,k)$ on each quadrangular face of $\mathcal{M}$. We determine the dimension of this space of $G^{1}$ spline functions for $k$ large enough and provide explicit constructions of basis functions attached respectively to vertices, edges and faces. This construction requires the analysis of the module of syzygies of univariate b-spline functions with b-spline function coefficients. New results on their generators and dimensions are provided. Examples of bases of $G^{1}$ splines of small degree for simple topological surfaces are detailed and illustrated by parametric surface constructions.
Labels: cs=0, phy=0, math=1, stat=0, q-bio=0, q-fin=0
BanglaLekha-Isolated: A Comprehensive Bangla Handwritten Character Dataset
Bangla handwriting recognition is becoming a very important issue nowadays. It is a potentially very important task, especially for the Bangla-speaking population of Bangladesh and West Bengal. Keeping that in mind, we introduce a comprehensive Bangla handwritten character dataset named BanglaLekha-Isolated. This dataset contains Bangla handwritten numerals, basic characters and compound characters. The dataset was collected from multiple geographical locations within Bangladesh and includes samples from a variety of age groups. This dataset can also be used for other classification problems, e.g., gender, age, and district. This is the largest dataset on Bangla handwritten characters yet.
Labels: cs=1, phy=0, math=0, stat=0, q-bio=0, q-fin=0
Estimates for maximal functions associated to hypersurfaces in $\Bbb R^3$ with height $h<2:$ Part I
In this article, we continue the study of the problem of $L^p$-boundedness of the maximal operator $M$ associated to averages along isotropic dilates of a given, smooth hypersurface $S$ of finite type in 3-dimensional Euclidean space. An essentially complete answer to this problem had been given about seven years ago by the last named two authors in joint work with M. Kempe for the case where the height $h$ of the given surface is at least two. In the present article, we turn to the case $h<2.$ More precisely, in this Part I, we study the case where $h<2,$ assuming that $S$ is contained in a sufficiently small neighborhood of a given point $x^0\in S$ at which both principal curvatures of $S$ vanish. Under these assumptions and a natural transversality assumption, we show that, as in the case where $h\ge 2,$ the critical Lebesgue exponent for the boundedness of $M$ remains $p_c=h,$ even though the proof of this result turns out to require new methods, some of which are inspired by the more recent work by the last named two authors on Fourier restriction to $S$. Results on the case where $h<2$ and exactly one principal curvature of $S$ does not vanish at $x^0$ will appear elsewhere.
0
0
1
0
0
0
Using Inertial Sensors for Position and Orientation Estimation
In recent years, MEMS inertial sensors (3D accelerometers and 3D gyroscopes) have become widely available due to their small size and low cost. Inertial sensor measurements are obtained at high sampling rates and can be integrated to obtain position and orientation information. These estimates are accurate on a short time scale, but suffer from integration drift over longer time scales. To overcome this issue, inertial sensors are typically combined with additional sensors and models. In this tutorial we focus on the signal processing aspects of position and orientation estimation using inertial sensors. We discuss different modeling choices and a selected number of important algorithms. The algorithms include optimization-based smoothing and filtering as well as computationally cheaper extended Kalman filter and complementary filter implementations. The quality of their estimates is illustrated using both experimental and simulated data.
1
0
0
0
0
0
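To make the cheaper filtering options mentioned in the preceding abstract concrete, here is a minimal one-axis complementary filter sketch in Python; the gain value and signal names are illustrative assumptions, not the tutorial's implementation.

    def complementary_filter(angle_prev, gyro_rate, accel_angle, dt, alpha=0.98):
        """Fuse an integrated gyroscope rate (accurate short-term, drifts
        slowly) with an accelerometer-derived angle (noisy but drift-free)."""
        # High-pass the integrated gyro signal, low-pass the accelerometer angle.
        return alpha * (angle_prev + gyro_rate * dt) + (1.0 - alpha) * accel_angle

The single gain alpha sets the crossover between trusting the gyroscope on short time scales and the accelerometer on long ones, which is exactly the integration-drift trade-off described above.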
Heisenberg equation for a nonrelativistic particle on a hypersurface: from the centripetal force to a curvature induced force
In classical mechanics, a nonrelativistic particle constrained to an $(N-1)$-dimensional curved hypersurface embedded in $N$-dimensional flat space experiences the centripetal force only. In quantum mechanics, the situation is totally different because of the presence of the geometric potential. We demonstrate that the motion of the quantum particle is "driven" not only by the centripetal force, but also by a curvature-induced force proportional to the Laplacian of the mean curvature, which is fundamental in interface physics, causing curvature-driven interface evolution.
0
1
0
0
0
0
On constraining projections of future climate using observations and simulations from multiple climate models
A new Bayesian framework is presented that can constrain projections of future climate using historical observations by exploiting robust estimates of emergent relationships between multiple climate models. We argue that emergent relationships can be interpreted as constraints on model inadequacy, but that projections may be biased if we do not account for internal variability in climate model projections. We extend the previously proposed coexchangeable framework to account for natural variability in the Earth system and internal variability simulated by climate models. A detailed theoretical comparison with previous multi-model projection frameworks is provided. The proposed framework is applied to projecting surface temperature in the Arctic at the end of the 21st century. A subset of the available climate models is selected in order to satisfy the assumptions of the framework. All available initial-condition runs from each model are utilized in order to maximize the utility of the data. Projected temperatures in some regions are more than 2°C lower when constrained by historical observations. The uncertainty about the climate response is reduced by up to 30% where strong constraints exist.
0
0
0
1
0
0
Higher order molecular organisation as a source of biological function
Molecular interactions have widely been modelled as networks. The local wiring patterns around molecules in molecular networks are linked with their biological functions. However, networks model only pairwise interactions between molecules and cannot explicitly and directly capture the higher order molecular organisation, such as protein complexes and pathways. Hence, we ask if hypergraphs (hypernetworks), which directly capture entire complexes and pathways along with protein-protein interactions (PPIs), carry additional functional information beyond what can be uncovered from networks of pairwise molecular interactions. The mathematical formalism of a hypergraph has long been known, but not often used in studying molecular networks due to the lack of sophisticated algorithms for mining the underlying biological information hidden in the wiring patterns of molecular systems modelled as hypernetworks. We propose a new, multi-scale, protein interaction hypernetwork model that utilizes hypergraphs to capture different scales of protein organization, including PPIs, protein complexes and pathways. In analogy to graphlets, we introduce hypergraphlets, small, connected, non-isomorphic, induced sub-hypergraphs of a hypergraph, to quantify the local wiring patterns of these multi-scale molecular hypergraphs and to mine them for new biological information. We apply them to model the multi-scale protein networks of baker's yeast and human and show that the higher order molecular organisation captured by these hypergraphs is strongly related to the underlying biology. Importantly, we demonstrate that our new models and data mining tools reveal different, but complementary, biological information compared to classical PPI networks. We apply our hypergraphlets to successfully predict biological functions of uncharacterised proteins.
0
0
0
0
1
0
The Massive CO White Dwarf in the Symbiotic Recurrent Nova RS Ophiuchi
If accreting white dwarfs (WDs) in binary systems are to produce type Ia supernovae (SNIa), they must grow to nearly the Chandrasekhar mass and ignite carbon burning. Proving conclusively that a WD has grown substantially since its birth is a challenging task. Slow accretion of hydrogen inevitably leads to the erosion, rather than the growth, of WDs. Rapid hydrogen accretion does lead to growth of a helium layer, due to both decreased degeneracy and the inhibition of mixing of the accreted hydrogen with the underlying WD. However, until recently, simulations of helium-accreting WDs all claimed to show the explosive ejection of a helium envelope once it exceeded $\sim 10^{-1}\, \rm M_{\odot}$. Because CO WDs cannot be born with masses in excess of $\sim 1.1\, \rm M_{\odot}$, any such object in excess of $\sim 1.2\, \rm M_{\odot}$ must have grown substantially. We demonstrate that the WD in the symbiotic nova RS Oph is in the mass range 1.2-1.4 M$_{\odot}$. We compare UV spectra of RS Oph with those of novae with ONe WDs, and with novae erupting on CO WDs. The RS Oph WD is clearly made of CO, demonstrating that it has grown substantially since birth. It is a prime candidate to eventually produce an SNIa.
0
1
0
0
0
0
Semi-equivelar maps on the torus are Archimedean
If the face-cycles at all the vertices in a map on a surface are of the same type then the map is called semi-equivelar. There are eleven types of Archimedean tilings on the plane. All the Archimedean tilings are semi-equivelar maps. If a map $X$ on the torus is a quotient of an Archimedean tiling on the plane then the map $X$ is semi-equivelar. We show that each semi-equivelar map on the torus is a quotient of an Archimedean tiling on the plane. Vertex-transitive maps are semi-equivelar maps. We know that four types of semi-equivelar maps on the torus are always vertex-transitive and there are examples of the other seven types of semi-equivelar maps which are not vertex-transitive. We show that the number of ${\rm Aut}(Y)$-orbits of vertices for any semi-equivelar map $Y$ on the torus is at most six. In fact, the number of orbits is at most three except for one type of semi-equivelar map. Our bounds on the number of orbits are sharp.
0
0
1
0
0
0
Dynamics of Porous Dust Aggregates and Gravitational Instability of Their Disk
We consider the dynamics of porous icy dust aggregates in a turbulent gas disk and investigate the stability of the disk. We evaluate the random velocity of porous dust aggregates by considering their self-gravity, collisions, aerodynamic drag, and turbulent stirring and scattering due to gas. We extend our previous work by introducing the anisotropic velocity dispersion and the relaxation time of the random velocity. We find the minimum-mass solar nebula model to be gravitationally unstable if the turbulent viscosity parameter $\alpha$ is less than about $4 \times 10^{-3}$. The upper limit of $\alpha$ for the onset of gravitational instability is derived as a function of the disk parameters. We discuss the implications of the gravitational instability for planetesimal formation.
0
1
0
0
0
0
Localization landscape theory of disorder in semiconductors II: Urbach tails of disordered quantum well layers
Urbach tails in semiconductors are often associated with effects of compositional disorder. The Urbach tail observed in InGaN alloy quantum wells of solar cells and LEDs by biased photocurrent spectroscopy is shown to be characteristic of the ternary alloy disorder. The broadening of the absorption edge observed for quantum wells emitting from violet to green (indium content ranging from 0 to 28\%) corresponds to a typical Urbach energy of 20~meV. A 3D absorption model is developed based on a recent theory of disorder-induced localization which provides the effective potential seen by the localized carriers without having to resort to the solution of the Schrödinger equation in a disordered potential. This model incorporating compositional disorder accounts well for the experimental broadening of the Urbach tail of the absorption edge. For energies below the Urbach tail of the InGaN quantum wells, type-II well-to-barrier transitions are observed and modeled. This contribution to the below-bandgap absorption is particularly efficient in near-UV emitting quantum wells. When reverse biasing the device, the well-to-barrier below-bandgap absorption exhibits a red shift, while the Urbach tail corresponding to the absorption within the quantum wells is blue shifted, due to the partial compensation of the internal piezoelectric fields by the external bias. The good agreement between the measured Urbach tail and its modeling by the new localization theory demonstrates the applicability of the latter to compositional disorder effects in nitride semiconductors.
0
1
0
0
0
0
Global teleconnectivity structures of the El Niño-Southern Oscillation and large volcanic eruptions -- An evolving network perspective
Recent work has provided ample evidence that global climate dynamics at time-scales between multiple weeks and several years can be severely affected by the episodic occurrence of both internal (climatic) and external (non-climatic) perturbations. Here, we aim to improve our understanding of how regional to local disruptions of the "normal" state of the global surface air temperature field affect the corresponding global teleconnectivity structure. Specifically, we present an approach to quantify teleconnectivity based on different characteristics of functional climate network analysis. Subsequently, we apply this framework to study the impacts of different phases of the El Niño-Southern Oscillation (ENSO) as well as the three largest volcanic eruptions since the mid 20th century on the dominating spatiotemporal co-variability patterns of daily surface air temperatures. Our results confirm the existence of global effects of ENSO which result in episodic breakdowns of the hierarchical organization of the global temperature field. This is associated with the emergence of strong teleconnections. At more regional scales, similar effects are found after major volcanic eruptions. Taken together, the resulting time-dependent patterns of network connectivity allow us to trace the spatial extents of the dominating effects of both types of climate disruptions. We discuss possible links between these observations and general aspects of atmospheric circulation.
0
1
0
0
0
0
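As an illustration of the functional climate network construction mentioned in the preceding abstract, a minimal Python sketch follows (correlate gridded temperature anomaly series, threshold, build an adjacency matrix); the threshold value and array shapes are illustrative assumptions rather than the authors' exact procedure.

    import numpy as np

    def climate_network(anomalies, threshold=0.5):
        """anomalies: array of shape (n_nodes, n_times) of surface air
        temperature anomalies; returns a binary adjacency matrix linking
        grid points whose time series are strongly correlated."""
        corr = np.corrcoef(anomalies)        # pairwise Pearson correlations
        adj = np.abs(corr) >= threshold      # keep strong teleconnections
        np.fill_diagonal(adj, False)         # no self-links
        return adj.astype(int)

Network measures (degree, hierarchy, etc.) computed on such adjacency matrices over sliding time windows are what track the connectivity changes through ENSO phases and after eruptions.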
Unbiased Shrinkage Estimation
Shrinkage estimation usually reduces variance at the cost of bias. But when we care only about some parameters of a model, I show that we can reduce variance without incurring bias if we have additional information about the distribution of covariates. In a linear regression model with homoscedastic Normal noise, I consider shrinkage estimation of the nuisance parameters associated with control variables. For at least three control variables and exogenous treatment, I establish that the standard least-squares estimator is dominated with respect to squared-error loss in the treatment effect even among unbiased estimators and even when the target parameter is low-dimensional. I construct the dominating estimator by a variant of James-Stein shrinkage in a high-dimensional Normal-means problem. It can be interpreted as an invariant generalized Bayes estimator with an uninformative (improper) Jeffreys prior in the target parameter.
0
0
1
1
0
0
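For intuition on the shrinkage construction in the preceding abstract, here is a minimal Python sketch of the classical James-Stein rule in the Normal-means problem; the paper uses a variant of this in the nuisance block, so the exact estimator differs.

    import numpy as np

    def james_stein(y, sigma2=1.0):
        """Classical James-Stein shrinkage of the naive estimate y of a
        Normal mean vector toward the origin; for dimension >= 3 it
        dominates y under squared-error loss."""
        d = len(y)
        shrink = 1.0 - (d - 2) * sigma2 / np.dot(y, y)
        return max(shrink, 0.0) * y  # positive-part variant avoids sign flips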
Characterizing the Influence of Continuous Integration: Empirical Results from 250+ Open Source and Proprietary Projects
Continuous integration (CI) tools integrate code changes by automatically compiling, building, and executing test cases upon submission of code changes. Use of CI tools is getting increasingly popular, yet how proprietary projects reap the benefits of CI remains unknown. To investigate the influence of CI on software development, we analyze 150 open source software (OSS) projects and 123 proprietary projects. For OSS projects, we observe the expected benefits after CI adoption, e.g., improvements in bug and issue resolution. However, for the proprietary projects, we cannot make similar observations. Our findings indicate that adoption of CI alone might not be enough to improve the software development process. CI can be effective for software development if practitioners use CI's feedback mechanism efficiently, by applying the practice of making frequent commits. For our set of proprietary projects we observe that practitioners commit less frequently, and hence do not use CI effectively for obtaining feedback on the submitted code changes. Based on our findings, we recommend that industry practitioners adopt the best practices of CI, for example making frequent commits, to reap the benefits of CI tools.
1
0
0
0
0
0
Why Abeta42 Is Much More Toxic Than Abeta40
The amyloid precursor protein, with 770 amino acids, dimerizes and aggregates, as do its C-terminal 99-amino-acid fragment and the amyloid fragments of 40 and 42 amino acids. The titled question has been discussed extensively, and here it is addressed further using thermodynamic scaling theory to analyze mutational trends in structural factors and kinetics. Special attention is given to Familial Alzheimer's Disease mutations outside amyloid 42. The scaling analysis is connected to extensive docking simulations which included membranes, thereby confirming their results and extending them to the amyloid precursor.
0
0
0
0
1
0
A Polynomial Time Algorithm for Spatio-Temporal Security Games
An ever-important issue is protecting infrastructure and other valuable targets from a range of threats, from vandalism to theft to piracy to terrorism. The "defender" can rarely afford the needed resources for 100% protection. Thus, the key question is how to provide the best protection using the limited available resources. We study a practically important class of security games that is played out in space and time, with targets and "patrols" moving on a real line. A central open question here is whether the Nash equilibrium (i.e., the minimax strategy of the defender) can be computed in polynomial time. We resolve this question in the affirmative. Our algorithm runs in time polynomial in the input size, and only polylogarithmic in the number of possible patrol locations (M). Further, we provide a continuous extension in which patrol locations can take arbitrary real values. Prior work obtained polynomial-time algorithms only under a substantial assumption, e.g., a constant number of rounds. Further, all these algorithms have running times polynomial in M, which can be very large.
1
0
0
0
0
0
TIDBD: Adapting Temporal-difference Step-sizes Through Stochastic Meta-descent
In this paper, we introduce a method for adapting the step-sizes of temporal difference (TD) learning. The performance of TD methods often depends on well-chosen step-sizes, yet few algorithms have been developed for setting the step-size automatically for TD learning. An important limitation of current methods is that they adapt a single step-size shared by all the weights of the learning system. A vector step-size enables greater optimization by specifying parameters on a per-feature basis. Furthermore, adapting parameters at different rates has the added benefit of being a simple form of representation learning. We generalize Incremental Delta Bar Delta (IDBD)---a vectorized adaptive step-size method for supervised learning---to TD learning, which we name TIDBD. We demonstrate that TIDBD is able to find appropriate step-sizes in both stationary and non-stationary prediction tasks, outperforming ordinary TD methods and TD methods with scalar step-size adaptation; we demonstrate that it can differentiate between features which are relevant and irrelevant for a given task, performing representation learning; and we show on a real-world robot prediction task that TIDBD is able to outperform ordinary TD methods and TD methods augmented with AlphaBound and RMSprop.
0
0
0
1
0
0
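To make the per-weight step-size idea in the preceding abstract concrete, here is a rough Python sketch of an IDBD-style semi-gradient TD(0) update with vector step-sizes; this is a reconstruction under stated assumptions (no eligibility traces), and the published TIDBD derivation differs in its details.

    import numpy as np

    def tidbd_style_step(w, beta, h, x, x_next, r, gamma=0.99, theta=0.01):
        """One IDBD-style semi-gradient TD(0) update with per-weight
        step-sizes alpha_i = exp(beta_i). A sketch only, not the paper's
        exact algorithm."""
        delta = r + gamma * np.dot(w, x_next) - np.dot(w, x)  # TD error
        beta += theta * delta * x * h            # meta-descent on log step-sizes
        alpha = np.exp(beta)
        w += alpha * delta * x                   # TD(0) weight update
        h = h * np.clip(1.0 - alpha * x * x, 0.0, None) + alpha * delta * x
        return w, beta, h

Features that consistently produce useful updates accumulate larger beta values and hence larger step-sizes, which is the "relevant vs. irrelevant feature" behavior described above.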
Enhanced clustering tendency of Cu-impurities with a number of oxygen vacancies in heavy carbon-loaded TiO2 - the bulk and surface morphologies
Over-threshold carbon loading (~50 at.%) of initial TiO2 hosts and posterior Cu sensitization (~7 at.%) were performed using a pulsed ion-implantation technique in sequential mode, with a 1-hour vacuum-idle cycle between the sequential stages of embedding. The final Cx-TiO2:Cu samples were characterized using XPS wide-scan elemental analysis and core-level and valence-band mappings. The results obtained were discussed on a theoretical background employing DFT calculations. The combined XPS and DFT analysis allows us to establish the final formula of the synthesized samples as Cx-TiO2:[Cu+][Cu2+] for the bulk and Cx-TiO2:[Cu+][Cu0] for thin films. It is demonstrated that in the regime of heavy carbon loading the neutral C-C bonds (sp3-type) remain dominant, and only a small fraction of the embedded carbon forms O-C=O clusters. No change in the valence-band width was established after the sequential carbon-copper modification of the atomic structure of the initial TiO2 hosts, apart from the dominance of Cu 3s states after Cu sensitization. The crucial role of neutral low-dimensional carbon impurities as precursors for the growth of new phases is shown for the Cu-sensitized Cx-TiO2 intermediate-state hosts.
0
1
0
0
0
0
On Controllable Abundance Of Saturated-input Linear Discrete Systems
Several theorems on computing the volume of the polyhedron spanned by an n-dimensional vector set with finite-interval parameters are first presented and proved, and then used in the analysis of the controllable regions of linear discrete time-invariant systems with saturated inputs. A new concept and continuous measure of the control ability, the control efficiency of the input variables, and the diversity of the control laws, named the controllable abundance, is proposed based on the volume computation of these regions and is applied to actuator placement and configuration problems, optimization problems in the dynamics and kinematics of controlled plants, etc. The numerical experiments show the effectiveness of the new concept and methods for investigating and optimizing control ability and efficiency.
1
0
1
0
0
0
Localization and dynamics of sulfur-oxidizing microbes in natural sediment
Organic material in anoxic sediment represents a globally significant carbon reservoir that acts to stabilize Earth's atmospheric composition. The dynamics by which microbes organize to consume this material remain poorly understood. Here we observe the collective dynamics of a microbial community, collected from a salt marsh, as it comes to steady state in a two-dimensional ecosystem, covered by flowing water and under constant illumination. Microbes form a very thin front at the oxic-anoxic interface that moves towards the surface with constant velocity and comes to rest at a fixed depth. Fronts are stable to all perturbations while in the sediment, but develop bioconvective plumes in water. We observe the transient formation of parallel fronts. We model these dynamics to understand how they arise from the coupling between metabolism, aerotaxis, and diffusion. These results identify the typical timescale for the oxygen flux and penetration depth to reach steady state.
0
1
0
0
0
0
Probabilistic Surfel Fusion for Dense LiDAR Mapping
With the recent development of high-end LiDARs, more and more systems are able to continuously map the environment while moving, producing spatially redundant information. However, none of the previous approaches were able to effectively exploit this redundancy in a dense LiDAR mapping problem. In this paper, we present a new approach for dense LiDAR mapping using probabilistic surfel fusion. The proposed system is capable of reconstructing a high-quality dense surface element (surfel) map from spatially redundant multiple views. This is achieved by the proposed probabilistic surfel fusion along with a geometry-aware data association. The proposed surfel data association method considers surface resolution as well as the high measurement uncertainty along the beam direction, which enables the mapping system to control surface resolution without introducing spatial digitization. The proposed fusion method successfully suppresses the map noise level by considering the measurement noise caused by the laser beam incident angle and depth distance in a Bayesian filtering framework. Experimental results with simulated and real data for dense surfel mapping prove the ability of the proposed method to accurately recover the canonical form of the environment without further post-processing.
1
0
0
0
0
0
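In the simplest Gaussian case, the Bayesian filtering update referred to in the preceding abstract reduces to the standard inverse-variance fusion of a surfel estimate $(\mu, \sigma^2)$ with a new depth measurement $(z, \sigma_m^2)$; this is a generic sketch, and the paper additionally makes $\sigma_m$ depend on incident angle and range:

$$ \mu \leftarrow \frac{\sigma_m^2\,\mu + \sigma^2\, z}{\sigma^2 + \sigma_m^2}, \qquad \sigma^2 \leftarrow \frac{\sigma^2\,\sigma_m^2}{\sigma^2 + \sigma_m^2}. $$

Measurements with large uncertainty along the beam thus barely move the surfel, while confident ones tighten it, which is how spatial redundancy suppresses map noise.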
Quantum Paramagnet and Frustrated Quantum Criticality in a Spin-One Diamond Lattice Antiferromagnet
Motivated by the proposal of a topological quantum paramagnet in the diamond lattice antiferromagnet NiRh$_2$O$_4$, we propose a minimal model to describe the magnetic interactions and properties of the diamond material with spin-one local moments. Our model includes the first and second neighbor Heisenberg interactions as well as a local single-ion spin anisotropy that is allowed by the spin-one nature of the local moment and the tetragonal symmetry of the system. We point out that there exists a quantum phase transition from a trivial quantum paramagnet, when the single-ion spin anisotropy is dominant, to magnetically ordered states, when the exchange is dominant. Due to the frustrated spin interaction, the magnetic excitation in the quantum paramagnetic state supports extensively degenerate band minima in the spectra. As the system approaches the transition, the extensively degenerate bosonic modes become critical at the criticality, giving rise to unusual magnetic properties. Our phase diagram and experimental predictions for the different phases provide a guideline for the identification of the ground state of NiRh$_2$O$_4$. Although our results are fundamentally different from the proposal of a topological quantum paramagnet, they represent interesting possibilities for spin-one diamond lattice antiferromagnets.
0
1
0
0
0
0
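In symbols, the minimal model described in the preceding abstract plausibly takes the form below (my notation; the paper's conventions and signs may differ):

$$ H = J_1 \sum_{\langle ij \rangle} \mathbf{S}_i \cdot \mathbf{S}_j + J_2 \sum_{\langle\langle ij \rangle\rangle} \mathbf{S}_i \cdot \mathbf{S}_j + D \sum_i \left( S_i^z \right)^2, $$

where the sums run over first and second neighbors of the diamond lattice and the $D$ term is the single-ion anisotropy permitted by the spin-one moments and tetragonal symmetry; a large positive $D$ favors the trivial product-state paramagnet with $S^z = 0$ on every site, while dominant exchange favors magnetic order.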
Characterizations of minimal dominating sets and the well-dominated property in lexicographic product graphs
A graph is said to be well-dominated if all its minimal dominating sets are of the same size. The class of well-dominated graphs forms a subclass of the well studied class of well-covered graphs. While the recognition problem for the class of well-covered graphs is known to be co-NP-complete, the recognition complexity of well-dominated graphs is open. In this paper we introduce the notion of an irreducible dominating set, a variant of dominating set generalizing both minimal dominating sets and minimal total dominating sets. Based on this notion, we characterize the family of minimal dominating sets in a lexicographic product of two graphs and derive a characterization of the well-dominated lexicographic product graphs. As a side result motivated by this study, we give a polynomially testable characterization of well-dominated graphs with domination number two, and show, more generally, that well-dominated graphs can be recognized in polynomial time in any class of graphs with bounded domination number. Our results include a characterization of dominating sets in lexicographic product graphs, which generalizes the expression for the domination number of such graphs following from works of Zhang et al. (2011) and of Šumenjak et al. (2012).
1
0
1
0
0
0
To the Acceleration of Charged Particles with Travelling Laser Focus
We describe here the latest results of calculations with the FlexPDE code of wake-fields induced by the bunch in micro-structures. These structures, illuminated by a swept laser burst, serve for the acceleration of charged particles. The basis of the scheme is a fast sweeping device for the laser bunch. After sweeping, the laser bunch has a slope of ~45° with respect to the direction of propagation, so every cell of the microstructure becomes excited locally, only for the moment when the particles are there. Self-consistent parameters of a collider based on this idea allow considering this type of collider as a candidate for the near-future accelerator era.
0
1
0
0
0
0
Affiliation networks with an increasing degree sequence
An affiliation network is a kind of two-mode social network with two different sets of nodes (namely, a set of actors and a set of social events) and edges representing the affiliation of the actors with the social events. Although a number of statistical models have been proposed to analyze affiliation networks, the asymptotic behaviors of their estimators are still unknown or have not been properly explored. In this paper, we study an affiliation model with the degree sequence as the exclusive natural sufficient statistic in the exponential family of distributions. We establish the uniform consistency and asymptotic normality of the maximum likelihood estimator when the numbers of actors and events both go to infinity. Simulation studies and a real data example demonstrate our theoretical results.
0
0
1
1
0
0
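For concreteness, an exponential-family network model in which the degree sequence is the exclusive sufficient statistic plausibly takes the following form (a sketch in my notation, analogous to the $\beta$-model for graphs; the paper's exact specification may differ):

$$ \Pr(G) \propto \exp\Big( \sum_{i} \theta_i \, d_i(G) \Big), $$

where $d_i(G)$ is the degree of node $i$ (an actor or an event) and the $\theta_i$ are node-specific parameters. Maximum likelihood then solves the moment equations $d_i(G) = \mathbb{E}_\theta[d_i]$, one per node, which is the setting in which uniform consistency and asymptotic normality are studied as both node sets grow.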
Coarse Grained Parallel Selection
We analyze the running time of the Saukas-Song algorithm for selection on a coarse grained multicomputer without expressing the running time in terms of communication rounds. This shows that while in the best case the Saukas-Song algorithm runs in asymptotically optimal time, in general it does not. We propose other algorithms for coarse grained selection that have optimal expected running time.
1
0
0
0
0
0
An IoT Analytics Embodied Agent Model based on Context-Aware Machine Learning
Agent-based Internet of Things (IoT) applications have recently emerged as systems that can involve sensors, wireless devices, machines and software that exchange data and can be accessed remotely. Such applications have been proposed in several domains including health care, smart cities and agriculture. However, despite their increased adoption, deploying these applications in specific settings has been very challenging because of the complex static and dynamic variability of the physical devices such as sensors and actuators, of the software application behavior, and of the environment in which the application is embedded. In this paper, we propose a modeling approach for IoT analytics based on learning embodied agents (i.e. situated agents). The approach involves: (i) a variability model of IoT embodied agents; (ii) feedback evaluative machine learning; and (iii) reconfiguration of a group of agents in accordance with the environmental context. The proposed approach advances the state of the art in that it facilitates the development of agent-based IoT applications by explicitly capturing their complex and dynamic variabilities and supporting their self-configuration based on a context-aware and machine learning-based approach.
1
0
0
0
0
0
Aggregation of Classifiers: A Justifiable Information Granularity Approach
In this study, we introduce a new approach to combining multiple classifiers in an ensemble system. Instead of using the numeric membership values encountered in fixed combining rules, we construct interval membership values associated with each class prediction at the level of the meta-data of each observation by using concepts of information granules. In the proposed method, the uncertainty (diversity) of the findings produced by the base classifiers is quantified by interval-based information granules. The discriminative decision model is generated by considering both the bounds and the length of the obtained intervals. We select ten and then fifteen learning algorithms to build a heterogeneous ensemble system and conduct experiments on a number of UCI datasets. The experimental results demonstrate that the proposed approach performs better than the benchmark algorithms, including six fixed combining methods, one trainable combining method, AdaBoost, Bagging, and Random Subspace.
1
0
0
1
0
0
FRET-based nanocommunication with luciferase and channelrhodopsin molecules for in-body medical systems
The paper is concerned with an in-body system gathering data for medical purposes. It is focused on communication between the following two components of the system: liposomes gathering the data inside human veins and a detector collecting the data from the liposomes. Foerster Resonance Energy Transfer (FRET) is considered as a mechanism for communication between the system components. The usage of bioluminescent molecules as an energy source for generating FRET signals is suggested and a performance evaluation of this approach is given. FRET transmission may be initiated without the aid of an external laser, which is crucial in the case of communication taking place inside the human body. It is also shown how to solve the problem of recording FRET signals. The usage of channelrhodopsin molecules, able to receive FRET signals and convert them into voltage, is proposed. The communication system is modelled with the molecular structures and spectral characteristics of the proposed molecules and further validated by Monte Carlo computer simulations, calculating the data throughput and the bit error rate.
0
0
0
0
1
0
FLASH: Randomized Algorithms Accelerated over CPU-GPU for Ultra-High Dimensional Similarity Search
We present FLASH (\textbf{F}ast \textbf{L}SH \textbf{A}lgorithm for \textbf{S}imilarity search accelerated with \textbf{H}PC), a similarity search system for ultra-high dimensional datasets on a single machine that does not require similarity computations and is tailored for high-performance computing platforms. By leveraging an LSH-style randomized indexing procedure and combining it with several principled techniques, such as reservoir sampling, recent advances in one-pass minwise hashing, and count-based estimations, we reduce the computational and parallelization costs of similarity search, while retaining sound theoretical guarantees. We evaluate FLASH on several real, high-dimensional datasets from different domains, including text, malicious URLs, click-through prediction, social networks, etc. Our experiments shed new light on the difficulties associated with datasets having several million dimensions. Current state-of-the-art implementations either fail at the presented scale or are orders of magnitude slower than FLASH. FLASH is capable of computing an approximate k-NN graph, from scratch, over the full webspam dataset (1.3 billion nonzeros) in less than 10 seconds. Computing a full k-NN graph in less than 10 seconds on the webspam dataset by brute force ($n^2D$) would require at least 20 teraflops. We provide CPU and GPU implementations of FLASH for replicability of our results.
1
0
0
0
0
0
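As background for the minwise-hashing primitive named in the preceding abstract, here is a minimal (non-optimized) Python sketch of a classical MinHash signature; FLASH itself relies on densified one-pass variants, so treat this only as an illustration of the underlying idea.

    import random

    def minhash_signature(feature_ids, num_hashes=64, prime=(1 << 61) - 1, seed=0):
        """Return a minwise-hashing signature of a sparse binary vector given
        by its non-empty set of nonzero feature ids. Two signatures agree in
        a coordinate with probability equal to the Jaccard similarity of the
        underlying sets."""
        rng = random.Random(seed)
        params = [(rng.randrange(1, prime), rng.randrange(prime))
                  for _ in range(num_hashes)]
        return [min((a * f + b) % prime for f in feature_ids) for a, b in params]

Bucketing points by short sub-signatures is what lets an LSH-style index retrieve near neighbors without any explicit similarity computations.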
Neural Sequence Model Training via $α$-divergence Minimization
We propose a new neural sequence model training method in which the objective function is defined by $\alpha$-divergence. We demonstrate that the objective function generalizes the maximum-likelihood (ML)-based and reinforcement learning (RL)-based objective functions as special cases (i.e., ML corresponds to $\alpha \to 0$ and RL to $\alpha \to 1$). We also show that the gradient of the objective function can be considered a mixture of ML- and RL-based objective gradients. The experimental results on a machine translation task show that minimizing the objective function with $\alpha > 0$ outperforms $\alpha \to 0$, which corresponds to ML-based methods.
1
0
0
1
0
0
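For reference, one standard parameterization of the $\alpha$-divergence that interpolates between the two KL directions is the following (my choice of convention; the paper's convention may differ):

$$ D_\alpha(p \,\|\, q) = \frac{1}{\alpha(1-\alpha)} \Big( 1 - \sum_y p(y)^{\alpha}\, q(y)^{1-\alpha} \Big), $$

which recovers $\mathrm{KL}(q \,\|\, p)$ as $\alpha \to 0$ and $\mathrm{KL}(p \,\|\, q)$ as $\alpha \to 1$, consistent with distinct ML-like and RL-like objectives emerging at the two endpoints as described above.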
Output Range Analysis for Deep Neural Networks
Deep neural networks (NNs) are extensively used for machine learning tasks such as image classification, perception and control of autonomous systems. Increasingly, these deep NNs are also being deployed in high-assurance applications. Thus, there is a pressing need for developing techniques to verify neural networks to check whether certain user-expected properties are satisfied. In this paper, we study a specific verification problem of computing a guaranteed range for the output of a deep neural network given a set of inputs represented as a convex polyhedron. Range estimation is a key primitive for verifying deep NNs. We present an efficient range estimation algorithm that uses a combination of local search and linear programming problems to efficiently find the maximum and minimum values taken by the outputs of the NN over the given input set. In contrast to recently proposed "monolithic" optimization approaches, we use local gradient descent to repeatedly find and eliminate local minima of the function. The final global optimum is certified using a mixed integer programming instance. We implement our approach and compare it with Reluplex, a recently proposed solver for deep neural networks. We demonstrate the effectiveness of the proposed approach for verification of NNs used in automated control as well as those used in classification.
1
0
0
1
0
0
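A minimal Python sketch of the local-search phase described in the preceding abstract: projected gradient search for a local extremum of a scalar network output over a box input set (finite differences for simplicity; the paper handles general convex polyhedra and certifies the global bound via mixed integer programming).

    import numpy as np

    def local_extremum(f, x0, lo, hi, maximize=False, step=0.01, iters=200, eps=1e-5):
        """Projected gradient search for a local min (or max) of a scalar
        network output f over the box lo <= x <= hi. A sketch of the local
        phase only; it does not certify global optimality."""
        x = np.array(x0, dtype=float)
        sign = 1.0 if maximize else -1.0
        for _ in range(iters):
            g = np.array([(f(x + eps * e) - f(x - eps * e)) / (2 * eps)
                          for e in np.eye(len(x))])   # finite-difference gradient
            x = np.clip(x + sign * step * g, lo, hi)  # project back onto the box
        return x, f(x)

Repeating this from many start points and excluding already-found local optima narrows the range before the final, certified MIP step.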
Projection Theorems of Divergences and Likelihood Maximization Methods
Projection theorems of divergences enable us to find the reverse projection of a divergence on a specific statistical model as a forward projection of the divergence on a different but rather "simpler" statistical model, which, in turn, results in solving a system of linear equations. Reverse projections of divergences are closely related to various estimation methods such as the maximum likelihood estimation or its variants in robust statistics. We consider projection theorems of three parametric families of divergences that are widely used in robust statistics, namely the Rényi divergences (or the Cressie-Read power divergences), the density power divergences, and the relative $\alpha$-entropy (or the logarithmic density power divergences). We explore these projection theorems from the usual likelihood maximization approach and from the principle of sufficiency. In particular, we show the equivalence of solving the estimation problems by the projection theorems of the respective divergences and by directly solving the corresponding estimating equations. We also derive the projection theorem for the density power divergences.
0
0
1
1
0
0
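For reference, the density power divergence family referred to in the preceding abstract is commonly written as follows (in the convention of Basu et al.; the paper may use a different parameterization):

$$ d_\alpha(g, f) = \int \Big\{ f^{1+\alpha} - \Big(1 + \tfrac{1}{\alpha}\Big)\, g\, f^{\alpha} + \tfrac{1}{\alpha}\, g^{1+\alpha} \Big\}\, dx, \qquad \alpha > 0, $$

which tends to the Kullback-Leibler divergence as $\alpha \to 0$. Minimizing it in $f$ over a model family, with the $g$-integral estimated from data, yields the robust estimating equations that the projection theorems relate to forward projections.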
Optimal Timing in Dynamic and Robust Attacker Engagement During Advanced Persistent Threats
Advanced persistent threats (APTs) are stealthy attacks which make use of social engineering and deception to give adversaries insider access to networked systems. Against APTs, active defense technologies aim to create and exploit information asymmetry for defenders. In this paper, we study a scenario in which a powerful defender uses honeynets for active defense in order to observe an attacker who has penetrated the network. Rather than immediately ejecting the attacker, the defender may elect to gather information. We introduce an undiscounted, infinite-horizon Markov decision process on a continuous state space in order to model the defender's problem. We find a threshold of information that the defender should gather about the attacker before ejecting him. Then we study the robustness of this policy using a Stackelberg game. Finally, we simulate the policy for a conceptual network. Our results provide a quantitative foundation for studying optimal timing for attacker engagement in network defense.
1
0
0
0
0
0
The Mismeasure of Mergers: Revised Limits on Self-interacting Dark Matter in Merging Galaxy Clusters
In an influential recent paper, Harvey et al. (2015) derive an upper limit to the self-interaction cross section of dark matter ($\sigma_{\rm DM} < 0.47$ cm$^2$/g at 95\% confidence) by averaging the dark matter-galaxy offsets in a sample of merging galaxy clusters. Using much more comprehensive data on the same clusters, we identify several substantial errors in their offset measurements. Correcting these errors relaxes the upper limit on $\sigma_{\rm DM}$ to $\lesssim 2$ cm$^2$/g, following the Harvey et al. prescription for relating offsets to cross sections in a simple solid body scattering model. Furthermore, many clusters in the sample violate the assumptions behind this prescription, so even this revised upper limit should be used with caution. Although this particular sample does not tightly constrain self-interacting dark matter models when analyzed this way, we discuss how merger ensembles may be used more effectively in the future. We conclude that errors inherent in using single-band imaging to identify mass and light peaks do not necessarily average out in a sample of this size, particularly when a handful of substructures constitute a majority of the weight in the ensemble.
0
1
0
0
0
0
International crop trade networks: The impact of shocks and cascades
Analyzing available FAO data from 176 countries over 21 years, we observe an increase of complexity in the international trade of maize, rice, soy, and wheat. A larger number of countries play a role as producers or intermediaries, either for trade or for food processing. In consequence, we find that the trade networks become more prone to failure cascades caused by exogenous shocks. In our model, countries compensate for demand deficits by imposing export restrictions. To capture these, we construct higher-order trade dependency networks for the different crops and years. These networks reveal hidden dependencies between countries and allow us to discuss policy implications.
0
0
0
0
0
1
Winning on the Merits: The Joint Effects of Content and Style on Debate Outcomes
Debate and deliberation play essential roles in politics and government, but most models presume that debates are won mainly via superior style or agenda control. Ideally, however, debates would be won on the merits, as a function of which side has the stronger arguments. We propose a predictive model of debate that estimates the effects of linguistic features and the latent persuasive strengths of different topics, as well as the interactions between the two. Using a dataset of 118 Oxford-style debates, our model's combination of content (as latent topics) and style (as linguistic features) allows us to predict audience-adjudicated winners with 74% accuracy, significantly outperforming linguistic features alone (66%). Our model finds that winning sides employ stronger arguments, and allows us to identify the linguistic features associated with strong or weak arguments.
1
0
0
0
0
0
Gene regulatory networks: a primer in biological processes and statistical modelling
Modelling gene regulatory networks not only requires a thorough understanding of the biological system depicted but also the ability to accurately represent this system from a mathematical perspective. Throughout this chapter, we aim to familiarise the reader with the biological processes and molecular factors at play in the process of gene expression regulation. We first describe the different interactions controlling each step of the expression process, from transcription to mRNA and protein decay. In the second section, we provide statistical tools to accurately represent this biological complexity in the form of mathematical models. Amongst other considerations, we discuss the topological properties of biological networks, the application of deterministic and stochastic frameworks and the quantitative modelling of regulation. We particularly focus on the use of such models for the simulation of expression data that can serve as a benchmark for the testing of network inference algorithms.
0
0
0
1
1
0
Mathematical Knowledge and the Role of an Observer: Ontological and epistemological aspects
As David Berlinski writes (1997), the existence and nature of mathematics is a more compelling and far deeper problem than any of the problems raised by mathematics itself. Here we analyze the essence of mathematics making the main emphasis on mathematics as an advanced system of knowledge. This knowledge consists of structures and represents structures, existence of which depends on observers in a nonstandard way. Structural nature of mathematics explains its reasonable effectiveness.
0
0
1
0
0
0
Persuasive Technology For Human Development: Review and Case Study
Technology is an extremely potent tool that can be leveraged for human development and social good. Owing to the great importance of environment and human psychology in driving human behavior, and the ubiquity of technology in modern life, there is a need to leverage the insights and capabilities of both fields together for nudging people towards behavior that is optimal in some sense (personal or social). In this regard, the field of persuasive technology, which proposes to infuse technology with appropriate design and incentives using insights from psychology, behavioral economics, and human-computer interaction, holds a lot of promise. Whilst persuasive technology is already being developed and is at play in many commercial applications, it can have great social impact in the field of Information and Communication Technology for Development (ICTD), which uses Information and Communication Technology (ICT) for human developmental ends such as education and health. In this paper we explore what persuasive technology is and how it can be used for the ends of human development. To develop the ideas in a concrete setting, we present a case study outlining how persuasive technology can be used for human development in Pakistan, a developing South Asian country that suffers from many of the problems that plague a typical developing country.
1
0
0
0
0
0
Variable Prioritization in Nonlinear Black Box Methods: A Genetic Association Case Study
The central aim in this paper is to address variable selection questions in nonlinear and nonparametric regression. Motivated by statistical genetics, where nonlinear interactions are of particular interest, we introduce a novel and interpretable way to summarize the relative importance of predictor variables. Methodologically, we develop the "RelATive cEntrality" (RATE) measure to prioritize candidate genetic variants that are not just marginally important, but whose associations also stem from significant covarying relationships with other variants in the data. We illustrate RATE through Bayesian Gaussian process regression, but the methodological innovations apply to other "black box" methods. It is known that nonlinear models often exhibit greater predictive accuracy than linear models, particularly for phenotypes generated by complex genetic architectures. With detailed simulations and two real data association mapping studies, we show that applying RATE enables an explanation for this improved performance.
0
0
0
1
1
0
Motor activity of group-housed sows in different housing systems
Assessment of the motor activity of group-housed sows on commercial farms. The objective of this study was to specify the level of motor activity of pregnant sows housed in groups in different housing systems. Eleven commercial farms were selected for this study. Four housing systems were represented: small groups of five to seven sows (SG), free-access stalls with an exercise area (FS), and electronic sow feeders with a stable group (ESFsta) or a dynamic group (ESFdyn). Ten sows in mid-gestation were observed on each farm. Observations of motor activity were made for 6 hours at the first meal or at the start of the feeding sequence, on two consecutive days and at regular intervals of 4 minutes. The results show that the motor activity of group-housed sows depends on the housing system. Activity is highest in the ESFdyn system (standing: 55.7%), sows are least active in the SG system (standing: 26.5%), and the FS system is intermediate. The distance traveled by sows in the ESF systems is linked to the larger available area. Thus, sows travel an average of 362 m $\pm$ 167 m in the ESFdyn system with an average available surface of 446 m$^2$, whereas sows in small groups travel 50 m $\pm$ 15 m with 15 m$^2$ available.
0
0
0
0
1
0
Linking High-Energy Cosmic Particles by Black-Hole Jets Embedded in Large-Scale Structures
The origin of ultrahigh-energy cosmic rays (UHECRs) is a half-century old enigma (Linsley 1963). The mystery has been deepened by an intriguing coincidence: over ten orders of magnitude in energy, the energy generation rates of UHECRs, PeV neutrinos, and isotropic sub-TeV gamma rays are comparable, which hints at a grand-unified picture (Murase and Waxman 2016). Here we report that powerful black hole jets in aggregates of galaxies can supply the common origin of all of these phenomena. Once accelerated by a jet, low-energy cosmic rays confined in the radio lobe are adiabatically cooled; higher-energy cosmic rays leaving the source interact with the magnetized cluster environment and produce neutrinos and gamma rays; the highest-energy particles escape from the host cluster and contribute to the observed cosmic rays above 100 PeV. The model is consistent with the spectrum, composition, and isotropy of the observed UHECRs, and also explains the IceCube neutrinos and the non-blazar component of the Fermi gamma-ray background, assuming a reasonable energy output from black hole jets in clusters.
0
1
0
0
0
0
Quantifying and suppressing ranking bias in a large citation network
It is widely recognized that citation counts for papers from different fields cannot be directly compared because different scientific fields adopt different citation practices. Citation counts are also strongly biased by paper age since older papers had more time to attract citations. Various procedures aim at suppressing these biases and give rise to new normalized indicators, such as the relative citation count. We use a large citation dataset from Microsoft Academic Graph and a new statistical framework based on the Mahalanobis distance to show that the rankings by well known indicators, including the relative citation count and Google's PageRank score, are significantly biased by paper field and age. We propose a general normalization procedure motivated by the $z$-score which produces much less biased rankings when applied to citation count and PageRank score.
1
1
0
1
0
0
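A minimal pandas sketch of the kind of group-wise z-score normalization motivated in the preceding abstract: standardize each paper's score within its (field, year) cohort; the column names are illustrative assumptions, and the authors' exact procedure may differ.

    import pandas as pd

    def zscore_normalize(df, score_col="citations", group_cols=("field", "year")):
        """Standardize a ranking score (citation count, PageRank, ...) within
        groups of papers from the same field and publication year, so that
        rankings are not biased by citation practices or paper age."""
        g = df.groupby(list(group_cols))[score_col]
        return (df[score_col] - g.transform("mean")) / g.transform("std")

Ranking by the returned scores compares each paper against its own cohort, which is the sense in which field and age biases are suppressed.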
Posterior Concentration for Bayesian Regression Trees and Forests
Since their inception in the 1980s, regression trees have been one of the more widely used non-parametric prediction methods. Tree-structured methods yield a histogram reconstruction of the regression surface, where the bins correspond to terminal nodes of recursive partitioning. Trees are powerful, yet susceptible to over-fitting. Strategies against overfitting have traditionally relied on pruning greedily grown trees. The Bayesian framework offers an alternative remedy against overfitting through priors. Roughly speaking, a good prior charges smaller trees where overfitting does not occur. While the consistency of random histograms, trees and their ensembles has been studied quite extensively, the theoretical understanding of the Bayesian counterparts has been missing. In this paper, we take a step towards understanding why and when Bayesian trees and their ensembles do not overfit. To address this question, we study the speed at which the posterior concentrates around the true smooth regression function. We propose a spike-and-tree variant of the popular Bayesian CART prior and establish new theoretical results showing that regression trees (and their ensembles) (a) are capable of recovering smooth regression surfaces, achieving optimal rates up to a log factor, (b) can adapt to the unknown level of smoothness and (c) can perform effective dimension reduction when $p > n$. These results provide a piece of missing theoretical evidence explaining why Bayesian trees (and additive variants thereof) have worked so well in practice.
0
0
1
1
0
0
Predicting Oral Disintegrating Tablet Formulations by Neural Network Techniques
Oral Disintegrating Tablets (ODTs) are a novel dosage form that can be dissolved on the tongue within 3 min or less, which is especially suitable for geriatric and pediatric patients. Current ODT formulation studies usually rely on the personal experience of pharmaceutical experts and trial-and-error in the laboratory, which is inefficient and time-consuming. The aim of the current research was to establish a prediction model of ODT formulations with a direct compression process by Artificial Neural Network (ANN) and Deep Neural Network (DNN) techniques. 145 formulation records were extracted from Web of Science. All data were divided into three parts: a training set (105 records), a validation set (20) and a testing set (20). ANN and DNN were compared for the prediction of the disintegration time. The accuracy of the ANN model reached 85.60%, 80.00% and 75.00% on the training, validation and testing sets respectively, whereas that of the DNN model was 85.60%, 85.00% and 80.00%, respectively. Compared with the ANN, the DNN showed better prediction for ODT formulations. This is the first time that a deep neural network with an improved dataset selection algorithm has been applied to formulation prediction on small data. The proposed predictive approach could evaluate the critical parameters of formulation quality control, and guide research and process development. The implementation of this prediction model could effectively reduce the drug product development timeline and material usage, and proactively facilitate the development of a robust drug product.
0
0
0
1
0
0
HNCcorr: A Novel Combinatorial Approach for Cell Identification in Calcium-Imaging Movies
Calcium imaging has emerged as a workhorse method in neuroscience to investigate patterns of neuronal activity. Instrumentation to acquire calcium imaging movies has rapidly progressed and has become standard across labs. Still, algorithms to automatically detect and extract activity signals from calcium imaging movies are highly variable from lab to lab and more advanced algorithms are continuously being developed. Here we present HNCcorr, a novel algorithm for cell identification in calcium imaging movies based on combinatorial optimization. The algorithm identifies cells by finding distinct groups of highly similar pixels in correlation space, where a pixel is represented by the vector of correlations to a set of other pixels. The HNCcorr algorithm achieves the best known results for the cell identification benchmark of Neurofinder, and guarantees an optimal solution to the underlying deterministic optimization model resulting in a transparent mapping from input data to outcome.
0
0
1
0
0
0
Intense cross-tail field-aligned currents in the plasma sheet at lunar distances
Field-aligned currents in the Earth's magnetotail are traditionally associated with transient plasma flows and strong plasma pressure gradients on the near-Earth side. In this paper we demonstrate a new field-aligned current system present in the magnetotail at lunar distances. Using magnetotail current sheet observations by two ARTEMIS probes at $\sim60 R_E$, we statistically analyze the current sheet structure and the current density distribution closest to the neutral sheet. For about half of our 130 current sheet crossings, the equatorial magnetic field component across the tail (along the main, cross-tail current) contributes significantly to the vertical pressure balance. This magnetic field component peaks at the equator, near the cross-tail current maximum. For those cases, a significant part of the tail current, having an intensity in the range 1-10 nA/m$^2$, flows along the magnetic field lines (it is both field-aligned and cross-tail). We suggest that this current system develops in order to compensate for the thermal pressure of particles that on its own is insufficient to fend off the lobe magnetic pressure.
0
1
0
0
0
0
First non-icosahedral boron allotrope synthesized at high pressure and high temperature
Theoretical predictions of pressure-induced phase transformations often become long-standing enigmas because of the limitations of contemporary experimental capabilities. Hitherto the existence of a non-icosahedral boron allotrope has been one of them. Here we report on the first non-icosahedral boron allotrope, which we denote {\zeta}-B, with the orthorhombic {\alpha}-Ga-type structure (space group Cmce), synthesized in a diamond anvil cell at extreme high-pressure, high-temperature conditions (115 GPa and 2100 K). The structure of {\zeta}-B was solved using single-crystal synchrotron X-ray diffraction and its compressional behavior was studied in the range of very high pressures (115 GPa to 135 GPa). Experimental validation of theoretical predictions reveals the degree of our up-to-date comprehension of condensed matter and promotes further development of solid state physics and chemistry.
0
1
0
0
0
0