text | inputs | prediction | prediction_agent | annotation | annotation_agent | multi_label | explanation | id | metadata | status | event_timestamp | metrics
null | dict | null | null | list | null | bool (1 class) | null | string (lengths 1–5) | null | string (2 classes: Default, Validated) | null | null
---|---|---|---|---|---|---|---|---|---|---|---|---|
null |
{
"abstract": " We consider a theory of a two-component Dirac fermion localized on a (2+1)\ndimensional brane coupled to a (3+1) dimensional bulk. Using the fermionic\nparticle-vortex duality, we show that the theory has a strong-weak duality that\nmaps the coupling $e$ to $\\tilde e=(8\\pi)/e$. We explore the theory at\n$e^2=8\\pi$ where it is self-dual. The electrical conductivity of the theory is\na constant independent of frequency. When the system is at finite density and\nmagnetic field at filling factor $\\nu=\\frac12$, the longitudinal and Hall\nconductivity satisfies a semicircle law, and the ratio of the longitudinal and\nHall thermal electric coefficients is completely determined by the Hall angle.\nThe thermal Hall conductivity is directly related to the thermal electric\ncoefficients.\n",
"title": "Duality and Universal Transport in a Mixed-Dimension Electrodynamics"
}
| null | null | null | null | true | null |
1501
| null |
Default
| null | null |
null |
{
"abstract": " Pair Hidden Markov Models (PHMMs) are probabilistic models used for pairwise\nsequence alignment, a quintessential problem in bioinformatics. PHMMs include\nthree types of hidden states: match, insertion and deletion. Most previous\nstudies have used one or two hidden states for each PHMM state type. However,\nfew studies have examined the number of states suitable for representing\nsequence data or improving alignment accuracy.We developed a novel method to\nselect superior models (including the number of hidden states) for PHMM. Our\nmethod selects models with the highest posterior probability using Factorized\nInformation Criteria (FIC), which is widely utilised in model selection for\nprobabilistic models with hidden variables. Our simulations indicated this\nmethod has excellent model selection capabilities with slightly improved\nalignment accuracy. We applied our method to DNA datasets from 5 and 28\nspecies, ultimately selecting more complex models than those used in previous\nstudies.\n",
"title": "Beyond similarity assessment: Selecting the optimal model for sequence alignment via the Factorized Asymptotic Bayesian algorithm"
}
| null | null | null | null | true | null |
1502
| null |
Default
| null | null |
null |
{
"abstract": " This study focuses on the formation of two molecules of astrobiological\nimportance - glycolaldehyde (HC(O)CH2OH) and ethylene glycol (H2C(OH)CH2OH) -\nby surface hydrogenation of CO molecules. Our experiments aim at simulating the\nCO freeze-out stage in interstellar dark cloud regions, well before thermal and\nenergetic processing become dominant. It is shown that along with the formation\nof H2CO and CH3OH - two well established products of CO hydrogenation - also\nmolecules with more than one carbon atom form. The key step in this process is\nbelieved to be the recombination of two HCO radicals followed by the formation\nof a C-C bond. The experimentally established reaction pathways are implemented\ninto a continuous-time random-walk Monte Carlo model, previously used to model\nthe formation of CH3OH on astrochemical time-scales, to study their impact on\nthe solid-state abundances in dense interstellar clouds of glycolaldehyde and\nethylene glycol.\n",
"title": "Experimental evidence for Glycolaldehyde and Ethylene Glycol formation by surface hydrogenation of CO molecules under dense molecular cloud conditions"
}
| null | null |
[
"Physics"
] | null | true | null |
1503
| null |
Validated
| null | null |
null |
{
"abstract": " In this paper, prediction for linear systems with missing information is\ninvestigated. New methods are introduced to improve the Mean Squared Error\n(MSE) on the test set in comparison to state-of-the-art methods, through\nappropriate tuning of Bias-Variance trade-off. First, the use of proposed Soft\nWeighted Prediction (SWP) algorithm and its efficacy are depicted and compared\nto previous works for non-missing scenarios. The algorithm is then modified and\noptimized for missing scenarios. It is shown that controlled over-fitting by\nsuggested algorithms will improve prediction accuracy in various cases.\nSimulation results approve our heuristics in enhancing the prediction accuracy.\n",
"title": "New Methods of Enhancing Prediction Accuracy in Linear Models with Missing Data"
}
| null | null |
[
"Computer Science",
"Statistics"
] | null | true | null |
1504
| null |
Validated
| null | null |
null |
{
"abstract": " We study ionic liquids composed 1-alkyl-3-methylimidazolium cations and\nbis(trifluoromethyl-sulfonyl)imide anions ([C$_n$MIm][NTf$_2$]) with varying\nchain-length $n\\!=\\!2, 4, 6, 8$ by using molecular dynamics simulations. We\nshow that a reparametrization of the dihedral potentials as well as charges of\nthe [NTf$_2$] anion leads to an improvment of the force field model introduced\nby Köddermann {\\em et al.} [ChemPhysChem, \\textbf{8}, 2464 (2007)] (KPL-force\nfield). A crucial advantage of the new parameter set is that the minimum energy\nconformations of the anion ({\\em trans} and {\\em gauche}), as deduced from {\\em\nab initio} calculations and {\\sc Raman} experiments, are now both well\nrepresented by our model. In addition, the results for [C$_n$MIm][NTf$_2$] show\nthat this modification leads to an even better agreement between experiment and\nmolecular dynamics simulation as demonstrated for densities, diffusion\ncoefficients, vaporization enthalpies, reorientational correlation times, and\nviscosities. Even though we focused on a better representation of the anion\nconformation, also the alkyl chain-length dependence of the cation behaves\ncloser to the experiment. We strongly encourage to use the new NGKPL force\nfield for the [NTf$_2$] anion instead of the earlier KPL parameter set for\ncomputer simulations aiming to describe the thermodynamics, dynamics and also\nstructure of imidazolium based ionic liquids.\n",
"title": "Revisiting Imidazolium Based Ionic Liquids: Effect of the Conformation Bias of the [NTf$_{2}$] Anion Studied By Molecular Dynamics Simulations"
}
| null | null | null | null | true | null |
1505
| null |
Default
| null | null |
null |
{
"abstract": " Tick is a statistical learning library for Python~3, with a particular\nemphasis on time-dependent models, such as point processes, and tools for\ngeneralized linear models and survival analysis. The core of the library is an\noptimization module providing model computational classes, solvers and proximal\noperators for regularization. tick relies on a C++ implementation and\nstate-of-the-art optimization algorithms to provide very fast computations in a\nsingle node multi-core setting. Source code and documentation can be downloaded\nfrom this https URL\n",
"title": "Tick: a Python library for statistical learning, with a particular emphasis on time-dependent modelling"
}
| null | null | null | null | true | null |
1506
| null |
Default
| null | null |
null |
{
"abstract": " We present a well-posedness and stability result for a class of nondegenerate\nlinear parabolic equations driven by rough paths. More precisely, we introduce\na notion of weak solution that satisfies an intrinsic formulation of the\nequation in a suitable Sobolev space of negative order. Weak solutions are then\nshown to satisfy the corresponding en- ergy estimates which are deduced\ndirectly from the equation. Existence is obtained by showing compactness of a\nsuitable sequence of approximate solutions whereas unique- ness relies on a\ndoubling of variables argument and a careful analysis of the passage to the\ndiagonal. Our result is optimal in the sense that the assumptions on the\ndeterministic part of the equation as well as the initial condition are the\nsame as in the classical PDEs theory.\n",
"title": "An energy method for rough partial differential equations"
}
| null | null | null | null | true | null |
1507
| null |
Default
| null | null |
null |
{
"abstract": " In this paper, we consider the Graphical Lasso (GL), a popular optimization\nproblem for learning the sparse representations of high-dimensional datasets,\nwhich is well-known to be computationally expensive for large-scale problems.\nRecently, we have shown that the sparsity pattern of the optimal solution of GL\nis equivalent to the one obtained from simply thresholding the sample\ncovariance matrix, for sparse graphs under different conditions. We have also\nderived a closed-form solution that is optimal when the thresholded sample\ncovariance matrix has an acyclic structure. As a major generalization of the\nprevious result, in this paper we derive a closed-form solution for the GL for\ngraphs with chordal structures. We show that the GL and thresholding\nequivalence conditions can significantly be simplified and are expected to hold\nfor high-dimensional problems if the thresholded sample covariance matrix has a\nchordal structure. We then show that the GL and thresholding equivalence is\nenough to reduce the GL to a maximum determinant matrix completion problem and\ndrive a recursive closed-form solution for the GL when the thresholded sample\ncovariance matrix has a chordal structure. For large-scale problems with up to\n450 million variables, the proposed method can solve the GL problem in less\nthan 2 minutes, while the state-of-the-art methods converge in more than 2\nhours.\n",
"title": "Sparse Inverse Covariance Estimation for Chordal Structures"
}
| null | null | null | null | true | null |
1508
| null |
Default
| null | null |
null |
{
"abstract": " We prove that the orthogonal free quantum group factors\n$\\mathcal{L}(\\mathbb{F}O_N)$ are strongly $1$-bounded in the sense of Jung. In\nparticular, they are not isomorphic to free group factors. This result is\nobtained by establishing a spectral regularity result for the edge reversing\noperator on the quantum Cayley tree associated to $\\mathbb{F}O_N$, and\ncombining this result with a recent free entropy dimension rank theorem of Jung\nand Shlyakhtenko.\n",
"title": "Orthogonal free quantum group factors are strongly 1-bounded"
}
| null | null | null | null | true | null |
1509
| null |
Default
| null | null |
null |
{
"abstract": " We studied the temperature dependence of the diagonal double-stripe spin\norder in one and two unit cell thick layers of FeTe grown on the topological\ninsulator Bi_2Te_3 via spin-polarized scanning tunneling microscopy. The spin\norder persists up to temperatures which are higher than the transition\ntemperature reported for bulk Fe_1+yTe with lowest possible excess Fe content\ny. The enhanced spin order stability is assigned to a strongly decreased y with\nrespect to the lowest values achievable in bulk crystal growth, and effects due\nto the interface between the FeTe and the topological insulator. The result is\nrelevant for understanding the recent observation of a coexistence of\nsuperconducting correlations and spin order in this system.\n",
"title": "Enhanced spin ordering temperature in ultrathin FeTe films grown on a topological insulator"
}
| null | null |
[
"Physics"
] | null | true | null |
1510
| null |
Validated
| null | null |
null |
{
"abstract": " In this paper, we will use the interior functions of an hierarchical basis\nfor high order $BDM_p$ elements to enforce the divergence-free condition of a\nmagnetic field $B$ approximated by the H(div) $BDM_p$ basis. The resulting\nconstrained finite element method can be used to solve magnetic induction\nequation in MHD equations. The proposed procedure is based on the fact that the\nscalar $(p-1)$-th order polynomial space on each element can be decomposed as\nan orthogonal sum of the subspace defined by the divergence of the interior\nfunctions of the $p$-th order $BDM_p$ basis and the constant function.\nTherefore, the interior functions can be used to remove element-wise all higher\norder terms except the constant in the divergence error of the finite element\nsolution of $B$-field. The constant terms from each element can be then easily\ncorrected using a first order H(div) basis globally. Numerical results for a\n3-D magnetic induction equation show the effectiveness of the proposed method\nin enforcing divergence-free condition of the magnetic field.\n",
"title": "High Order Hierarchical Divergence-free Constrained Transport $H(div)$ Finite Element Method for Magnetic Induction Equation"
}
| null | null | null | null | true | null |
1511
| null |
Default
| null | null |
null |
{
"abstract": " Imagine that a malicious hacker is trying to attack a server over the\nInternet and the server wants to block the attack packets as close to their\npoint of origin as possible. However, the security gateway ahead of the source\nof attack is untrusted. How can the server block the attack packets through\nthis gateway? In this paper, we introduce REMOTEGATE, a trustworthy mechanism\nfor allowing any party (server) on the Internet to configure a security gateway\nowned by a second party, at a certain agreed upon reward that the former pays\nto the latter for its service. We take an interactive incentive-compatible\napproach, for the case when both the server and the gateway are rational, to\ndevise a protocol that will allow the server to help the security gateway\ngenerate and deploy a policy rule that filters the attack packets before they\nreach the server. The server will reward the gateway only when the latter can\nsuccessfully verify that it has generated and deployed the correct rule for the\nissue. This mechanism will enable an Internet-scale approach to improving\nsecurity and privacy, backed by digital payment incentives.\n",
"title": "REMOTEGATE: Incentive-Compatible Remote Configuration of Security Gateways"
}
| null | null |
[
"Computer Science"
] | null | true | null |
1512
| null |
Validated
| null | null |
null |
{
"abstract": " We consider the global consensus problem for multi-agent systems with input\nsaturation over digraphs. Under a mild connectivity condition that the\nunderlying digraph has a directed spanning tree, we use Lyapunov methods to\nshow that the widely used distributed consensus protocol, which solves the\nconsensus problem for the case without input saturation constraints, also\nsolves the global consensus problem for the case with input saturation\nconstraints. In order to reduce the overall need of communication and system\nupdates, we then propose a distributed event-triggered control law. Global\nconsensus is still realized and Zeno behavior is excluded. Numerical\nsimulations are provided to illustrate the effectiveness of the theoretical\nresults.\n",
"title": "Distributed Event-Triggered Control for Global Consensus of Multi-Agent Systems with Input Saturation"
}
| null | null | null | null | true | null |
1513
| null |
Default
| null | null |
null |
{
"abstract": " Let $H \\subseteq K$ be two subgroups of a finite group $G$ and Aut$(K)$ the\nautomorphism group of $K$. The autocommuting probability of $G$ relative to its\nsubgroups $H$ and $K$, denoted by ${\\rm Pr}(H, {\\rm Aut}(K))$, is the\nprobability that the autocommutator of a randomly chosen pair of elements, one\nfrom $H$ and the other from Aut$(K)$, is equal to the identity element of $G$.\nIn this paper, we study ${\\rm Pr}(H, {\\rm Aut}(K))$ through a generalization.\n",
"title": "Autocommuting probability of a finite group relative to its subgroups"
}
| null | null | null | null | true | null |
1514
| null |
Default
| null | null |
null |
{
"abstract": " This work proposes the variable exponent Lebesgue modular as a replacement\nfor the 1-norm in total variation (TV) regularization. It allows the exponent\nto vary with spatial location and thus enables users to locally select whether\nto preserve edges or smooth intensity variations. In contrast to earlier work\nusing TV-like methods with variable exponents, the exponent function is here\ncomputed offline as a fixed parameter of the final optimization problem,\nresulting in a convex goal functional. The obtained formulas for the convex\nconjugate and the proximal operators are simple in structure and can be\nevaluated very efficiently, an important property for practical usability.\nNumerical results with variable $L^p$ TV prior in denoising and tomography\nproblems on synthetic data compare favorably to total generalized variation\n(TGV) and TV.\n",
"title": "Total variation regularization with variable Lebesgue prior"
}
| null | null | null | null | true | null |
1515
| null |
Default
| null | null |
null |
{
"abstract": " We present radio observations at 1.5 GHz of 32 local objects selected to\nreproduce the physical properties of $z\\sim5$ star-forming galaxies. We also\nreport non-detections of five such sources in the sub-millimetre. We find a\nradio-derived star formation rate which is typically half that derived from\nH$\\alpha$ emission for the same objects. These observations support previous\nindications that we are observing galaxies with a young dominant stellar\npopulation, which has not yet established a strong supernova-driven synchrotron\ncontinuum. We stress caution when applying star formation rate calibrations to\nstellar populations younger than 100 Myr. We calibrate the conversions for\nyounger galaxies, which are dominated by a thermal radio emission component. We\nimprove the size constraints for these sources, compared to previous unresolved\nground-based optical observations. Their physical size limits indicate very\nhigh star formation rate surface densities, several orders of magnitude higher\nthan the local galaxy population. In typical nearby galaxies, this would imply\nthe presence of galaxy-wide winds. Given the young stellar populations, it is\nunclear whether a mechanism exists in our sources that can deposit sufficient\nkinetic energy into the interstellar medium to drive such outflows.\n",
"title": "Radio observations confirm young stellar populations in local analogues to $z\\sim5$ Lyman break galaxies"
}
| null | null | null | null | true | null |
1516
| null |
Default
| null | null |
null |
{
"abstract": " Deep neural networks (DNNs) have emerged as key enablers of machine learning.\nApplying larger DNNs to more diverse applications is an important challenge.\nThe computations performed during DNN training and inference are dominated by\noperations on the weight matrices describing the DNN. As DNNs incorporate more\nlayers and more neurons per layers, these weight matrices may be required to be\nsparse because of memory limitations. Sparse DNNs are one possible approach,\nbut the underlying theory is in the early stages of development and presents a\nnumber of challenges, including determining the accuracy of inference and\nselecting nonzero weights for training. Associative array algebra has been\ndeveloped by the big data community to combine and extend database, matrix, and\ngraph/network concepts for use in large, sparse data problems. Applying this\nmathematics to DNNs simplifies the formulation of DNN mathematics and reveals\nthat DNNs are linear over oscillating semirings. This work uses associative\narray DNNs to construct exact solutions and corresponding perturbation models\nto the rectified linear unit (ReLU) DNN equations that can be used to construct\ntest vectors for sparse DNN implementations over various precisions. These\nsolutions can be used for DNN verification, theoretical explorations of DNN\nproperties, and a starting point for the challenge of sparse training.\n",
"title": "Sparse Deep Neural Network Exact Solutions"
}
| null | null | null | null | true | null |
1517
| null |
Default
| null | null |
null |
{
"abstract": " In 1998, R. Gompf defined a homotopy invariant $\\theta_G$ of oriented 2-plane\nfields in 3-manifolds. This invariant is defined for oriented 2-plane fields\n$\\xi$ in a closed oriented 3-manifold $M$ when the first Chern class $c_1(\\xi)$\nis a torsion element of $H^2(M;\\mathbb{Z})$. In this article, we define an\nextension of the Gompf invariant for all compact oriented 3-manifolds with\nboundary and we study its iterated variations under Lagrangian-preserving\nsurgeries. It follows that the extended Gompf invariant is a degree two\ninvariant with respect to a suitable finite type invariant theory.\n",
"title": "Variation formulas for an extended Gompf invariant"
}
| null | null | null | null | true | null |
1518
| null |
Default
| null | null |
null |
{
"abstract": " We enumerate all circulant good matrices with odd orders divisible by 3 up to\norder 70. As a consequence of this we find a previously overlooked set of good\nmatrices of order 27 and a new set of good matrices of order 57. We also find\nthat circulant good matrices do not exist in the orders 51, 63, and 69, thereby\nfinding three new counterexamples to the conjecture that such matrices exist in\nall odd orders. Additionally, we prove a new relationship between the entries\nof good matrices and exploit this relationship in our enumeration algorithm.\nOur method applies the SAT+CAS paradigm of combining computer algebra\nfunctionality with modern SAT solvers to efficiently search large spaces which\nare specified by both algebraic and logical constraints.\n",
"title": "A SAT+CAS Approach to Finding Good Matrices: New Examples and Counterexamples"
}
| null | null | null | null | true | null |
1519
| null |
Default
| null | null |
null |
{
"abstract": " Spectral mapping uses a deep neural network (DNN) to map directly from noisy\nspeech to clean speech. Our previous study found that the performance of\nspectral mapping improves greatly when using helpful cues from an acoustic\nmodel trained on clean speech. The mapper network learns to mimic the input\nfavored by the spectral classifier and cleans the features accordingly. In this\nstudy, we explore two new innovations: we replace a DNN-based spectral mapper\nwith a residual network that is more attuned to the goal of predicting clean\nspeech. We also examine how integrating long term context in the mimic\ncriterion (via wide-residual biLSTM networks) affects the performance of\nspectral mapping compared to DNNs. Our goal is to derive a model that can be\nused as a preprocessor for any recognition system; the features derived from\nour model are passed through the standard Kaldi ASR pipeline and achieve a WER\nof 9.3%, which is the lowest recorded word error rate for CHiME-2 dataset using\nonly feature adaptation.\n",
"title": "An Exploration of Mimic Architectures for Residual Network Based Spectral Mapping"
}
| null | null | null | null | true | null |
1520
| null |
Default
| null | null |
null |
{
"abstract": " Gravitational wave astronomy has set in motion a scientific revolution. To\nfurther enhance the science reach of this emergent field, there is a pressing\nneed to increase the depth and speed of the gravitational wave algorithms that\nhave enabled these groundbreaking discoveries. To contribute to this effort, we\nintroduce Deep Filtering, a new highly scalable method for end-to-end\ntime-series signal processing, based on a system of two deep convolutional\nneural networks, which we designed for classification and regression to rapidly\ndetect and estimate parameters of signals in highly noisy time-series data\nstreams. We demonstrate a novel training scheme with gradually increasing noise\nlevels, and a transfer learning procedure between the two networks. We showcase\nthe application of this method for the detection and parameter estimation of\ngravitational waves from binary black hole mergers. Our results indicate that\nDeep Filtering significantly outperforms conventional machine learning\ntechniques, achieves similar performance compared to matched-filtering while\nbeing several orders of magnitude faster thus allowing real-time processing of\nraw big data with minimal resources. More importantly, Deep Filtering extends\nthe range of gravitational wave signals that can be detected with ground-based\ngravitational wave detectors. This framework leverages recent advances in\nartificial intelligence algorithms and emerging hardware architectures, such as\ndeep-learning-optimized GPUs, to facilitate real-time searches of gravitational\nwave sources and their electromagnetic and astro-particle counterparts.\n",
"title": "Deep Neural Networks to Enable Real-time Multimessenger Astrophysics"
}
| null | null | null | null | true | null |
1521
| null |
Default
| null | null |
null |
{
"abstract": " We present a stochastic CA modelling approach of corrosion based on spatially\nseparated electrochemical half-reactions, diffusion, acido-basic neutralization\nin solution and passive properties of the oxide layers. Starting from different\ninitial conditions, a single framework allows one to describe generalised\ncorrosion, localised corrosion, reactive and passive surfaces, including\noccluded corrosion phenomena as well. Spontaneous spatial separation of anodic\nand cathodic zones is associated with bare metal and passivated metal on the\nsurface. This separation is also related to local acidification of the\nsolution. This spontaneous change is associated with a much faster corrosion\nrate. Material morphology is closely related to corrosion kinetics, which can\nbe used for technological applications.\n",
"title": "Contribution of cellular automata to the understanding of corrosion phenomena"
}
| null | null |
[
"Physics"
] | null | true | null |
1522
| null |
Validated
| null | null |
null |
{
"abstract": " We give a bordered extension of involutive HF-hat and use it to give an\nalgorithm to compute involutive HF-hat for general 3-manifolds. We also explain\nhow the mapping class group action on HF-hat can be computed using bordered\nFloer homology. As applications, we prove that involutive HF-hat satisfies a\nsurgery exact triangle and compute HFI-hat of the branched double covers of all\n10-crossing knots.\n",
"title": "Involutive bordered Floer homology"
}
| null | null | null | null | true | null |
1523
| null |
Default
| null | null |
null |
{
"abstract": " This comprehensive study of comet C/1995 O1 focuses first on investigating\nits orbital motion over a period of 17.6 yr (1993-2010). The comet is suggested\nto have approached Jupiter to 0.005 AU on -2251 November 7, in general\nconformity with Marsden's (1999) proposal of a Jovian encounter nearly 4300 yr\nago. The variations of sizable nongravitational effects with heliocentric\ndistance correlate with the evolution of outgassing, asymmetric relative to\nperihelion. The future orbital period will shorten to ~1000 yr because of\norbital-cascade resonance effects. We find that the sublimation curves of\nparent molecules are fitted with the type of a law used for the\nnongravitational acceleration, determine their orbit-integrated mass loss, and\nconclude that the share of water ice was at most 57%, and possibly less than\n50%, of the total outgassed mass. Even though organic parent molecules (many\nstill unidentified) had very low abundances relative to water individually,\ntheir high molar mass and sheer number made them, summarily, important\npotential mass contributors to the total production of gas. The mass loss of\ndust per orbit exceeded that of water ice by a factor of ~12, a dust loading\nhigh enough to imply a major role for heavy organic molecules of low volatility\nin accelerating the minuscule dust particles in the expanding halos to terminal\nvelocities as high as 0.7 km s^{-1}. In Part II, the comet's nucleus will be\nmodeled as a compact cluster of massive fragments to conform to the integrated\nnongravitational effect.\n",
"title": "Orbital Evolution, Activity, and Mass Loss of Comet C/1995 O1 (Hale-Bopp). I. Close Encounter with Jupiter in Third Millennium BCE and Effects of Outgassing on the Comet's Motion and Physical Properties"
}
| null | null | null | null | true | null |
1524
| null |
Default
| null | null |
null |
{
"abstract": " In this paper, we investigate property testing whether or not a degree d\nmultivariate poly- nomial is a sum of squares or is far from a sum of squares.\nWe show that if we require that the property tester always accepts YES\ninstances and uses random samples, $n^{\\Omega(d)}$ samples are required, which\nis not much fewer than it would take to completely determine the polynomial. To\nprove this lower bound, we show that with high probability, multivariate\npolynomial in- terpolation matches arbitrary values on random points and the\nresulting polynomial has small norm. We then consider a particular polynomial\nwhich is non-negative yet not a sum of squares and use pseudo-expectation\nvalues to prove it is far from being a sum of squares.\n",
"title": "A Note on Property Testing Sum of Squares and Multivariate Polynomial Interpolation"
}
| null | null | null | null | true | null |
1525
| null |
Default
| null | null |
null |
{
"abstract": " The Cauchy-Rayleigh (CR) distribution has been successfully used to describe\nasymmetric and heavy-tail events from radar imagery. Employing such model to\ndescribe lifetime data may then seem attractive, but some drawbacks arise: its\nprobability density function does not cover non-modal behavior as well as the\nCR hazard rate function (hrf) assumes only one form. To outperform this\ndifficulty, we introduce an extended CR model, called exponentiated\nCauchy-Rayleigh (ECR) distribution. This model has two parameters and hrf with\ndecreasing, decreasing-increasing-decreasing and upside-down bathtub forms. In\nthis paper, several closed-form mathematical expressions for the ECR model are\nproposed: median, mode, probability weighted, log-, incomplete and order\nstatistic moments and Fisher information matrix. We propose three estimation\nprocedures for the ECR parameters: maximum likelihood (ML), bias corrected ML\nand percentile-based methods. A simulation study is done to assess the\nperformance of estimators. An application to survival time of heart problem\npatients illustrates the usefulness of the ECR model. Results point out that\nthe ECR distribution may outperform classical lifetime models, such as the\ngamma, Birnbaun-Saunders, Weibull and log-normal laws, before heavy-tail data.\n",
"title": "Closed-form mathematical expressions for the exponentiated Cauchy-Rayleigh distribution"
}
| null | null | null | null | true | null |
1526
| null |
Default
| null | null |
null |
{
"abstract": " In Paris Basin, we evaluate how HTEM data complement the usual borehole,\ngeological and deep seismic data used for modelling aquifer geometries. With\nthese traditional data, depths between ca. 50 to 300m are often relatively\nill-constrained, as most boreholes lie within the first tens of meters of the\nunderground and petroleum seismic is blind shallower than ca. 300m. We have\nfully reprocessed and re-inverted 540km of flight lines of a SkyTEM survey of\n2009, acquired on a 40x12km zone with 400m line spacing. The resistivity model\nis first \"calibrated\" with respect to ca. 50 boreholes available on the study\narea. Overall, the correlation between EM resistivity models and the\nhydrogeological horizons clearly shows that the geological units in which the\naquifers are developed almost systematically correspond to relative increase of\nresistivity, whatever the \"background\" resistivity environment and the\nlithology of the aquifer. In 3D Geomodeller software, this allows interpreting\n11 aquifer/aquitar layers along the flight lines and then jointly interpolating\nthem in 3D along with the borehole data. The resulting model displays 3D\naquifer geometries consistent with the SIGES \"reference\" regional\nhydrogeological model and improves it in between the boreholes and on the\n50-300m depth range.\n",
"title": "HTEM data improve 3D modelling of aquifers in Paris Basin, France"
}
| null | null |
[
"Physics"
] | null | true | null |
1527
| null |
Validated
| null | null |
null |
{
"abstract": " The methods to access large relational databases in a distributed system are\nwell established: the relational query language SQL often serves as a language\nfor data access and manipulation, and in addition public interfaces are exposed\nusing communication protocols like REST. Similarly to REST, GraphQL is the\nquery protocol of an application layer developed by Facebook. It provides a\nunified interface between the client and the server for data fetching and\nmanipulation. Using GraphQL's type system, it is possible to specify data\nhandling of various sources and to combine, e.g., relational with NoSQL\ndatabases. In contrast to REST, GraphQL provides a single API endpoint and\nsupports flexible queries over linked data.\nGraphQL can also be used as an interface for deductive databases. In this\npaper, we give an introduction of GraphQL and a comparison to REST. Using\nlanguage features recently added to SWI-Prolog 7, we have developed the Prolog\nlibrary GraphQL.pl, which implements the GraphQL type system and query syntax\nas a domain-specific language with the help of definite clause grammars (DCG),\nquasi quotations, and dicts. Using our library, the type system created for a\ndeductive database can be validated, while the query system provides a unified\ninterface for data access and introspection.\n",
"title": "Implementing GraphQL as a Query Language for Deductive Databases in SWI-Prolog Using DCGs, Quasi Quotations, and Dicts"
}
| null | null | null | null | true | null |
1528
| null |
Default
| null | null |
null |
{
"abstract": " This paper proposes a novel adaptive algorithm for the automated short-term\ntrading of financial instrument. The algorithm adopts a semantic sentiment\nanalysis technique to inspect the Twitter posts and to use them to predict the\nbehaviour of the stock market. Indeed, the algorithm is specifically developed\nto take advantage of both the sentiment and the past values of a certain\nfinancial instrument in order to choose the best investment decision. This\nallows the algorithm to ensure the maximization of the obtainable profits by\ntrading on the stock market. We have conducted an investment simulation and\ncompared the performance of our proposed with a well-known benchmark (DJTATO\nindex) and the optimal results, in which an investor knows in advance the\nfuture price of a product. The result shows that our approach outperforms the\nbenchmark and achieves the performance score close to the optimal result.\n",
"title": "Social Network based Short-Term Stock Trading System"
}
| null | null | null | null | true | null |
1529
| null |
Default
| null | null |
null |
{
"abstract": " The multi-indexed orthogonal polynomials (the Meixner, little $q$-Jacobi\n(Laguerre), ($q$-)Racah, Wilson, Askey-Wilson types) satisfying second order\ndifference equations were constructed in discrete quantum mechanics. They are\npolynomials in the sinusoidal coordinates $\\eta(x)$ ($x$ is the coordinate of\nquantum system) and expressed in terms of the Casorati determinants whose\nmatrix elements are functions of $x$ at various points. By using shape\ninvariance properties, we derive various equivalent determinant expressions,\nespecially those whose matrix elements are functions of the same point $x$.\nExcept for the ($q$-)Racah case, they can be expressed in terms of $\\eta$ only,\nwithout explicit $x$-dependence.\n",
"title": "New Determinant Expressions of the Multi-indexed Orthogonal Polynomials in Discrete Quantum Mechanics"
}
| null | null |
[
"Physics",
"Mathematics"
] | null | true | null |
1530
| null |
Validated
| null | null |
null |
{
"abstract": " We describe an approach to understand the peculiar and counterintuitive\ngeneralization properties of deep neural networks. The approach involves going\nbeyond worst-case theoretical capacity control frameworks that have been\npopular in machine learning in recent years to revisit old ideas in the\nstatistical mechanics of neural networks. Within this approach, we present a\nprototypical Very Simple Deep Learning (VSDL) model, whose behavior is\ncontrolled by two control parameters, one describing an effective amount of\ndata, or load, on the network (that decreases when noise is added to the\ninput), and one with an effective temperature interpretation (that increases\nwhen algorithms are early stopped). Using this model, we describe how a very\nsimple application of ideas from the statistical mechanics theory of\ngeneralization provides a strong qualitative description of recently-observed\nempirical results regarding the inability of deep neural networks not to\noverfit training data, discontinuous learning and sharp transitions in the\ngeneralization properties of learning algorithms, etc.\n",
"title": "Rethinking generalization requires revisiting old ideas: statistical mechanics approaches and complex learning behavior"
}
| null | null | null | null | true | null |
1531
| null |
Default
| null | null |
null |
{
"abstract": " Location-based augmented reality games have entered the mainstream with the\nnearly overnight success of Niantic's Pokémon Go. Unlike traditional video\ngames, the fact that players of such games carry out actions in the external,\nphysical world to accomplish in-game objectives means that the large-scale\nadoption of such games motivate people, en masse, to do things and go places\nthey would not have otherwise done in unprecedented ways. The social\nimplications of such mass-mobilisation of individual players are, in general,\ndifficult to anticipate or characterise, even for the short-term. In this work,\nwe focus on disaster relief, and the short- and long-term implications that a\nproliferation of AR games like Pokémon Go, may have in disaster-prone regions\nof the world. We take a distributed cognition approach and focus on one natural\ndisaster-prone region of New Zealand, the city of Wellington.\n",
"title": "Towards an Understanding of the Effects of Augmented Reality Games on Disaster Management"
}
| null | null | null | null | true | null |
1532
| null |
Default
| null | null |
null |
{
"abstract": " Most interesting proofs in mathematics contain an inductive argument which\nrequires an extension of the LK-calculus to formalize. The most commonly used\ncalculi for induction contain a separate rule or axiom which reduces the valid\nproof theoretic properties of the calculus. To the best of our knowledge, there\nare no such calculi which allow cut-elimination to a normal form with the\nsubformula property, i.e. every formula occurring in the proof is a subformula\nof the end sequent. Proof schemata are a variant of LK-proofs able to simulate\ninduction by linking proofs together. There exists a schematic normal form\nwhich has comparable proof theoretic behaviour to normal forms with the\nsubformula property. However, a calculus for the construction of proof schemata\ndoes not exist. In this paper, we introduce a calculus for proof schemata and\nprove soundness and completeness with respect to a fragment of the inductive\narguments formalizable in Peano arithmetic.\n",
"title": "Integrating a Global Induction Mechanism into a Sequent Calculus"
}
| null | null | null | null | true | null |
1533
| null |
Default
| null | null |
null |
{
"abstract": " A near pristine atomic cooling halo close to a star forming galaxy offers a\nnatural pathway for forming massive direct collapse black hole (DCBH) seeds\nwhich could be the progenitors of the $z>6$ redshift quasars. The close\nproximity of the haloes enables a sufficient Lyman-Werner flux to effectively\ndissociate H$_2$ in the core of the atomic cooling halo. A mild background may\nalso be required to delay star formation in the atomic cooling halo, often\nattributed to distant background galaxies. In this letter we investigate the\nimpact of metal enrichment from both the background galaxies and the close star\nforming galaxy under extremely unfavourable conditions such as instantaneous\nmetal mixing. We find that within the time window of DCBH formation, the level\nof enrichment never exceeds the critical threshold (Z$_{cr} \\sim 1 \\times\n10^{-5} \\ \\rm Z_{\\odot})$, and attains a maximum metallicity of Z $\\sim 2\n\\times 10^{-6} \\ \\rm Z_{\\odot}$. As the system evolves, the metallicity\neventually exceeds the critical threshold, long after the DCBH has formed.\n",
"title": "An analytic resolution to the competition between Lyman-Werner radiation and metal winds in direct collapse black hole hosts"
}
| null | null |
[
"Physics"
] | null | true | null |
1534
| null |
Validated
| null | null |
null |
{
"abstract": " Graphs are commonly used to encode relationships among entities, yet, their\nabstractness makes them incredibly difficult to analyze. Node-link diagrams are\na popular method for drawing graphs. Classical techniques for the node-link\ndiagrams include various layout methods that rely on derived information to\nposition points, which often lack interactive exploration functionalities; and\nforce-directed layouts, which ignore global structures of the graph. This paper\naddresses the graph drawing challenge by leveraging topological features of a\ngraph as derived information for interactive graph drawing. We first discuss\nextracting topological features from a graph using persistent homology. We then\nintroduce an interactive persistence barcodes to study the substructures of a\nforce-directed graph layout; in particular, we add contracting and repulsing\nforces guided by the 0-dimensional persistent homology features. Finally, we\ndemonstrate the utility of our approach across three datasets.\n",
"title": "Driving Interactive Graph Exploration Using 0-Dimensional Persistent Homology Features"
}
| null | null |
[
"Computer Science"
] | null | true | null |
1535
| null |
Validated
| null | null |
null |
{
"abstract": " Due to economic globalization, each country's economic law, including tax\nlaws and tax treaties, has been forced to work as a single network. However,\neach jurisdiction (country or region) has not made its economic law under the\nassumption that its law functions as an element of one network, so it has\nbrought unexpected results. We thought that the results are exactly\ninternational tax avoidance. To contribute to the solution of international tax\navoidance, we tried to investigate which part of the network is vulnerable.\nSpecifically, focusing on treaty shopping, which is one of international tax\navoidance methods, we attempt to identified which jurisdiction are likely to be\nused for treaty shopping from tax liabilities and the relationship between\njurisdictions which are likely to be used for treaty shopping and others. For\nthat purpose, based on withholding tax rates imposed on dividends, interest,\nand royalties by jurisdictions, we produced weighted multiple directed graphs,\ncomputed the centralities and detected the communities. As a result, we\nclarified the jurisdictions that are likely to be used for treaty shopping and\npointed out that there are community structures. The results of this study\nsuggested that fewer jurisdictions need to introduce more regulations for\nprevention of treaty abuse worldwide.\n",
"title": "Identification of Conduit Countries and Community Structures in the Withholding Tax Networks"
}
| null | null | null | null | true | null |
1536
| null |
Default
| null | null |
null |
{
"abstract": " In this work we compare different batch construction methods for mini-batch\ntraining of recurrent neural networks. While popular implementations like\nTensorFlow and MXNet suggest a bucketing approach to improve the\nparallelization capabilities of the recurrent training process, we propose a\nsimple ordering strategy that arranges the training sequences in a stochastic\nalternatingly sorted way. We compare our method to sequence bucketing as well\nas various other batch construction strategies on the CHiME-4 noisy speech\nrecognition corpus. The experiments show that our alternated sorting approach\nis able to compete both in training time and recognition performance while\nbeing conceptually simpler to implement.\n",
"title": "A comprehensive study of batch construction strategies for recurrent neural networks in MXNet"
}
| null | null | null | null | true | null |
1537
| null |
Default
| null | null |
null |
{
"abstract": " In the Drury-Arveson space, we consider the subspace of functions whose\nTaylor coefficients are supported in the complement of a set\n$Y\\subset\\mathbb{N}^d$ with the property that $Y+e_j\\subset Y$ for all\n$j=1,\\dots,d$. This is an easy example of shift-invariant subspace, which can\nbe considered as a RKHS in is own right, with a kernel that can be explicitely\ncalculated. Moreover, every such a space can be seen as an intersection of\nkernels of Hankel operators, whose symbols can be explicity calcuated as well.\nFinally, this is the right space on which Drury's inequality can be optimally\nadapted to a sub-family of the commuting and contractive operators originally\nconsidered by Drury.\n",
"title": "On a class of shift-invariant subspaces of the Drury-Arveson space"
}
| null | null | null | null | true | null |
1538
| null |
Default
| null | null |
null |
{
"abstract": " In this paper we present the results of a $\\sim$5 hour airborne gamma-ray\nsurvey carried out over the Tyrrhenian sea in which the height range (77-3066)\nm has been investigated. Gamma-ray spectroscopy measurements have been\nperformed by using the AGRS_16L detector, a module of four 4L NaI(Tl) crystals.\nThe experimental setup was mounted on the Radgyro, a prototype aircraft\ndesigned for multisensorial acquisitions in the field of proximal remote\nsensing. By acquiring high-statistics spectra over the sea (i.e. in the absence\nof signals having geological origin) and by spanning a wide spectrum of\naltitudes it has been possible to split the measured count rate into a constant\naircraft component and a cosmic component exponentially increasing with\nincreasing height. The monitoring of the count rate having pure cosmic origin\nin the >3 MeV energy region allowed to infer the background count rates in the\n$^{40}$K, $^{214}$Bi and $^{208}$Tl photopeaks, which need to be subtracted in\nprocessing airborne gamma-ray data in order to estimate the potassium, uranium\nand thorium abundances in the ground. Moreover, a calibration procedure has\nbeen carried out by implementing the CARI-6P and EXPACS dosimetry tools,\naccording to which the annual cosmic effective dose to human population has\nbeen linearly related to the measured cosmic count rates.\n",
"title": "Airborne gamma-ray spectroscopy for modeling cosmic radiation and effective dose in the lower atmosphere"
}
| null | null | null | null | true | null |
1539
| null |
Default
| null | null |
null |
{
"abstract": " A new search strategy for the detection of the elusive dark matter (DM) axion\nis proposed. The idea is based on streaming DM axions, whose flux might get\ntemporally enormously enhanced due to gravitational lensing. This can happen if\nthe Sun or some planet (including the Moon) is found along the direction of a\nDM stream propagating towards the Earth location. The experimental requirements\nto the axion haloscope are a wide-band performance combined with a fast axion\nrest mass scanning mode, which are feasible. Once both conditions have been\nimplemented in a haloscope, the axion search can continue parasitically almost\nas before. Interestingly, some new DM axion detectors are operating wide-band\nby default. In order not to miss the actually unpredictable timing of a\npotential short duration signal, a network of co-ordinated axion antennae is\nrequired, preferentially distributed world-wide. The reasoning presented here\nfor the axions applies to some degree also to any other DM candidates like the\nWIMPs.\n",
"title": "Search for axions in streaming dark matter"
}
| null | null | null | null | true | null |
1540
| null |
Default
| null | null |
null |
{
"abstract": " We present an algorithm that computes the product of two n-bit integers in\nO(n log n (4\\sqrt 2)^{log^* n}) bit operations. Previously, the best known\nbound was O(n log n 6^{log^* n}). We also prove that for a fixed prime p,\npolynomials in F_p[X] of degree n may be multiplied in O(n log n 4^{log^* n})\nbit operations; the previous best bound was O(n log n 8^{log^* n}).\n",
"title": "Faster integer and polynomial multiplication using cyclotomic coefficient rings"
}
| null | null | null | null | true | null |
1541
| null |
Default
| null | null |
null |
{
"abstract": " We study the stochastic multi-armed bandit (MAB) problem in the presence of\nside-observations across actions that occur as a result of an underlying\nnetwork structure. In our model, a bipartite graph captures the relationship\nbetween actions and a common set of unknowns such that choosing an action\nreveals observations for the unknowns that it is connected to. This models a\ncommon scenario in online social networks where users respond to their friends'\nactivity, thus providing side information about each other's preferences. Our\ncontributions are as follows: 1) We derive an asymptotic lower bound (with\nrespect to time) as a function of the bi-partite network structure on the\nregret of any uniformly good policy that achieves the maximum long-term average\nreward. 2) We propose two policies - a randomized policy; and a policy based on\nthe well-known upper confidence bound (UCB) policies - both of which explore\neach action at a rate that is a function of its network position. We show,\nunder mild assumptions, that these policies achieve the asymptotic lower bound\non the regret up to a multiplicative factor, independent of the network\nstructure. Finally, we use numerical examples on a real-world social network\nand a routing example network to demonstrate the benefits obtained by our\npolicies over other existing policies.\n",
"title": "Reward Maximization Under Uncertainty: Leveraging Side-Observations on Networks"
}
| null | null | null | null | true | null |
1542
| null |
Default
| null | null |
null |
{
"abstract": " The goal of unbounded program verification is to discover an inductive\ninvariant that safely over-approximates all possible program behaviors.\nFunctional languages featuring higher order and recursive functions become more\npopular due to the domain-specific needs of big data analytics, web, and\nsecurity. We present Rosette/Unbound, the first program verifier for Racket\nexploiting the automated constrained Horn solver on its backend. One of the key\nfeatures of Rosette/Unbound is the ability to synchronize recursive\ncomputations over the same inputs allowing to verify programs that iterate over\nunbounded data streams multiple times. Rosette/Unbound is successfully\nevaluated on a set of non-trivial recursive and higher order functional\nprograms.\n",
"title": "Verifying Safety of Functional Programs with Rosette/Unbound"
}
| null | null | null | null | true | null |
1543
| null |
Default
| null | null |
null |
{
"abstract": " We present a simple proof of the fact that the base (and independence)\npolytope of a rank $n$ regular matroid over $m$ elements has an extension\ncomplexity $O(mn)$.\n",
"title": "Extended Formulations for Polytopes of Regular Matroids"
}
| null | null | null | null | true | null |
1544
| null |
Default
| null | null |
null |
{
"abstract": " Modern multiscale type segmentation methods are known to detect multiple\nchange-points with high statistical accuracy, while allowing for fast\ncomputation. Underpinning theory has been developed mainly for models that\nassume the signal as a piecewise constant function. In this paper this will be\nextended to certain function classes beyond such step functions in a\nnonparametric regression setting, revealing certain multiscale segmentation\nmethods as robust to deviation from such piecewise constant functions. Our main\nfinding is the adaptation over such function classes for a universal\nthresholding, which includes bounded variation functions, and (piecewise)\nHölder functions of smoothness order $ 0 < \\alpha \\le1$ as special cases.\nFrom this we derive statistical guarantees on feature detection in terms of\njumps and modes. Another key finding is that these multiscale segmentation\nmethods perform nearly (up to a log-factor) as well as the oracle piecewise\nconstant segmentation estimator (with known jump locations), and the best\npiecewise constant approximants of the (unknown) true signal. Theoretical\nfindings are examined by various numerical simulations.\n",
"title": "Multiscale Change-point Segmentation: Beyond Step Functions"
}
| null | null | null | null | true | null |
1545
| null |
Default
| null | null |
null |
{
"abstract": " For the architecture community, reasonable simulation time is a strong\nrequirement in addition to performance data accuracy. However, emerging big\ndata and AI workloads are too huge at binary size level and prohibitively\nexpensive to run on cycle-accurate simulators. The concept of data motif, which\nis identified as a class of units of computation performed on initial or\nintermediate data, is the first step towards building proxy benchmark to mimic\nthe real-world big data and AI workloads. However, there is no practical way to\nconstruct a proxy benchmark based on the data motifs to help simulation-based\nresearch. In this paper, we embark on a study to bridge the gap between data\nmotif and a practical proxy benchmark. We propose a data motif-based proxy\nbenchmark generating methodology by means of machine learning method, which\ncombine data motifs with different weights to mimic the big data and AI\nworkloads. Furthermore, we implement various data motifs using light-weight\nstacks and apply the methodology to five real-world workloads to construct a\nsuite of proxy benchmarks, considering the data types, patterns, and\ndistributions. The evaluation results show that our proxy benchmarks shorten\nthe execution time by 100s times on real systems while maintaining the average\nsystem and micro-architecture performance data accuracy above 90%, even\nchanging the input data sets or cluster configurations. Moreover, the generated\nproxy benchmarks reflect consistent performance trends across different\narchitectures. To facilitate the community, we will release the proxy\nbenchmarks on the project homepage this http URL.\n",
"title": "Data Motif-based Proxy Benchmarks for Big Data and AI Workloads"
}
| null | null | null | null | true | null |
1546
| null |
Default
| null | null |
null |
{
"abstract": " Neighborhood regression has been a successful approach in graphical and\nstructural equation modeling, with applications to learning undirected and\ndirected graphical models. We extend these ideas by defining and studying an\nalgebraic structure called the neighborhood lattice based on a generalized\nnotion of neighborhood regression. We show that this algebraic structure has\nthe potential to provide an economic encoding of all conditional independence\nstatements in a Gaussian distribution (or conditional uncorrelatedness in\ngeneral), even in the cases where no graphical model exists that could\n\"perfectly\" encode all such statements. We study the computational complexity\nof computing these structures and show that under a sparsity assumption, they\ncan be computed in polynomial time, even in the absence of the assumption of\nperfectness to a graph. On the other hand, assuming perfectness, we show how\nthese neighborhood lattices may be \"graphically\" computed using the separation\nproperties of the so-called partial correlation graph. We also draw connections\nwith directed acyclic graphical models and Bayesian networks. We derive these\nresults using an abstract generalization of partial uncorrelatedness, called\npartial orthogonality, which allows us to use algebraic properties of\nprojection operators on Hilbert spaces to significantly simplify and extend\nexisting ideas and arguments. Consequently, our results apply to a wide range\nof random objects and data structures, such as random vectors, data matrices,\nand functions.\n",
"title": "The neighborhood lattice for encoding partial correlations in a Hilbert space"
}
| null | null | null | null | true | null |
1547
| null |
Default
| null | null |
null |
{
"abstract": " Pseudo-random sequences with good statistical property, such as low\nautocorrelation, high linear complexity and large 2-adic complexity, have been\napplied in stream cipher. In general, it is difficult to give both the linear\ncomplexity and 2-adic complexity of a periodic binary sequence. Cai and Ding\n\\cite{Cai Ying} gave a class of sequences with almost optimal autocorrelation\nby constructing almost difference sets. Wang \\cite{Wang Qi} proved that one\ntype of those sequences by Cai and Ding has large linear complexity. Sun et al.\n\\cite{Sun Yuhua} showed that another type of sequences by Cai and Ding has also\nlarge linear complexity. Additionally, Sun et al. also generalized the\nconstruction by Cai and Ding using $d$-form function with difference-balanced\nproperty. In this paper, we first give the detailed autocorrelation\ndistribution of the sequences was generalized from Cai and Ding \\cite{Cai Ying}\nby Sun et al. \\cite{Sun Yuhua}. Then, inspired by the method of Hu \\cite{Hu\nHonggang}, we analyse their 2-adic complexity and give a lower bound on the\n2-adic complexity of these sequences. Our result show that the 2-adic\ncomplexity of these sequences is at least $N-\\mathrm{log}_2\\sqrt{N+1}$ and that\nit reach $N-1$ in many cases, which are large enough to resist the rational\napproximation algorithm (RAA) for feedback with carry shift registers (FCSRs).\n",
"title": "The 2-adic complexity of a class of binary sequences with almost optimal autocorrelation"
}
| null | null | null | null | true | null |
1548
| null |
Default
| null | null |
null |
{
"abstract": " A bilevel hierarchical clustering model is commonly used in designing optimal\nmulticast networks. In this paper, we consider two different formulations of\nthe bilevel hierarchical clustering problem, a discrete optimization problem\nwhich can be shown to be NP-hard. Our approach is to reformulate the problem as\na continuous optimization problem by making some relaxations on the\ndiscreteness conditions. Then Nesterov's smoothing technique and a numerical\nalgorithm for minimizing differences of convex functions called the DCA are\napplied to cope with the nonsmoothness and nonconvexity of the problem.\nNumerical examples are provided to illustrate our method.\n",
"title": "Nesterov's Smoothing Technique and Minimizing Differences of Convex Functions for Hierarchical Clustering"
}
| null | null | null | null | true | null |
1549
| null |
Default
| null | null |
null |
{
"abstract": " Generalized Lambda-semiflows are an abstraction of semiflows with\nnon-periodic solutions, for which there may be more than one solution\ncorresponding to given initial data. A select class of solutions to generalized\nLambda-semiflows is introduced. It is proved that such minimal solutions are\nunique corresponding to given ranges and generate all other solutions by time\nreparametrization. Special qualities of minimal solutions are shown. The\nconcept of minimal solutions is applied to gradient flows in metric spaces and\ngeneralized semiflows. Generalized semiflows have been introduced by Ball.\n",
"title": "Minimal solutions to generalized Lambda-semiflows and gradient flows in metric spaces"
}
| null | null | null | null | true | null |
1550
| null |
Default
| null | null |
null |
{
"abstract": " We consider variants of trust-region and cubic regularization methods for\nnon-convex optimization, in which the Hessian matrix is approximated. Under\nmild conditions on the inexact Hessian, and using approximate solution of the\ncorresponding sub-problems, we provide iteration complexity to achieve $\n\\epsilon $-approximate second-order optimality which have shown to be tight.\nOur Hessian approximation conditions constitute a major relaxation over the\nexisting ones in the literature. Consequently, we are able to show that such\nmild conditions allow for the construction of the approximate Hessian through\nvarious random sampling methods. In this light, we consider the canonical\nproblem of finite-sum minimization, provide appropriate uniform and non-uniform\nsub-sampling strategies to construct such Hessian approximations, and obtain\noptimal iteration complexity for the corresponding sub-sampled trust-region and\ncubic regularization methods.\n",
"title": "Newton-Type Methods for Non-Convex Optimization Under Inexact Hessian Information"
}
| null | null | null | null | true | null |
1551
| null |
Default
| null | null |
null |
{
"abstract": " We analyze the space of differentiable functions on a quad-mesh $\\cM$, which\nare composed of 4-split spline macro-patch elements on each quadrangular face.\nWe describe explicit transition maps across shared edges, that satisfy\nconditions which ensure that the space of differentiable functions is ample on\na quad-mesh of arbitrary topology. These transition maps define a finite\ndimensional vector space of $G^{1}$ spline functions of bi-degree $\\le (k,k)$\non each quadrangular face of $\\cM$. We determine the dimension of this space of\n$G^{1}$ spline functions for $k$ big enough and provide explicit constructions\nof basis functions attached respectively to vertices, edges and faces. This\nconstruction requires the analysis of the module of syzygies of univariate\nb-spline functions with b-spline function coefficients. New results on their\ngenerators and dimensions are provided. Examples of bases of $G^{1}$ splines of\nsmall degree for simple topological surfaces are detailed and illustrated by\nparametric surface constructions.\n",
"title": "$G 1$-smooth splines on quad meshes with 4-split macro-patch elements"
}
| null | null | null | null | true | null |
1552
| null |
Default
| null | null |
null |
{
"abstract": " Bangla handwriting recognition is becoming a very important issue nowadays.\nIt is potentially a very important task specially for Bangla speaking\npopulation of Bangladesh and West Bengal. By keeping that in our mind we are\nintroducing a comprehensive Bangla handwritten character dataset named\nBanglaLekha-Isolated. This dataset contains Bangla handwritten numerals, basic\ncharacters and compound characters. This dataset was collected from multiple\ngeographical location within Bangladesh and includes sample collected from a\nvariety of aged groups. This dataset can also be used for other classification\nproblems i.e: gender, age, district. This is the largest dataset on Bangla\nhandwritten characters yet.\n",
"title": "BanglaLekha-Isolated: A Comprehensive Bangla Handwritten Character Dataset"
}
| null | null | null | null | true | null |
1553
| null |
Default
| null | null |
null |
{
"abstract": " In this article, we continue the study of the problem of $L^p$-boundedness of\nthe maximal operator $M$ associated to averages along isotropic dilates of a\ngiven, smooth hypersurface $S$ of finite type in 3-dimensional Euclidean space.\nAn essentially complete answer to this problem had been given about seven years\nago by the last named two authors in joint work with M. Kempe for the case\nwhere the height h of the given surface is at least two. In the present\narticle, we turn to the case $h<2.$ More precisely, in this Part I, we study\nthe case where $h<2,$ assuming that $S$ is contained in a sufficiently small\nneighborhood of a given point $x^0\\in S$ at which both principal curvatures of\n$S$ vanish. Under these assumptions and a natural transversality assumption, we\nshow that, as in the case where $h\\ge 2,$ the critical Lebesgue exponent for\nthe boundedness of $M$ remains to be $p_c=h,$ even though the proof of this\nresult turns out to require new methods, some of which are inspired by the more\nrecent work by the last named two authors on Fourier restriction to S. Results\non the case where $h<2$ and exactly one principal curvature of $S$ does not\nvanish at $x^0$ will appear elsewhere.\n",
"title": "Estimates for maximal functions associated to hypersurfaces in $\\Bbb R^3$ with height $h<2:$ Part I"
}
| null | null |
[
"Mathematics"
] | null | true | null |
1554
| null |
Validated
| null | null |
null |
{
"abstract": " In recent years, MEMS inertial sensors (3D accelerometers and 3D gyroscopes)\nhave become widely available due to their small size and low cost. Inertial\nsensor measurements are obtained at high sampling rates and can be integrated\nto obtain position and orientation information. These estimates are accurate on\na short time scale, but suffer from integration drift over longer time scales.\nTo overcome this issue, inertial sensors are typically combined with additional\nsensors and models. In this tutorial we focus on the signal processing aspects\nof position and orientation estimation using inertial sensors. We discuss\ndifferent modeling choices and a selected number of important algorithms. The\nalgorithms include optimization-based smoothing and filtering as well as\ncomputationally cheaper extended Kalman filter and complementary filter\nimplementations. The quality of their estimates is illustrated using both\nexperimental and simulated data.\n",
"title": "Using Inertial Sensors for Position and Orientation Estimation"
}
| null | null | null | null | true | null |
1555
| null |
Default
| null | null |
null |
{
"abstract": " In classical mechanics, a nonrelativistic particle constrained on an\n$(N-1)$-dimensional curved hypersurface embedded in an $N$-dimensional flat\nspace experiences the centripetal force only. In quantum mechanics, the\nsituation is totally different due to the presence of the geometric potential.\nWe demonstrate that the motion of the quantum particle is \"driven\" not only by\nthe centripetal force, but also by a curvature induced force proportional to\nthe Laplacian of the mean curvature, which is fundamental in interface physics,\ncausing curvature driven interface evolution.\n",
"title": "Heisenberg equation for a nonrelativistic particle on a hypersurface: from the centripetal force to a curvature induced force"
}
| null | null | null | null | true | null |
1556
| null |
Default
| null | null |
null |
{
"abstract": " A new Bayesian framework is presented that can constrain projections of\nfuture climate using historical observations by exploiting robust estimates of\nemergent relationships between multiple climate models. We argue that emergent\nrelationships can be interpreted as constraints on model inadequacy, but that\nprojections may be biased if we do not account for internal variability in\nclimate model projections. We extend the previously proposed coexchangeable\nframework to account for natural variability in the Earth system and internal\nvariability simulated by climate models. A detailed theoretical comparison with\nprevious multi-model projection frameworks is provided.\nThe proposed framework is applied to projecting surface temperature in the\nArctic at the end of the 21st century. A subset of available climate models are\nselected in order to satisfy the assumptions of the framework. All available\ninitial condition runs from each model are utilized in order to maximize the\nutility of the data. Projected temperatures in some regions are more than 2C\nlower when constrained by historical observations. The uncertainty about the\nclimate response is reduced by up to 30% where strong constraints exist.\n",
"title": "On constraining projections of future climate using observations and simulations from multiple climate models"
}
| null | null |
[
"Statistics"
] | null | true | null |
1557
| null |
Validated
| null | null |
null |
{
"abstract": " Molecular interactions have widely been modelled as networks. The local\nwiring patterns around molecules in molecular networks are linked with their\nbiological functions. However, networks model only pairwise interactions\nbetween molecules and cannot explicitly and directly capture the higher order\nmolecular organisation, such as protein complexes and pathways. Hence, we ask\nif hypergraphs (hypernetworks), which directly capture entire complexes and\npathways along with protein-protein interactions (PPIs), carry additional\nfunctional information beyond what can be uncovered from networks of pairwise\nmolecular interactions. The mathematical formalism of a hypergraph has long\nbeen known, but not often used in studying molecular networks due to the lack\nof sophisticated algorithms for mining the underlying biological information\nhidden in the wiring patterns of molecular systems modelled as hypernetworks.\nWe propose a new, multi-scale, protein interaction hypernetwork model that\nutilizes hypergraphs to capture different scales of protein organization,\nincluding PPIs, protein complexes and pathways. In analogy to graphlets, we\nintroduce hypergraphlets, small, connected, non-isomorphic, induced\nsub-hypergraphs of a hypergraph, to quantify the local wiring patterns of these\nmulti-scale molecular hypergraphs and to mine them for new biological\ninformation. We apply them to model the multi-scale protein networks of baker's\nyeast and human and show that the higher order molecular organisation captured\nby these hypergraphs is strongly related to the underlying biology.\nImportantly, we demonstrate that our new models and data mining tools reveal\ndifferent, but complementary biological information compared to classical PPI\nnetworks. We apply our hypergraphlets to successfully predict biological\nfunctions of uncharacterised proteins.\n",
"title": "Higher order molecular organisation as a source of biological function"
}
| null | null | null | null | true | null |
1558
| null |
Default
| null | null |
null |
{
"abstract": " If accreting white dwarfs (WD) in binary systems are to produce type Ia\nsupernovae (SNIa), they must grow to nearly the Chandrasekhar mass and ignite\ncarbon burning. Proving conclusively that a WD has grown substantially since\nits birth is a challenging task. Slow accretion of hydrogen inevitably leads to\nthe erosion, rather than the growth of WDs. Rapid hydrogen accretion does lead\nto growth of a helium layer, due to both decreased degeneracy and the\ninhibition of mixing of the accreted hydrogen with the underlying WD. However,\nuntil recently, simulations of helium-accreting WDs all claimed to show the\nexplosive ejection of a helium envelope once it exceeded $\\sim 10^{-1}\\, \\rm\nM_{\\odot}$. Because CO WDs cannot be born with masses in excess of $\\sim 1.1\\,\n\\rm M_{\\odot}$, any such object, in excess of $\\sim 1.2\\, \\rm M_{\\odot}$, must\nhave grown substantially. We demonstrate that the WD in the symbiotic nova RS\nOph is in the mass range 1.2-1.4\\,M$_{\\odot}$. We compare UV spectra of RS Oph\nwith those of novae with ONe WDs, and with novae erupting on CO WDs. The RS Oph\nWD is clearly made of CO, demonstrating that it has grown substantially since\nbirth. It is a prime candidate to eventually produce an SNIa.\n",
"title": "The Massive CO White Dwarf in the Symbiotic Recurrent Nova RS Ophiuchi"
}
| null | null | null | null | true | null |
1559
| null |
Default
| null | null |
null |
{
"abstract": " If the face-cycles at all the vertices in a map on a surface are of the same\ntype then the map is called semi-equivelar. There are eleven types of\nArchimedean tilings on the plane. All the Archimedean tilings are\nsemi-equivelar maps. If a map $X$ on the torus is a quotient of an Archimedean\ntiling on the plane then the map $X$ is semi-equivelar. We show that each\nsemi-equivelar map on the torus is a quotient of an Archimedean tiling on the\nplane.\nVertex-transitive maps are semi-equivelar maps. We know that four types of\nsemi-equivelar maps on the torus are always vertex-transitive and there are\nexamples of the other seven types of semi-equivelar maps which are not\nvertex-transitive. We show that the number of ${\\rm Aut}(Y)$-orbits of vertices\nfor any semi-equivelar map $Y$ on the torus is at most six. In fact, the number\nof orbits is at most three except for one type of semi-equivelar map. Our\nbounds on the number of orbits are sharp.\n",
"title": "Semi-equivelar maps on the torus are Archimedean"
}
| null | null | null | null | true | null |
1560
| null |
Default
| null | null |
null |
{
"abstract": " We consider the dynamics of porous icy dust aggregates in a turbulent gas\ndisk and investigate the stability of the disk. We evaluate the random velocity\nof porous dust aggregates by considering their self-gravity, collisions,\naerodynamic drag, turbulent stirring and scattering due to gas. We extend our\nprevious work by introducing the anisotropic velocity dispersion and the\nrelaxation time of the random velocity. We find the minimum mass solar nebular\nmodel to be gravitationally unstable if the turbulent viscosity parameter\n$\\alpha$ is less than about $4 \\times 10^{-3}$. The upper limit of $\\alpha$ for\nthe onset of gravitational instability is derived as a function of the disk\nparameters. We discuss the implications of the gravitational instability for\nplanetesimal formation.\n",
"title": "Dynamics of Porous Dust Aggregates and Gravitational Instability of Their Disk"
}
| null | null | null | null | true | null |
1561
| null |
Default
| null | null |
null |
{
"abstract": " Urbach tails in semiconductors are often associated to effects of\ncompositional disorder. The Urbach tail observed in InGaN alloy quantum wells\nof solar cells and LEDs by biased photocurrent spectroscopy is shown to be\ncharacteristic of the ternary alloy disorder. The broadening of the absorption\nedge observed for quantum wells emitting from violet to green (indium content\nranging from 0 to 28\\%) corresponds to a typical Urbach energy of 20~meV. A 3D\nabsorption model is developed based on a recent theory of disorder-induced\nlocalization which provides the effective potential seen by the localized\ncarriers without having to resort to the solution of the Schrödinger equation\nin a disordered potential. This model incorporating compositional disorder\naccounts well for the experimental broadening of the Urbach tail of the\nabsorption edge. For energies below the Urbach tail of the InGaN quantum wells,\ntype-II well-to-barrier transitions are observed and modeled. This contribution\nto the below bandgap absorption is particularly efficient in near-UV emitting\nquantum wells. When reverse biasing the device, the well-to-barrier below\nbandgap absorption exhibits a red shift, while the Urbach tail corresponding to\nthe absorption within the quantum wells is blue shifted, due to the partial\ncompensation of the internal piezoelectric fields by the external bias. The\ngood agreement between the measured Urbach tail and its modeling by the new\nlocalization theory demonstrates the applicability of the latter to\ncompositional disorder effects in nitride semiconductors.\n",
"title": "Localization landscape theory of disorder in semiconductors II: Urbach tails of disordered quantum well layers"
}
| null | null | null | null | true | null |
1562
| null |
Default
| null | null |
null |
{
"abstract": " Recent work has provided ample evidence that global climate dynamics at\ntime-scales between multiple weeks and several years can be severely affected\nby the episodic occurrence of both internal (climatic) and external\n(non-climatic) perturbations. Here, we aim to improve our understanding of how\nregional to local disruptions of the \"normal\" state of the global surface air\ntemperature field affect the corresponding global teleconnectivity structure.\nSpecifically, we present an approach to quantify teleconnectivity based on\ndifferent characteristics of functional climate network analysis. Subsequently,\nwe apply this framework to study the impacts of different phases of the El\nNiño-Southern Oscillation (ENSO) as well as the three largest volcanic\neruptions since the mid 20th century on the dominating spatiotemporal\nco-variability patterns of daily surface air temperatures. Our results confirm\nthe existence of global effects of ENSO which result in episodic breakdowns of\nthe hierarchical organization of the global temperature field. This is\nassociated with the emergence of strong teleconnections. At more regional\nscales, similar effects are found after major volcanic eruptions. Taken\ntogether, the resulting time-dependent patterns of network connectivity allow a\ntracing of the spatial extents of the dominating effects of both types of\nclimate disruptions. We discuss possible links between these observations and\ngeneral aspects of atmospheric circulation.\n",
"title": "Global teleconnectivity structures of the El Niño-Southern Oscillation and large volcanic eruptions -- An evolving network perspective"
}
| null | null | null | null | true | null |
1563
| null |
Default
| null | null |
null |
{
"abstract": " Shrinkage estimation usually reduces variance at the cost of bias. But when\nwe care only about some parameters of a model, I show that we can reduce\nvariance without incurring bias if we have additional information about the\ndistribution of covariates. In a linear regression model with homoscedastic\nNormal noise, I consider shrinkage estimation of the nuisance parameters\nassociated with control variables. For at least three control variables and\nexogenous treatment, I establish that the standard least-squares estimator is\ndominated with respect to squared-error loss in the treatment effect even among\nunbiased estimators and even when the target parameter is low-dimensional. I\nconstruct the dominating estimator by a variant of James-Stein shrinkage in a\nhigh-dimensional Normal-means problem. It can be interpreted as an invariant\ngeneralized Bayes estimator with an uninformative (improper) Jeffreys prior in\nthe target parameter.\n",
"title": "Unbiased Shrinkage Estimation"
}
| null | null | null | null | true | null |
1564
| null |
Default
| null | null |
null |
{
"abstract": " Continuous integration (CI) tools integrate code changes by automatically\ncompiling, building, and executing test cases upon submission of code changes.\nUse of CI tools is getting increasingly popular, yet how proprietary projects\nreap the benefits of CI remains unknown. To investigate the influence of CI on\nsoftware development, we analyze 150 open source software (OSS) projects, and\n123 proprietary projects. For OSS projects, we observe the expected benefits\nafter CI adoption, e.g., improvements in bug and issue resolution. However, for\nthe proprietary projects, we cannot make similar observations. Our findings\nindicate that adoption of CI alone might not be enough to improve the software\ndevelopment process. CI can be effective for software development if\npractitioners use CI's feedback mechanism efficiently, by applying the practice\nof making frequent commits. For our set of proprietary projects we observe that\npractitioners commit less frequently, and hence do not use CI effectively for\nobtaining feedback on the submitted code changes. Based on our findings we\nrecommend that industry practitioners adopt the best practices of CI to reap\nthe benefits of CI tools, for example, making frequent commits.\n",
"title": "Characterizing The Influence of Continuous Integration. Empirical Results from 250+ Open Source and Proprietary Projects"
}
| null | null | null | null | true | null |
1565
| null |
Default
| null | null |
null |
{
"abstract": " Amyloid precursor protein, with 770 amino acids, dimerizes and aggregates, as\ndo its C-terminal 99-amino-acid fragment and the amyloid 40- and 42-amino-acid\nfragments. The question in the title has been discussed extensively, and here\nit is addressed further using thermodynamic scaling theory to analyze\nmutational trends in structural factors and kinetics. Special attention is\ngiven to Familial Alzheimer's Disease mutations outside amyloid 42. The scaling\nanalysis is connected to extensive docking simulations which included\nmembranes, thereby confirming their results and extending them to the amyloid\nprecursor.\n",
"title": "Why Abeta42 Is Much More Toxic Than Abeta40"
}
| null | null | null | null | true | null |
1566
| null |
Default
| null | null |
null |
{
"abstract": " An ever-important issue is protecting infrastructure and other valuable\ntargets from a range of threats from vandalism to theft to piracy to terrorism.\nThe \"defender\" can rarely afford the needed resources for a 100% protection.\nThus, the key question is, how to provide the best protection using the limited\navailable resources. We study a practically important class of security games\nthat is played out in space and time, with targets and \"patrols\" moving on a\nreal line. A central open question here is whether the Nash equilibrium (i.e.,\nthe minimax strategy of the defender) can be computed in polynomial time. We\nresolve this question in the affirmative. Our algorithm runs in time polynomial\nin the input size, and only polylogarithmic in the number of possible patrol\nlocations (M). Further, we provide a continuous extension in which patrol\nlocations can take arbitrary real values. Prior work obtained polynomial-time\nalgorithms only under a substantial assumption, e.g., a constant number of\nrounds. Further, all these algorithms have running times polynomial in M, which\ncan be very large.\n",
"title": "A Polynomial Time Algorithm for Spatio-Temporal Security Games"
}
| null | null | null | null | true | null |
1567
| null |
Default
| null | null |
null |
{
"abstract": " In this paper, we introduce a method for adapting the step-sizes of temporal\ndifference (TD) learning. The performance of TD methods often depends on well\nchosen step-sizes, yet few algorithms have been developed for setting the\nstep-size automatically for TD learning. An important limitation of current\nmethods is that they adapt a single step-size shared by all the weights of the\nlearning system. A vector step-size enables greater optimization by specifying\nparameters on a per-feature basis. Furthermore, adapting parameters at\ndifferent rates has the added benefit of being a simple form of representation\nlearning. We generalize Incremental Delta Bar Delta (IDBD)---a vectorized\nadaptive step-size method for supervised learning---to TD learning, which we\nname TIDBD. We demonstrate that TIDBD is able to find appropriate step-sizes in\nboth stationary and non-stationary prediction tasks, outperforming ordinary TD\nmethods and TD methods with scalar step-size adaptation; we demonstrate that it\ncan differentiate between features which are relevant and irrelevant for a\ngiven task, performing representation learning; and we show on a real-world\nrobot prediction task that TIDBD is able to outperform ordinary TD methods and\nTD methods augmented with AlphaBound and RMSprop.\n",
"title": "TIDBD: Adapting Temporal-difference Step-sizes Through Stochastic Meta-descent"
}
| null | null | null | null | true | null |
1568
| null |
Default
| null | null |
null |
{
"abstract": " The over-threshold carbon loadings (~50 at.%) of the initial TiO2 hosts and\nposterior Cu-sensitization (~7 at.%) were performed using a pulsed\nion-implantation technique in sequential mode with a 1 hour vacuum-idle cycle\nbetween sequential stages of embedding. The final Cx-TiO2:Cu samples were\nqualified using XPS wide-scan elemental analysis and core-level and valence\nband mappings. The results obtained were discussed on a theoretical background\nemploying DFT calculations. The combined XPS and DFT analysis allows us to\nestablish and prove the final formula of the synthesized samples as\nCx-TiO2:[Cu+][Cu2+] for the bulk and Cx-TiO2:[Cu+][Cu0] for thin films. It was\ndemonstrated that in the mode of heavy carbon loading the majority of neutral\nC-C bonds (sp3-type) remains dominant and only a minor part of the embedded\ncarbon forms the O-C=O clusters. No valence base-band width alteration was\nestablished after the sequential carbon-copper modification of the atomic\nstructure of the initial TiO2 hosts, except for the dominating majority of Cu\n3s states after Cu-sensitization. The crucial role of neutral carbon\nlow-dimensional impurities as precursors for new phase growth was shown for the\nCu-sensitized Cx-TiO2 intermediate-state hosts.\n",
"title": "Enhanced clustering tendency of Cu-impurities with a number of oxygen vacancies in heavy carbon-loaded TiO2 - the bulk and surface morphologies"
}
| null | null | null | null | true | null |
1569
| null |
Default
| null | null |
null |
{
"abstract": " Several theorems on computing the volume of the polyhedron spanned by an\nn-dimensional vector set with finite-interval parameters are first presented\nand proved, and then used in the analysis of the controllable regions of linear\ndiscrete time-invariant systems with saturated inputs. A new concept and\ncontinuous measure of the control ability and control efficiency of the input\nvariables, and of the diversity of the control laws, named the controllable\nabundance, is proposed based on the volume computation of these regions and is\napplied to actuator placing and configuring problems, the optimization of the\ndynamics and kinematics of the controlled plants, etc. The numerical\nexperiments show the effectiveness of the new concept and methods for\ninvestigating and optimizing the control ability and efficiency.\n",
"title": "On Controllable Abundance Of Saturated-input Linear Discrete Systems"
}
| null | null | null | null | true | null |
1570
| null |
Default
| null | null |
null |
{
"abstract": " Organic material in anoxic sediment represents a globally significant carbon\nreservoir that acts to stabilize Earth's atmospheric composition. The dynamics\nby which microbes organize to consume this material remain poorly understood.\nHere we observe the collective dynamics of a microbial community, collected\nfrom a salt marsh, as it comes to steady state in a two-dimensional ecosystem,\ncovered by flowing water and under constant illumination. Microbes form a very\nthin front at the oxic-anoxic interface that moves towards the surface with\nconstant velocity and comes to rest at a fixed depth. Fronts are stable to all\nperturbations while in the sediment, but develop bioconvective plumes in water.\nWe observe the transient formation of parallel fronts. We model these dynamics\nto understand how they arise from the coupling between metabolism, aerotaxis,\nand diffusion. These results identify the typical timescale for the oxygen flux\nand penetration depth to reach steady state.\n",
"title": "Localization and dynamics of sulfur-oxidizing microbes in natural sediment"
}
| null | null | null | null | true | null |
1571
| null |
Default
| null | null |
null |
{
"abstract": " With the recent development of high-end LiDARs, more and more systems are\nable to continuously map the environment while moving and producing spatially\nredundant information. However, none of the previous approaches were able to\neffectively exploit this redundancy in a dense LiDAR mapping problem. In this\npaper, we present a new approach for dense LiDAR mapping using probabilistic\nsurfel fusion. The proposed system is capable of reconstructing a high-quality\ndense surface element (surfel) map from spatially redundant multiple views.\nThis is achieved by the proposed probabilistic surfel fusion along with a\ngeometry-aware data association. The proposed surfel data association method\nconsiders surface resolution as well as high measurement uncertainty along the\nbeam direction, which enables the mapping system to control surface resolution\nwithout introducing spatial digitization. The proposed fusion method\nsuccessfully suppresses the map noise level by considering measurement noise\ncaused by the laser beam incident angle and depth distance in a Bayesian\nfiltering framework. Experimental results with simulated and real data for\ndense surfel mapping prove the ability of the proposed method to accurately\nfind the canonical form of the environment without further post-processing.\n",
"title": "Probabilistic Surfel Fusion for Dense LiDAR Mapping"
}
| null | null |
[
"Computer Science"
] | null | true | null |
1572
| null |
Validated
| null | null |
null |
{
"abstract": " Motivated by the proposal of a topological quantum paramagnet in the diamond\nlattice antiferromagnet NiRh$_2$O$_4$, we propose a minimal model to describe\nthe magnetic interactions and properties of the diamond material with spin-one\nlocal moments. Our model includes the first and second neighbor Heisenberg\ninteractions as well as a local single-ion spin anisotropy that is allowed by\nthe spin-one nature of the local moment and the tetragonal symmetry of the\nsystem. We point out that there exists a quantum phase transition from a\ntrivial quantum paramagnet when the single-ion spin anisotropy is dominant to\nthe magnetic ordered states when the exchange is dominant. Due to the\nfrustrated spin interaction, the magnetic excitation in the quantum\nparamagnetic state supports extensively degenerate band minima in the spectra.\nAs the system approaches the transition, extensively degenerate bosonic modes\nbecome critical at the criticality, giving rise to unusual magnetic properties.\nOur phase diagram and experimental predictions for different phases provide a\nguideline for the identification of the ground state of NiRh$_2$O$_4$.\nAlthough our results are fundamentally different from the proposal of a\ntopological quantum paramagnet, they represent interesting possibilities for\nspin-one diamond lattice antiferromagnets.\n",
"title": "Quantum Paramagnet and Frustrated Quantum Criticality in a Spin-One Diamond Lattice Antiferromagnet"
}
| null | null | null | null | true | null |
1573
| null |
Default
| null | null |
null |
{
"abstract": " A graph is said to be well-dominated if all its minimal dominating sets are\nof the same size. The class of well-dominated graphs forms a subclass of the\nwell studied class of well-covered graphs. While the recognition problem for\nthe class of well-covered graphs is known to be co-NP-complete, the recognition\ncomplexity of well-dominated graphs is open.\nIn this paper we introduce the notion of an irreducible dominating set, a\nvariant of dominating set generalizing both minimal dominating sets and minimal\ntotal dominating sets. Based on this notion, we characterize the family of\nminimal dominating sets in a lexicographic product of two graphs and derive a\ncharacterization of the well-dominated lexicographic product graphs. As a side\nresult motivated by this study, we give a polynomially testable\ncharacterization of well-dominated graphs with domination number two, and show,\nmore generally, that well-dominated graphs can be recognized in polynomial time\nin any class of graphs with bounded domination number. Our results include a\ncharacterization of dominating sets in lexicographic product graphs, which\ngeneralizes the expression for the domination number of such graphs following\nfrom works of Zhang et al. (2011) and of Šumenjak et al. (2012).\n",
"title": "Characterizations of minimal dominating sets and the well-dominated property in lexicographic product graphs"
}
| null | null | null | null | true | null |
1574
| null |
Default
| null | null |
null |
{
"abstract": " We describe here the latest results of calculations with the FlexPDE code of\nwake-fields induced by the bunch in micro-structures. These structures,\nilluminated by a swept laser burst, serve for the acceleration of charged\nparticles. The basis of the scheme is a fast sweeping device for the laser\nbunch. After sweeping, the laser bunch has a slope of ~45° with respect to the\ndirection of propagation, so every cell of the microstructure becomes excited\nlocally only for the moment when the particles are there. Self-consistent\nparameters of a collider based on this idea allow considering this type of\ncollider as a candidate for the near-future accelerator era.\n",
"title": "To the Acceleration of Charged Particles with Travelling Laser Focus"
}
| null | null | null | null | true | null |
1575
| null |
Default
| null | null |
null |
{
"abstract": " Affiliation network is one kind of two-mode social network with two different\nsets of nodes (namely, a set of actors and a set of social events) and edges\nrepresenting the affiliation of the actors with the social events. Although a\nnumber of statistical models are proposed to analyze affiliation networks, the\nasymptotic behaviors of the estimator are still unknown or have not been\nproperly explored. In this paper, we study an affiliation model with the degree\nsequence as the exclusively natural sufficient statistic in the exponential\nfamily distributions. We establish the uniform consistency and asymptotic\nnormality of the maximum likelihood estimator when the numbers of actors and\nevents both go to infinity. Simulation studies and a real data example\ndemonstrate our theoretical results.\n",
"title": "Affiliation networks with an increasing degree sequence"
}
| null | null | null | null | true | null |
1576
| null |
Default
| null | null |
null |
{
"abstract": " We analyze the running time of the Saukas-Song algorithm for selection on a\ncoarse grained multicomputer without expressing the running time in terms of\ncommunication rounds. This shows that while in the best case the Saukas-Song\nalgorithm runs in asymptotically optimal time, in general it does not. We\npropose other algorithms for coarse grained selection that have optimal\nexpected running time.\n",
"title": "Coarse Grained Parallel Selection"
}
| null | null | null | null | true | null |
1577
| null |
Default
| null | null |
null |
{
"abstract": " Agent-based Internet of Things (IoT) applications have recently emerged as\napplications that can involve sensors, wireless devices, machines and software\nthat can exchange data and be accessed remotely. Such applications have been\nproposed in several domains including health care, smart cities and\nagriculture. However, despite their increased adoption, deploying these\napplications in specific settings has been very challenging because of the\ncomplex static and dynamic variability of the physical devices such as sensors\nand actuators, the software application behavior and the environment in which\nthe application is embedded. In this paper, we propose a modeling approach for\nIoT analytics based on learning embodied agents (i.e. situated agents). The\napproach involves: (i) a variability model of IoT embodied agents; (ii)\nfeedback evaluative machine learning; and (iii) reconfiguration of a group of\nagents in accordance with environmental context. The proposed approach advances\nthe state of the art in that it facilitates the development of agent-based IoT\napplications by explicitly capturing their complex and dynamic variabilities\nand supporting their self-configuration based on a context-aware and machine\nlearning-based approach.\n",
"title": "An IoT Analytics Embodied Agent Model based on Context-Aware Machine Learning"
}
| null | null | null | null | true | null |
1578
| null |
Default
| null | null |
null |
{
"abstract": " In this study, we introduce a new approach to combining multiple classifiers\nin an ensemble system. Instead of using the numeric membership values\nencountered in fixed combining rules, we construct interval membership values\nassociated with each class prediction at the level of the meta-data of an\nobservation by using concepts of information granules. In the proposed method,\nthe uncertainty (diversity) of findings produced by the base classifiers is\nquantified by interval-based information granules. The discriminative decision\nmodel is generated by considering both the bounds and the length of the\nobtained intervals. We selected ten and then fifteen learning algorithms to\nbuild a heterogeneous ensemble system and conducted experiments on a number of\nUCI datasets. The experimental results demonstrate that the proposed approach\nperforms better than the benchmark algorithms, including six fixed combining\nmethods, one trainable combining method, AdaBoost, Bagging, and Random\nSubspace.\n",
"title": "Aggregation of Classifiers: A Justifiable Information Granularity Approach"
}
| null | null |
[
"Computer Science",
"Statistics"
] | null | true | null |
1579
| null |
Validated
| null | null |
null |
{
"abstract": " The paper is concerned with an in-body system gathering data for medical\npurposes. It is focused on communication between the following two components\nof the system: liposomes gathering the data inside human veins and a detector\ncollecting the data from liposomes. Foerster Resonance Energy Transfer (FRET)\nis considered as a mechanism for communication between the system components.\nThe usage of bioluminescent molecules as an energy source for generating FRET\nsignals is suggested and the performance evaluation of this approach is given.\nFRET transmission may be initiated without an aid of an external laser, which\nis crucial in case of communication taking place inside of human body. It is\nalso shown how to solve the problem of FRET signals recording. The usage of\nchannelrhodopsin molecules, able to receive FRET signals and convert them into\nvoltage, is proposed. The communication system is modelled with molecular\nstructures and spectral characteristics of the proposed molecules and further\nvalidated by using Monte Carlo computer simulations, calculating the data\nthroughput and the bit error rate.\n",
"title": "FRET-based nanocommunication with luciferase and channelrhodopsin molecules for in-body medical systems"
}
| null | null |
[
"Quantitative Biology"
] | null | true | null |
1580
| null |
Validated
| null | null |
null |
{
"abstract": " We present FLASH (\\textbf{F}ast \\textbf{L}SH \\textbf{A}lgorithm for\n\\textbf{S}imilarity search accelerated with \\textbf{H}PC), a similarity search\nsystem for ultra-high dimensional datasets on a single machine, that does not\nrequire similarity computations and is tailored for high-performance computing\nplatforms. By leveraging a LSH style randomized indexing procedure and\ncombining it with several principled techniques, such as reservoir sampling,\nrecent advances in one-pass minwise hashing, and count based estimations, we\nreduce the computational and parallelization costs of similarity search, while\nretaining sound theoretical guarantees.\nWe evaluate FLASH on several real, high-dimensional datasets from different\ndomains, including text, malicious URL, click-through prediction, social\nnetworks, etc. Our experiments shed new light on the difficulties associated\nwith datasets having several million dimensions. Current state-of-the-art\nimplementations either fail on the presented scale or are orders of magnitude\nslower than FLASH. FLASH is capable of computing an approximate k-NN graph,\nfrom scratch, over the full webspam dataset (1.3 billion nonzeros) in less than\n10 seconds. Computing a full k-NN graph in less than 10 seconds on the webspam\ndataset, using brute-force ($n^2D$), will require at least 20 teraflops. We\nprovide CPU and GPU implementations of FLASH for replicability of our results.\n",
"title": "FLASH: Randomized Algorithms Accelerated over CPU-GPU for Ultra-High Dimensional Similarity Search"
}
| null | null | null | null | true | null |
1581
| null |
Default
| null | null |
null |
{
"abstract": " We propose a new neural sequence model training method in which the objective\nfunction is defined by $\\alpha$-divergence. We demonstrate that the objective\nfunction generalizes the maximum-likelihood (ML)-based and reinforcement\nlearning (RL)-based objective functions as special cases (i.e., ML corresponds\nto $\\alpha \\to 0$ and RL to $\\alpha \\to1$). We also show that the gradient of\nthe objective function can be considered a mixture of ML- and RL-based\nobjective gradients. The experimental results of a machine translation task\nshow that minimizing the objective function with $\\alpha > 0$ outperforms\n$\\alpha \\to 0$, which corresponds to ML-based methods.\n",
"title": "Neural Sequence Model Training via $α$-divergence Minimization"
}
| null | null | null | null | true | null |
1582
| null |
Default
| null | null |
null |
{
"abstract": " Deep neural networks (NN) are extensively used for machine learning tasks\nsuch as image classification, perception and control of autonomous systems.\nIncreasingly, these deep NNs are also been deployed in high-assurance\napplications. Thus, there is a pressing need for developing techniques to\nverify neural networks to check whether certain user-expected properties are\nsatisfied. In this paper, we study a specific verification problem of computing\na guaranteed range for the output of a deep neural network given a set of\ninputs represented as a convex polyhedron. Range estimation is a key primitive\nfor verifying deep NNs. We present an efficient range estimation algorithm that\nuses a combination of local search and linear programming problems to\nefficiently find the maximum and minimum values taken by the outputs of the NN\nover the given input set. In contrast to recently proposed \"monolithic\"\noptimization approaches, we use local gradient descent to repeatedly find and\neliminate local minima of the function. The final global optimum is certified\nusing a mixed integer programming instance. We implement our approach and\ncompare it with Reluplex, a recently proposed solver for deep neural networks.\nWe demonstrate the effectiveness of the proposed approach for verification of\nNNs used in automated control as well as those used in classification.\n",
"title": "Output Range Analysis for Deep Neural Networks"
}
| null | null | null | null | true | null |
1583
| null |
Default
| null | null |
null |
{
"abstract": " Projection theorems of divergences enable us to find reverse projection of a\ndivergence on a specific statistical model as a forward projection of the\ndivergence on a different but rather \"simpler\" statistical model, which, in\nturn, results in solving a system of linear equations. Reverse projection of\ndivergences are closely related to various estimation methods such as the\nmaximum likelihood estimation or its variants in robust statistics. We consider\nprojection theorems of three parametric families of divergences that are widely\nused in robust statistics, namely the Rényi divergences (or the Cressie-Reed\npower divergences), density power divergences, and the relative\n$\\alpha$-entropy (or the logarithmic density power divergences). We explore\nthese projection theorems from the usual likelihood maximization approach and\nfrom the principle of sufficiency. In particular, we show the equivalence of\nsolving the estimation problems by the projection theorems of the respective\ndivergences and by directly solving the corresponding estimating equations. We\nalso derive the projection theorem for the density power divergences.\n",
"title": "Projection Theorems of Divergences and Likelihood Maximization Methods"
}
| null | null | null | null | true | null |
1584
| null |
Default
| null | null |
null |
{
"abstract": " Advanced persistent threats (APTs) are stealthy attacks which make use of\nsocial engineering and deception to give adversaries insider access to\nnetworked systems. Against APTs, active defense technologies aim to create and\nexploit information asymmetry for defenders. In this paper, we study a scenario\nin which a powerful defender uses honeynets for active defense in order to\nobserve an attacker who has penetrated the network. Rather than immediately\neject the attacker, the defender may elect to gather information. We introduce\nan undiscounted, infinite-horizon Markov decision process on a continuous state\nspace in order to model the defender's problem. We find a threshold of\ninformation that the defender should gather about the attacker before ejecting\nhim. Then we study the robustness of this policy using a Stackelberg game.\nFinally, we simulate the policy for a conceptual network. Our results provide a\nquantitative foundation for studying optimal timing for attacker engagement in\nnetwork defense.\n",
"title": "Optimal Timing in Dynamic and Robust Attacker Engagement During Advanced Persistent Threats"
}
| null | null | null | null | true | null |
1585
| null |
Default
| null | null |
null |
{
"abstract": " In an influential recent paper, Harvey et al (2015) derive an upper limit to\nthe self-interaction cross section of dark matter ($\\sigma_{\\rm DM} < 0.47$\ncm$^2$/g at 95\\% confidence) by averaging the dark matter-galaxy offsets in a\nsample of merging galaxy clusters. Using much more comprehensive data on the\nsame clusters, we identify several substantial errors in their offset\nmeasurements. Correcting these errors relaxes the upper limit on $\\sigma_{\\rm\nDM}$ to $\\lesssim 2$ cm$^2$/g, following the Harvey et al prescription for\nrelating offsets to cross sections in a simple solid body scattering model.\nFurthermore, many clusters in the sample violate the assumptions behind this\nprescription, so even this revised upper limit should be used with caution.\nAlthough this particular sample does not tightly constrain self-interacting\ndark matter models when analyzed this way, we discuss how merger ensembles may\nbe used more effectively in the future. We conclude that errors inherent in\nusing single-band imaging to identify mass and light peaks do not necessarily\naverage out in a sample of this size, particularly when a handful of\nsubstructures constitute a majority of the weight in the ensemble.\n",
"title": "The Mismeasure of Mergers: Revised Limits on Self-interacting Dark Matter in Merging Galaxy Clusters"
}
| null | null | null | null | true | null |
1586
| null |
Default
| null | null |
null |
{
"abstract": " Analyzing available FAO data from 176 countries over 21 years, we observe an\nincrease of complexity in the international trade of maize, rice, soy, and\nwheat. A larger number of countries play a role as producers or intermediaries,\neither for trade or food processing. In consequence, we find that the trade\nnetworks become more prone to failure cascades caused by exogenous shocks. In\nour model, countries compensate for demand deficits by imposing export\nrestrictions. To capture these, we construct higher-order trade dependency\nnetworks for the different crops and years. These networks reveal hidden\ndependencies between countries and allow to discuss policy implications.\n",
"title": "International crop trade networks: The impact of shocks and cascades"
}
| null | null | null | null | true | null |
1587
| null |
Default
| null | null |
null |
{
"abstract": " Debate and deliberation play essential roles in politics and government, but\nmost models presume that debates are won mainly via superior style or agenda\ncontrol. Ideally, however, debates would be won on the merits, as a function of\nwhich side has the stronger arguments. We propose a predictive model of debate\nthat estimates the effects of linguistic features and the latent persuasive\nstrengths of different topics, as well as the interactions between the two.\nUsing a dataset of 118 Oxford-style debates, our model's combination of content\n(as latent topics) and style (as linguistic features) allows us to predict\naudience-adjudicated winners with 74% accuracy, significantly outperforming\nlinguistic features alone (66%). Our model finds that winning sides employ\nstronger arguments, and allows us to identify the linguistic features\nassociated with strong or weak arguments.\n",
"title": "Winning on the Merits: The Joint Effects of Content and Style on Debate Outcomes"
}
| null | null |
[
"Computer Science"
] | null | true | null |
1588
| null |
Validated
| null | null |
null |
{
"abstract": " Modelling gene regulatory networks not only requires a thorough understanding\nof the biological system depicted but also the ability to accurately represent\nthis system from a mathematical perspective. Throughout this chapter, we aim to\nfamiliarise the reader with the biological processes and molecular factors at\nplay in the process of gene expression regulation.We first describe the\ndifferent interactions controlling each step of the expression process, from\ntranscription to mRNA and protein decay. In the second section, we provide\nstatistical tools to accurately represent this biological complexity in the\nform of mathematical models. Amongst other considerations, we discuss the\ntopological properties of biological networks, the application of deterministic\nand stochastic frameworks and the quantitative modelling of regulation. We\nparticularly focus on the use of such models for the simulation of expression\ndata that can serve as a benchmark for the testing of network inference\nalgorithms.\n",
"title": "Gene regulatory networks: a primer in biological processes and statistical modelling"
}
| null | null | null | null | true | null |
1589
| null |
Default
| null | null |
null |
{
"abstract": " As David Berlinski writes (1997), the existence and nature of mathematics is\na more compelling and far deeper problem than any of the problems raised by\nmathematics itself. Here we analyze the essence of mathematics making the main\nemphasis on mathematics as an advanced system of knowledge. This knowledge\nconsists of structures and represents structures, existence of which depends on\nobservers in a nonstandard way. Structural nature of mathematics explains its\nreasonable effectiveness.\n",
"title": "Mathematical Knowledge and the Role of an Observer: Ontological and epistemological aspects"
}
| null | null | null | null | true | null |
1590
| null |
Default
| null | null |
null |
{
"abstract": " Technology is an extremely potent tool that can be leveraged for human\ndevelopment and social good. Owing to the great importance of environment and\nhuman psychology in driving human behavior, and the ubiquity of technology in\nmodern life, there is a need to leverage the insights and capabilities of both\nfields together for nudging people towards a behavior that is optimal in some\nsense (personal or social). In this regard, the field of persuasive technology,\nwhich proposes to infuse technology with appropriate design and incentives\nusing insights from psychology, behavioral economics, and human-computer\ninteraction holds a lot of promise. Whilst persuasive technology is already\nbeing developed and is at play in many commercial applications, it can have the\ngreat social impact in the field of Information and Communication Technology\nfor Development (ICTD) which uses Information and Communication Technology\n(ICT) for human developmental ends such as education and health. In this paper\nwe will explore what persuasive technology is and how it can be used for the\nends of human development. To develop the ideas in a concrete setting, we\npresent a case study outlining how persuasive technology can be used for human\ndevelopment in Pakistan, a developing South Asian country, that suffers from\nmany of the problems that plague typical developing country.\n",
"title": "Persuasive Technology For Human Development: Review and Case Study"
}
| null | null | null | null | true | null |
1591
| null |
Default
| null | null |
null |
{
"abstract": " The central aim in this paper is to address variable selection questions in\nnonlinear and nonparametric regression. Motivated by statistical genetics,\nwhere nonlinear interactions are of particular interest, we introduce a novel\nand interpretable way to summarize the relative importance of predictor\nvariables. Methodologically, we develop the \"RelATive cEntrality\" (RATE)\nmeasure to prioritize candidate genetic variants that are not just marginally\nimportant, but whose associations also stem from significant covarying\nrelationships with other variants in the data. We illustrate RATE through\nBayesian Gaussian process regression, but the methodological innovations apply\nto other \"black box\" methods. It is known that nonlinear models often exhibit\ngreater predictive accuracy than linear models, particularly for phenotypes\ngenerated by complex genetic architectures. With detailed simulations and two\nreal data association mapping studies, we show that applying RATE enables an\nexplanation for this improved performance.\n",
"title": "Variable Prioritization in Nonlinear Black Box Methods: A Genetic Association Case Study"
}
| null | null | null | null | true | null |
1592
| null |
Default
| null | null |
null |
{
"abstract": " Assessment of the motor activity of group-housed sows in commercial farms.\nThe objective of this study was to specify the level of motor activity of\npregnant sows housed in groups in different housing systems. Eleven commercial\nfarms were selected for this study. Four housing systems were represented:\nsmall groups of five to seven sows (SG), free access stalls (FS) with exercise\narea, electronic sow feeder with a stable group (ESFsta) or a dynamic group\n(ESFdyn). Ten sows in mid-gestation were observed in each farm. The\nobservations of motor activity were made for 6 hours at the first meal or at\nthe start of the feeding sequence, two consecutive days and at regular\nintervals of 4 minutes. The results show that the motor activity of\ngroup-housed sows depends on the housing system. The activity is higher with\nthe ESFdyn system (standing: 55.7%), sows are less active in the SG system\n(standing: 26.5%), and FS system is intermediate. The distance traveled by sows\nin ESF system is linked to a larger area available. Thus, sows travel an\naverage of 362 m $\\pm$ 167 m in the ESFdyn system with an average available\nsurface of 446 m${}^2$ whereas sows in small groups travel 50 m $\\pm$ 15 m for\n15 m${}^2$ available.\n",
"title": "Activit{é} motrice des truies en groupes dans les diff{é}rents syst{è}mes de logement"
}
| null | null | null | null | true | null |
1593
| null |
Default
| null | null |
null |
{
"abstract": " The origin of ultrahigh-energy cosmic rays (UHECRs) is a half-century old\nenigma (Linsley 1963). The mystery has been deepened by an intriguing\ncoincidence: over ten orders of magnitude in energy, the energy generation\nrates of UHECRs, PeV neutrinos, and isotropic sub-TeV gamma rays are\ncomparable, which hints at a grand-unified picture (Murase and Waxman 2016).\nHere we report that powerful black hole jets in aggregates of galaxies can\nsupply the common origin of all of these phenomena. Once accelerated by a jet,\nlow-energy cosmic rays confined in the radio lobe are adiabatically cooled;\nhigher-energy cosmic rays leaving the source interact with the magnetized\ncluster environment and produce neutrinos and gamma rays; the highest-energy\nparticles escape from the host cluster and contribute to the observed cosmic\nrays above 100 PeV. The model is consistent with the spectrum, composition, and\nisotropy of the observed UHECRs, and also explains the IceCube neutrinos and\nthe non-blazar component of the Fermi gamma-ray background, assuming a\nreasonable energy output from black hole jets in clusters.\n",
"title": "Linking High-Energy Cosmic Particles by Black-Hole Jets Embedded in Large-Scale Structures"
}
| null | null | null | null | true | null |
1594
| null |
Default
| null | null |
null |
{
"abstract": " It is widely recognized that citation counts for papers from different fields\ncannot be directly compared because different scientific fields adopt different\ncitation practices. Citation counts are also strongly biased by paper age since\nolder papers had more time to attract citations. Various procedures aim at\nsuppressing these biases and give rise to new normalized indicators, such as\nthe relative citation count. We use a large citation dataset from Microsoft\nAcademic Graph and a new statistical framework based on the Mahalanobis\ndistance to show that the rankings by well known indicators, including the\nrelative citation count and Google's PageRank score, are significantly biased\nby paper field and age. We propose a general normalization procedure motivated\nby the $z$-score which produces much less biased rankings when applied to\ncitation count and PageRank score.\n",
"title": "Quantifying and suppressing ranking bias in a large citation network"
}
| null | null |
[
"Computer Science",
"Physics",
"Statistics"
] | null | true | null |
1595
| null |
Validated
| null | null |
null |
{
"abstract": " Since their inception in the 1980's, regression trees have been one of the\nmore widely used non-parametric prediction methods. Tree-structured methods\nyield a histogram reconstruction of the regression surface, where the bins\ncorrespond to terminal nodes of recursive partitioning. Trees are powerful, yet\nsusceptible to over-fitting. Strategies against overfitting have traditionally\nrelied on pruning greedily grown trees. The Bayesian framework offers an\nalternative remedy against overfitting through priors. Roughly speaking, a good\nprior charges smaller trees where overfitting does not occur. While the\nconsistency of random histograms, trees and their ensembles has been studied\nquite extensively, the theoretical understanding of the Bayesian counterparts\nhas been missing. In this paper, we take a step towards understanding why/when\ndo Bayesian trees and their ensembles not overfit. To address this question, we\nstudy the speed at which the posterior concentrates around the true smooth\nregression function. We propose a spike-and-tree variant of the popular\nBayesian CART prior and establish new theoretical results showing that\nregression trees (and their ensembles) (a) are capable of recovering smooth\nregression surfaces, achieving optimal rates up to a log factor, (b) can adapt\nto the unknown level of smoothness and (c) can perform effective dimension\nreduction when p>n. These results provide a piece of missing theoretical\nevidence explaining why Bayesian trees (and additive variants thereof) have\nworked so well in practice.\n",
"title": "Posterior Concentration for Bayesian Regression Trees and Forests"
}
| null | null | null | null | true | null |
1596
| null |
Default
| null | null |
null |
{
"abstract": " Oral Disintegrating Tablets (ODTs) is a novel dosage form that can be\ndissolved on the tongue within 3min or less especially for geriatric and\npediatric patients. Current ODT formulation studies usually rely on the\npersonal experience of pharmaceutical experts and trial-and-error in the\nlaboratory, which is inefficient and time-consuming. The aim of current\nresearch was to establish the prediction model of ODT formulations with direct\ncompression process by Artificial Neural Network (ANN) and Deep Neural Network\n(DNN) techniques. 145 formulation data were extracted from Web of Science. All\ndata sets were divided into three parts: training set (105 data), validation\nset (20) and testing set (20). ANN and DNN were compared for the prediction of\nthe disintegrating time. The accuracy of the ANN model has reached 85.60%,\n80.00% and 75.00% on the training set, validation set and testing set\nrespectively, whereas that of the DNN model was 85.60%, 85.00% and 80.00%,\nrespectively. Compared with the ANN, DNN showed the better prediction for ODT\nformulations. It is the first time that deep neural network with the improved\ndataset selection algorithm is applied to formulation prediction on small data.\nThe proposed predictive approach could evaluate the critical parameters about\nquality control of formulation, and guide research and process development. The\nimplementation of this prediction model could effectively reduce drug product\ndevelopment timeline and material usage, and proactively facilitate the\ndevelopment of a robust drug product.\n",
"title": "Predicting Oral Disintegrating Tablet Formulations by Neural Network Techniques"
}
| null | null | null | null | true | null |
1597
| null |
Default
| null | null |
null |
{
"abstract": " Calcium imaging has emerged as a workhorse method in neuroscience to\ninvestigate patterns of neuronal activity. Instrumentation to acquire calcium\nimaging movies has rapidly progressed and has become standard across labs.\nStill, algorithms to automatically detect and extract activity signals from\ncalcium imaging movies are highly variable from~lab~to~lab and more advanced\nalgorithms are continuously being developed. Here we present HNCcorr, a novel\nalgorithm for cell identification in calcium imaging movies based on\ncombinatorial optimization. The algorithm identifies cells by finding distinct\ngroups of highly similar pixels in correlation space, where a pixel is\nrepresented by the vector of correlations to a set of other pixels. The HNCcorr\nalgorithm achieves the best known results for the cell identification benchmark\nof Neurofinder, and guarantees an optimal solution to the underlying\ndeterministic optimization model resulting in a transparent mapping from input\ndata to outcome.\n",
"title": "HNCcorr: A Novel Combinatorial Approach for Cell Identification in Calcium-Imaging Movies"
}
| null | null | null | null | true | null |
1598
| null |
Default
| null | null |
null |
{
"abstract": " Field-aligned currents in the Earth's magnetotail are traditionally\nassociated with transient plasma flows and strong plasma pressure gradients in\nthe near-Earth side. In this paper we demonstrate a new field-aligned current\nsystem present at the lunar orbit tail. Using magnetotail current sheet\nobservations by two ARTEMIS probes at $\\sim60 R_E$, we analyze statistically\nthe current sheet structure and current density distribution closest to the\nneutral sheet. For about half of our 130 current sheet crossings, the\nequatorial magnetic field component across-the tail (along the main, cross-tail\ncurrent) contributes significantly to the vertical pressure balance. This\nmagnetic field component peaks at the equator, near the cross-tail current\nmaximum. For those cases, a significant part of the tail current, having an\nintensity in the range 1-10nA/m$^2$, flows along the magnetic field lines (it\nis both field-aligned and cross-tail). We suggest that this current system\ndevelops in order to compensate the thermal pressure by particles that on its\nown is insufficient to fend off the lobe magnetic pressure.\n",
"title": "Intense cross-tail field-aligned currents in the plasma sheet at lunar distances"
}
| null | null |
[
"Physics"
] | null | true | null |
1599
| null |
Validated
| null | null |
null |
{
"abstract": " Theoretical predictions of pressure-induced phase transformations often\nbecome long-standing enigmas because of limitations of contemporary available\nexperimental possibilities. Hitherto the existence of a non-icosahedral boron\nallotrope has been one of them. Here we report on the first non-icosahedral\nboron allotrope, which we denoted as {\\zeta}-B, with the orthorhombic\n{\\alpha}-Ga-type structure (space group Cmce) synthesized in a diamond anvil\ncell at extreme high-pressure high-temperature conditions (115 GPa and 2100 K).\nThe structure of {\\zeta}-B was solved using single-crystal synchrotron X-ray\ndiffraction and its compressional behavior was studied in the range of very\nhigh pressures (115 GPa to 135 GPa). Experimental validation of theoretical\npredictions reveals the degree of our up-to-date comprehension of condensed\nmatter and promotes further development of the solid state physics and\nchemistry.\n",
"title": "First non-icosahedral boron allotrope synthesized at high pressure and high temperature"
}
| null | null | null | null | true | null |
1600
| null |
Default
| null | null |