text | inputs | prediction | prediction_agent | annotation | annotation_agent | multi_label | explanation | id | metadata | status | event_timestamp | metrics
---|---|---|---|---|---|---|---|---|---|---|---|---|
null | {
"abstract": " One of the popular approaches for low-rank tensor completion is to use the\nlatent trace norm regularization. However, most existing works in this\ndirection learn a sparse combination of tensors. In this work, we fill this gap\nby proposing a variant of the latent trace norm that helps in learning a\nnon-sparse combination of tensors. We develop a dual framework for solving the\nlow-rank tensor completion problem. We first show a novel characterization of\nthe dual solution space with an interesting factorization of the optimal\nsolution. Overall, the optimal solution is shown to lie on a Cartesian product\nof Riemannian manifolds. Furthermore, we exploit the versatile Riemannian\noptimization framework for proposing computationally efficient trust region\nalgorithm. The experiments illustrate the efficacy of the proposed algorithm on\nseveral real-world datasets across applications.\n",
"title": "A dual framework for low-rank tensor completion"
} | null | null | null | null | true | null | 301 | null | Default | null | null |
null | {
"abstract": " Nous tentons dans cet article de proposer une thèse cohérente concernant\nla formation de la notion d'involution dans le Brouillon Project de Desargues.\nPour cela, nous donnons une analyse détaillée des dix premières pages\ndudit Brouillon, comprenant les développements de cas particuliers qui aident\nà comprendre l'intention de Desargues. Nous mettons cette analyse en regard\nde la lecture qu'en fait Jean de Beaugrand et que l'on trouve dans les Advis\nCharitables.\nThe purpose of this article is to propose a coherent thesis on how Girard\nDesargues arrived at the notion of involution in his Brouillon Project of 1639.\nTo this purpose we give a detailed analysis of the ten first pages of the\nBrouillon, including developments of particular cases which help to understand\nthe goal of Desargues, as well as to clarify the links between the notion of\ninvolution and that of harmonic division. We compare the conclusions of this\nanalysis with the very critical reading Jean de Beaugrand made of the Brouillon\nProject in the Advis Charitables of 1640.\n",
"title": "La notion d'involution dans le Brouillon Project de Girard Desargues"
} | null | null | null | null | true | null | 302 | null | Default | null | null |
null | {
"abstract": " X-ray computed tomography (CT) using sparse projection views is a recent\napproach to reduce the radiation dose. However, due to the insufficient\nprojection views, an analytic reconstruction approach using the filtered back\nprojection (FBP) produces severe streaking artifacts. Recently, deep learning\napproaches using large receptive field neural networks such as U-Net have\ndemonstrated impressive performance for sparse- view CT reconstruction.\nHowever, theoretical justification is still lacking. Inspired by the recent\ntheory of deep convolutional framelets, the main goal of this paper is,\ntherefore, to reveal the limitation of U-Net and propose new multi-resolution\ndeep learning schemes. In particular, we show that the alternative U- Net\nvariants such as dual frame and the tight frame U-Nets satisfy the so-called\nframe condition which make them better for effective recovery of high frequency\nedges in sparse view- CT. Using extensive experiments with real patient data\nset, we demonstrate that the new network architectures provide better\nreconstruction performance.\n",
"title": "Framing U-Net via Deep Convolutional Framelets: Application to Sparse-view CT"
} | null | null | null | null | true | null | 303 | null | Default | null | null |
null | {
"abstract": " A singular (or Hermann) foliation on a smooth manifold $M$ can be seen as a\nsubsheaf of the sheaf $\\mathfrak{X}$ of vector fields on $M$. We show that if\nthis singular foliation admits a resolution (in the sense of sheaves)\nconsisting of sections of a graded vector bundle of finite type, then one can\nlift the Lie bracket of vector fields to a Lie $\\infty$-algebroid structure on\nthis resolution, that we call a universal Lie $\\infty$-algebroid associated to\nthe foliation. The name is justified because it is isomorphic (up to homotopy)\nto any other Lie $\\infty$-algebroid structure built on any other resolution of\nthe given singular foliation.\n",
"title": "Lie $\\infty$-algebroids and singular foliations"
} | null | null | null | null | true | null | 304 | null | Default | null | null |
null | {
"abstract": " The Weyl semimetal phase is a recently discovered topological quantum state\nof matter characterized by the presence of topologically protected degeneracies\nnear the Fermi level. These degeneracies are the source of exotic phenomena,\nincluding the realization of chiral Weyl fermions as quasiparticles in the bulk\nand the formation of Fermi arc states on the surfaces. Here, we demonstrate\nthat these two key signatures show distinct evolutions with the bulk band\ntopology by performing angle-resolved photoemission spectroscopy, supported by\nfirst-principle calculations, on transition-metal monophosphides. While Weyl\nfermion quasiparticles exist only when the chemical potential is located\nbetween two saddle points of the Weyl cone features, the Fermi arc states\nextend in a larger energy scale and are robust across the bulk Lifshitz\ntransitions associated with the recombination of two non-trivial Fermi surfaces\nenclosing one Weyl point into a single trivial Fermi surface enclosing two Weyl\npoints of opposite chirality. Therefore, in some systems (e.g. NbP),\ntopological Fermi arc states are preserved even if Weyl fermion quasiparticles\nare absent in the bulk. Our findings not only provide insight into the\nrelationship between the exotic physical phenomena and the intrinsic bulk band\ntopology in Weyl semimetals, but also resolve the apparent puzzle of the\ndifferent magneto-transport properties observed in TaAs, TaP and NbP, where the\nFermi arc states are similar.\n",
"title": "Distinct evolutions of Weyl fermion quasiparticles and Fermi arcs with bulk band topology in Weyl semimetals"
} | null | null | null | null | true | null | 305 | null | Default | null | null |
null | {
"abstract": " A sequence of pathological changes takes place in Alzheimer's disease, which\ncan be assessed in vivo using various brain imaging methods. Currently, there\nis no appropriate statistical model available that can easily integrate\nmultiple imaging modalities, being able to utilize the additional information\nprovided from the combined data. We applied Gaussian graphical models (GGMs)\nfor analyzing the conditional dependency networks of multimodal neuroimaging\ndata and assessed alterations of the network structure in mild cognitive\nimpairment (MCI) and Alzheimer's dementia (AD) compared to cognitively healthy\ncontrols.\nData from N=667 subjects were obtained from the Alzheimer's Disease\nNeuroimaging Initiative. Mean amyloid load (AV45-PET), glucose metabolism\n(FDG-PET), and gray matter volume (MRI) was calculated for each brain region.\nSeparate GGMs were estimated using a Bayesian framework for the combined\nmultimodal data for each diagnostic category. Graph-theoretical statistics were\ncalculated to determine network alterations associated with disease severity.\nNetwork measures clustering coefficient, path length and small-world\ncoefficient were significantly altered across diagnostic groups, with a\nbiphasic u-shape trajectory, i.e. increased small-world coefficient in early\nMCI, intermediate values in late MCI, and decreased values in AD patients\ncompared to controls. In contrast, no group differences were found for\nclustering coefficient and small-world coefficient when estimating conditional\ndependency networks on single imaging modalities.\nGGMs provide a useful methodology to analyze the conditional dependency\nnetworks of multimodal neuroimaging data.\n",
"title": "Assessing inter-modal and inter-regional dependencies in prodromal Alzheimer's disease using multimodal MRI/PET and Gaussian graphical models"
} | null | null | null | null | true | null | 306 | null | Default | null | null |
null | {
"abstract": " This work bridges the technical concepts underlying distributed computing and\nblockchain technologies with their profound socioeconomic and sociopolitical\nimplications, particularly on academic research and the healthcare industry.\nSeveral examples from academia, industry, and healthcare are explored\nthroughout this paper. The limiting factor in contemporary life sciences\nresearch is often funding: for example, to purchase expensive laboratory\nequipment and materials, to hire skilled researchers and technicians, and to\nacquire and disseminate data through established academic channels. In the case\nof the U.S. healthcare system, hospitals generate massive amounts of data, only\na small minority of which is utilized to inform current and future medical\npractice. Similarly, corporations too expend large amounts of money to collect,\nsecure and transmit data from one centralized source to another. In all three\nscenarios, data moves under the traditional paradigm of centralization, in\nwhich data is hosted and curated by individuals and organizations and of\nbenefit to only a small subset of people.\n",
"title": "On the economics of knowledge creation and sharing"
} | null | null | null | null | true | null | 307 | null | Default | null | null |
null | {
"abstract": " Fragility curves are commonly used in civil engineering to assess the\nvulnerability of structures to earthquakes. The probability of failure\nassociated with a prescribed criterion (e.g. the maximal inter-storey drift of\na building exceeding a certain threshold) is represented as a function of the\nintensity of the earthquake ground motion (e.g. peak ground acceleration or\nspectral acceleration). The classical approach relies on assuming a lognormal\nshape of the fragility curves; it is thus parametric. In this paper, we\nintroduce two non-parametric approaches to establish the fragility curves\nwithout employing the above assumption, namely binned Monte Carlo simulation\nand kernel density estimation. As an illustration, we compute the fragility\ncurves for a three-storey steel frame using a large number of synthetic ground\nmotions. The curves obtained with the non-parametric approaches are compared\nwith respective curves based on the lognormal assumption. A similar comparison\nis presented for a case when a limited number of recorded ground motions is\navailable. It is found that the accuracy of the lognormal curves depends on the\nground motion intensity measure, the failure criterion and most importantly, on\nthe employed method for estimating the parameters of the lognormal shape.\n",
"title": "Seismic fragility curves for structures using non-parametric representations"
} | null | null | null | null | true | null | 308 | null | Default | null | null |
null | {
"abstract": " We consider continuous-time Markov chains which display a family of wells at\nthe same depth. We provide sufficient conditions which entail the convergence\nof the finite-dimensional distributions of the order parameter to the ones of a\nfinite state Markov chain. We also show that the state of the process can be\nrepresented as a time-dependent convex combination of metastable states, each\nof which is supported on one well.\n",
"title": "Metastable Markov chains: from the convergence of the trace to the convergence of the finite-dimensional distributions"
} | null | null | null | null | true | null | 309 | null | Default | null | null |
null | {
"abstract": " We construct embedded minimal surfaces which are $n$-periodic in\n$\\mathbb{R}^n$. They are new for codimension $n-2\\ge 2$. We start with a Jordan\ncurve of edges of the $n$-dimensional cube. It bounds a Plateau minimal disk\nwhich Schwarz reflection extends to a complete minimal surface. Studying the\ngroup of Schwarz reflections, we can characterize those Jordan curves for which\nthe complete surface is embedded. For example, for $n=4$ exactly five such\nJordan curves generate embedded surfaces. Our results apply to surface classes\nother than minimal as well, for instance polygonal surfaces.\n",
"title": "Construction of embedded periodic surfaces in $\\mathbb{R}^n$"
} | null | null | null | null | true | null | 310 | null | Default | null | null |
null | {
"abstract": " We report the discovery of three small transiting planets orbiting GJ 9827, a\nbright (K = 7.2) nearby late K-type dwarf star. GJ 9827 hosts a $1.62\\pm0.11$\n$R_{\\rm \\oplus}$ super Earth on a 1.2 day period, a $1.269^{+0.087}_{-0.089}$\n$R_{\\rm \\oplus}$ super Earth on a 3.6 day period, and a $2.07\\pm0.14$ $R_{\\rm\n\\oplus}$ super Earth on a 6.2 day period. The radii of the planets transiting\nGJ 9827 span the transition between predominantly rocky and gaseous planets,\nand GJ 9827 b and c fall in or close to the known gap in the radius\ndistribution of small planets between these populations. At a distance of 30\nparsecs, GJ 9827 is the closest exoplanet host discovered by K2 to date, making\nthese planets well-suited for atmospheric studies with the upcoming James Webb\nSpace Telescope. The GJ 9827 system provides a valuable opportunity to\ncharacterize interior structure and atmospheric properties of coeval planets\nspanning the rocky to gaseous transition.\n",
"title": "A System of Three Super Earths Transiting the Late K-Dwarf GJ 9827 at Thirty Parsecs"
} | null | null | null | null | true | null | 311 | null | Default | null | null |
null | {
"abstract": " We define a family of quantum invariants of closed oriented $3$-manifolds\nusing spherical multi-fusion categories. The state sum nature of this invariant\nleads directly to $(2+1)$-dimensional topological quantum field theories\n($\\text{TQFT}$s), which generalize the Turaev-Viro-Barrett-Westbury\n($\\text{TVBW}$) $\\text{TQFT}$s from spherical fusion categories. The invariant\nis given as a state sum over labeled triangulations, which is mostly parallel\nto, but richer than the $\\text{TVBW}$ approach in that here the labels live not\nonly on $1$-simplices but also on $0$-simplices. It is shown that a\nmulti-fusion category in general cannot be a spherical fusion category in the\nusual sense. Thus we introduce the concept of a spherical multi-fusion category\nby imposing a weakened version of sphericity. Besides containing the\n$\\text{TVBW}$ theory, our construction also includes the recent higher gauge\ntheory $(2+1)$-$\\text{TQFT}$s given by Kapustin and Thorngren, which was not\nknown to have a categorical origin before.\n",
"title": "State Sum Invariants of Three Manifolds from Spherical Multi-fusion Categories"
} | null | null | null | null | true | null | 312 | null | Default | null | null |
null | {
"abstract": " We propose Sparse Neural Network architectures that are based on random or\nstructured bipartite graph topologies. Sparse architectures provide compression\nof the models learned and speed-ups of computations, they can also surpass\ntheir unstructured or fully connected counterparts. As we show, even more\ncompact topologies of the so-called SNN (Sparse Neural Network) can be achieved\nwith the use of structured graphs of connections between consecutive layers of\nneurons. In this paper, we investigate how the accuracy and training speed of\nthe models depend on the topology and sparsity of the neural network. Previous\napproaches using sparcity are all based on fully connected neural network\nmodels and create sparcity during training phase, instead we explicitly define\na sparse architectures of connections before the training. Building compact\nneural network models is coherent with empirical observations showing that\nthere is much redundancy in learned neural network models. We show\nexperimentally that the accuracy of the models learned with neural networks\ndepends on expander-like properties of the underlying topologies such as the\nspectral gap and algebraic connectivity rather than the density of the graphs\nof connections.\n",
"title": "Sparse Neural Networks Topologies"
} | null | null | [
"Computer Science",
"Statistics"
]
| null | true | null | 313 | null | Validated | null | null |
null | {
"abstract": " Computer vision has made remarkable progress in recent years. Deep neural\nnetwork (DNN) models optimized to identify objects in images exhibit\nunprecedented task-trained accuracy and, remarkably, some generalization\nability: new visual problems can now be solved more easily based on previous\nlearning. Biological vision (learned in life and through evolution) is also\naccurate and general-purpose. Is it possible that these different learning\nregimes converge to similar problem-dependent optimal computations? We\ntherefore asked whether the human system-level computation of visual perception\nhas DNN correlates and considered several anecdotal test cases. We found that\nperceptual sensitivity to image changes has DNN mid-computation correlates,\nwhile sensitivity to segmentation, crowding and shape has DNN end-computation\ncorrelates. Our results quantify the applicability of using DNN computation to\nestimate perceptual loss, and are consistent with the fascinating theoretical\nview that properties of human perception are a consequence of\narchitecture-independent visual learning.\n",
"title": "Human perception in computer vision"
} | null | null | null | null | true | null | 314 | null | Default | null | null |
null | {
"abstract": " A numerical analysis of heat conduction through the cover plate of a heat\npipe is carried out to determine the temperature of the working substance,\naverage temperature of heating and cooling surfaces, heat spread in the\ntransmitter, and the heat bypass through the cover plate. Analysis has been\nextended for the estimation of heat transfer requirements at the outer surface\nof the con- denser under different heat load conditions using Genetic\nAlgorithm. This paper also presents the estimation of an average heat transfer\ncoefficient for the boiling and condensation of the working substance inside\nthe microgrooves corresponding to a known temperature of the heat source. The\nequation of motion of the working fluid in the meniscus of an equilateral\ntriangular groove has been presented from which a new term called the minimum\nsurface tension required for avoiding the dry out condition is defined.\nQuantitative results showing the effect of thickness of cover plate, heat load,\nangle of inclination and viscosity of the working fluid on the different\naspects of the heat transfer, minimum surface tension required to avoid dry\nout, velocity distribution of the liquid, and radius of liquid meniscus inside\nthe micro-grooves have been presented and discussed.\n",
"title": "Analyses and estimation of certain design parameters of micro-grooved heat pipes"
} | null | null | [
"Physics"
]
| null | true | null | 315 | null | Validated | null | null |
null | {
"abstract": " This paper provides short proofs of two fundamental theorems of finite\nsemigroup theory whose previous proofs were significantly longer, namely the\ntwo-sided Krohn-Rhodes decomposition theorem and Henckell's aperiodic pointlike\ntheorem, using a new algebraic technique that we call the merge decomposition.\nA prototypical application of this technique decomposes a semigroup $T$ into a\ntwo-sided semidirect product whose components are built from two subsemigroups\n$T_1,T_2$, which together generate $T$, and the subsemigroup generated by their\nsetwise product $T_1T_2$. In this sense we decompose $T$ by merging the\nsubsemigroups $T_1$ and $T_2$. More generally, our technique merges semigroup\nhomomorphisms from free semigroups.\n",
"title": "Merge decompositions, two-sided Krohn-Rhodes, and aperiodic pointlikes"
} | null | null | null | null | true | null | 316 | null | Default | null | null |
null | {
"abstract": " Nefarious actors on social media and other platforms often spread rumors and\nfalsehoods through images whose metadata (e.g., captions) have been modified to\nprovide visual substantiation of the rumor/falsehood. This type of modification\nis referred to as image repurposing, in which often an unmanipulated image is\npublished along with incorrect or manipulated metadata to serve the actor's\nulterior motives. We present the Multimodal Entity Image Repurposing (MEIR)\ndataset, a substantially challenging dataset over that which has been\npreviously available to support research into image repurposing detection. The\nnew dataset includes location, person, and organization manipulations on\nreal-world data sourced from Flickr. We also present a novel, end-to-end, deep\nmultimodal learning model for assessing the integrity of an image by combining\ninformation extracted from the image with related information from a knowledge\nbase. The proposed method is compared against state-of-the-art techniques on\nexisting datasets as well as MEIR, where it outperforms existing methods across\nthe board, with AUC improvement up to 0.23.\n",
"title": "Deep Multimodal Image-Repurposing Detection"
} | null | null | null | null | true | null | 317 | null | Default | null | null |
null | {
"abstract": " Recent advances in adversarial Deep Learning (DL) have opened up a largely\nunexplored surface for malicious attacks jeopardizing the integrity of\nautonomous DL systems. With the wide-spread usage of DL in critical and\ntime-sensitive applications, including unmanned vehicles, drones, and video\nsurveillance systems, online detection of malicious inputs is of utmost\nimportance. We propose DeepFense, the first end-to-end automated framework that\nsimultaneously enables efficient and safe execution of DL models. DeepFense\nformalizes the goal of thwarting adversarial attacks as an optimization problem\nthat minimizes the rarely observed regions in the latent feature space spanned\nby a DL network. To solve the aforementioned minimization problem, a set of\ncomplementary but disjoint modular redundancies are trained to validate the\nlegitimacy of the input samples in parallel with the victim DL model. DeepFense\nleverages hardware/software/algorithm co-design and customized acceleration to\nachieve just-in-time performance in resource-constrained settings. The proposed\ncountermeasure is unsupervised, meaning that no adversarial sample is leveraged\nto train modular redundancies. We further provide an accompanying API to reduce\nthe non-recurring engineering cost and ensure automated adaptation to various\nplatforms. Extensive evaluations on FPGAs and GPUs demonstrate up to two orders\nof magnitude performance improvement while enabling online adversarial sample\ndetection.\n",
"title": "DeepFense: Online Accelerated Defense Against Adversarial Deep Learning"
} | null | null | null | null | true | null | 318 | null | Default | null | null |
null | {
"abstract": " Users form information trails as they browse the web, checkin with a\ngeolocation, rate items, or consume media. A common problem is to predict what\na user might do next for the purposes of guidance, recommendation, or\nprefetching. First-order and higher-order Markov chains have been widely used\nmethods to study such sequences of data. First-order Markov chains are easy to\nestimate, but lack accuracy when history matters. Higher-order Markov chains,\nin contrast, have too many parameters and suffer from overfitting the training\ndata. Fitting these parameters with regularization and smoothing only offers\nmild improvements. In this paper we propose the retrospective higher-order\nMarkov process (RHOMP) as a low-parameter model for such sequences. This model\nis a special case of a higher-order Markov chain where the transitions depend\nretrospectively on a single history state instead of an arbitrary combination\nof history states. There are two immediate computational advantages: the number\nof parameters is linear in the order of the Markov chain and the model can be\nfit to large state spaces. Furthermore, by providing a specific structure to\nthe higher-order chain, RHOMPs improve the model accuracy by efficiently\nutilizing history states without risks of overfitting the data. We demonstrate\nhow to estimate a RHOMP from data and we demonstrate the effectiveness of our\nmethod on various real application datasets spanning geolocation data, review\nsequences, and business locations. The RHOMP model uniformly outperforms\nhigher-order Markov chains, Kneser-Ney regularization, and tensor\nfactorizations in terms of prediction accuracy.\n",
"title": "Retrospective Higher-Order Markov Processes for User Trails"
} | null | null | null | null | true | null | 319 | null | Default | null | null |
null | {
"abstract": " We analyze two novel randomized variants of the Frank-Wolfe (FW) or\nconditional gradient algorithm. While classical FW algorithms require solving a\nlinear minimization problem over the domain at each iteration, the proposed\nmethod only requires to solve a linear minimization problem over a small\n\\emph{subset} of the original domain. The first algorithm that we propose is a\nrandomized variant of the original FW algorithm and achieves a\n$\\mathcal{O}(1/t)$ sublinear convergence rate as in the deterministic\ncounterpart. The second algorithm is a randomized variant of the Away-step FW\nalgorithm, and again as its deterministic counterpart, reaches linear (i.e.,\nexponential) convergence rate making it the first provably convergent\nrandomized variant of Away-step FW. In both cases, while subsampling reduces\nthe convergence rate by a constant factor, the linear minimization step can be\na fraction of the cost of that of the deterministic versions, especially when\nthe data is streamed. We illustrate computational gains of the algorithms on\nregression problems, involving both $\\ell_1$ and latent group lasso penalties.\n",
"title": "Frank-Wolfe with Subsampling Oracle"
} | null | null | null | null | true | null | 320 | null | Default | null | null |
null | {
"abstract": " In this paper, we prove a mean value formula for bounded subharmonic\nHermitian matrix valued function on a complete Riemannian manifold with\nnonnegative Ricci curvature. As its application, we obtain a Liouville type\ntheorem for the complex Monge-Ampère equation on product manifolds.\n",
"title": "A mean value formula and a Liouville theorem for the complex Monge-Ampère equation"
} | null | null | null | null | true | null | 321 | null | Default | null | null |
null | {
"abstract": " In this work, we investigate the value of uncertainty modeling in 3D\nsuper-resolution with convolutional neural networks (CNNs). Deep learning has\nshown success in a plethora of medical image transformation problems, such as\nsuper-resolution (SR) and image synthesis. However, the highly ill-posed nature\nof such problems results in inevitable ambiguity in the learning of networks.\nWe propose to account for intrinsic uncertainty through a per-patch\nheteroscedastic noise model and for parameter uncertainty through approximate\nBayesian inference in the form of variational dropout. We show that the\ncombined benefits of both lead to the state-of-the-art performance SR of\ndiffusion MR brain images in terms of errors compared to ground truth. We\nfurther show that the reduced error scores produce tangible benefits in\ndownstream tractography. In addition, the probabilistic nature of the methods\nnaturally confers a mechanism to quantify uncertainty over the super-resolved\noutput. We demonstrate through experiments on both healthy and pathological\nbrains the potential utility of such an uncertainty measure in the risk\nassessment of the super-resolved images for subsequent clinical use.\n",
"title": "Bayesian Image Quality Transfer with CNNs: Exploring Uncertainty in dMRI Super-Resolution"
} | null | null | [
"Computer Science"
]
| null | true | null | 322 | null | Validated | null | null |
null | {
"abstract": " Superconductor-Ferromagnet (SF) heterostructures are of interest due to\nnumerous phenomena related to the spin-dependent interaction of Cooper pairs\nwith the magnetization. Here we address the effects of a magnetic insulator on\nthe density of states of a superconductor based on a recently developed\nboundary condition for strongly spin-dependent interfaces. We show that the\nboundary to a magnetic insulator has a similar effect like the presence of\nmagnetic impurities. In particular we find that the impurity effects of\nstrongly scattering localized spins leading to the formation of Shiba bands can\nbe mapped onto the boundary problem.\n",
"title": "Yu-Shiba-Rusinov bands in superconductors in contact with a magnetic insulator"
} | null | null | [
"Physics"
]
| null | true | null | 323 | null | Validated | null | null |
null | {
"abstract": " We present the first general purpose framework for marginal maximum a\nposteriori estimation of probabilistic program variables. By using a series of\ncode transformations, the evidence of any probabilistic program, and therefore\nof any graphical model, can be optimized with respect to an arbitrary subset of\nits sampled variables. To carry out this optimization, we develop the first\nBayesian optimization package to directly exploit the source code of its\ntarget, leading to innovations in problem-independent hyperpriors, unbounded\noptimization, and implicit constraint satisfaction; delivering significant\nperformance improvements over prominent existing packages. We present\napplications of our method to a number of tasks including engineering design\nand parameter optimization.\n",
"title": "Bayesian Optimization for Probabilistic Programs"
} | null | null | null | null | true | null | 324 | null | Default | null | null |
null | {
"abstract": " Long-term load forecasting plays a vital role for utilities and planners in\nterms of grid development and expansion planning. An overestimate of long-term\nelectricity load will result in substantial wasted investment in the\nconstruction of excess power facilities, while an underestimate of future load\nwill result in insufficient generation and unmet demand. This paper presents\nfirst-of-its-kind approach to use multiplicative error model (MEM) in\nforecasting load for long-term horizon. MEM originates from the structure of\nautoregressive conditional heteroscedasticity (ARCH) model where conditional\nvariance is dynamically parameterized and it multiplicatively interacts with an\ninnovation term of time-series. Historical load data, accessed from a U.S.\nregional transmission operator, and recession data for years 1993-2016 is used\nin this study. The superiority of considering volatility is proven by\nout-of-sample forecast results as well as directional accuracy during the great\neconomic recession of 2008. To incorporate future volatility, backtesting of\nMEM model is performed. Two performance indicators used to assess the proposed\nmodel are mean absolute percentage error (for both in-sample model fit and\nout-of-sample forecasts) and directional accuracy.\n",
"title": "Long-Term Load Forecasting Considering Volatility Using Multiplicative Error Model"
} | null | null | null | null | true | null | 325 | null | Default | null | null |
null | {
"abstract": " Many empirical studies document power law behavior in size distributions of\neconomic interest such as cities, firms, income, and wealth. One mechanism for\ngenerating such behavior combines independent and identically distributed\nGaussian additive shocks to log-size with a geometric age distribution. We\ngeneralize this mechanism by allowing the shocks to be non-Gaussian (but\nlight-tailed) and dependent upon a Markov state variable. Our main results\nprovide sharp bounds on tail probabilities and simple formulas for Pareto\nexponents. We present two applications: (i) we show that the tails of the\nwealth distribution in a heterogeneous-agent dynamic general equilibrium model\nwith idiosyncratic endowment risk decay exponentially, unlike models with\ninvestment risk where the tails may be Paretian, and (ii) we show that a random\ngrowth model for the population dynamics of Japanese prefectures is consistent\nwith the observed Pareto exponent but only after allowing for Markovian\ndynamics.\n",
"title": "Geometrically stopped Markovian random growth processes and Pareto tails"
} | null | null | [
"Mathematics"
]
| null | true | null | 326 | null | Validated | null | null |
null | {
"abstract": " In topological quantum computing, information is encoded in \"knotted\" quantum\nstates of topological phases of matter, thus being locked into topology to\nprevent decay. Topological precision has been confirmed in quantum Hall liquids\nby experiments to an accuracy of $10^{-10}$, and harnessed to stabilize quantum\nmemory. In this survey, we discuss the conceptual development of this\ninterdisciplinary field at the juncture of mathematics, physics and computer\nscience. Our focus is on computing and physical motivations, basic mathematical\nnotions and results, open problems and future directions related to and/or\ninspired by topological quantum computing.\n",
"title": "Mathematics of Topological Quantum Computing"
} | null | null | [
"Physics",
"Mathematics"
]
| null | true | null | 327 | null | Validated | null | null |
null | {
"abstract": " $ \\def\\vecc#1{\\boldsymbol{#1}} $We design a polynomial time algorithm that\nfor any weighted undirected graph $G = (V, E,\\vecc w)$ and sufficiently large\n$\\delta > 1$, partitions $V$ into subsets $V_1, \\ldots, V_h$ for some $h\\geq\n1$, such that\n$\\bullet$ at most $\\delta^{-1}$ fraction of the weights are between clusters,\ni.e. \\[ w(E - \\cup_{i = 1}^h E(V_i)) \\lesssim \\frac{w(E)}{\\delta};\\]\n$\\bullet$ the effective resistance diameter of each of the induced subgraphs\n$G[V_i]$ is at most $\\delta^3$ times the average weighted degree, i.e. \\[\n\\max_{u, v \\in V_i} \\mathsf{Reff}_{G[V_i]}(u, v) \\lesssim \\delta^3 \\cdot\n\\frac{|V|}{w(E)} \\quad \\text{ for all } i=1, \\ldots, h.\\]\nIn particular, it is possible to remove one percent of weight of edges of any\ngiven graph such that each of the resulting connected components has effective\nresistance diameter at most the inverse of the average weighted degree.\nOur proof is based on a new connection between effective resistance and low\nconductance sets. We show that if the effective resistance between two vertices\n$u$ and $v$ is large, then there must be a low conductance cut separating $u$\nfrom $v$. This implies that very mildly expanding graphs have constant\neffective resistance diameter. We believe that this connection could be of\nindependent interest in algorithm design.\n",
"title": "Graph Clustering using Effective Resistance"
} | null | null | null | null | true | null | 328 | null | Default | null | null |
null | {
"abstract": " Self-supervised learning (SSL) is a reliable learning mechanism in which a\nrobot enhances its perceptual capabilities. Typically, in SSL a trusted,\nprimary sensor cue provides supervised training data to a secondary sensor cue.\nIn this article, a theoretical analysis is performed on the fusion of the\nprimary and secondary cue in a minimal model of SSL. A proof is provided that\ndetermines the specific conditions under which it is favorable to perform\nfusion. In short, it is favorable when (i) the prior on the target value is\nstrong or (ii) the secondary cue is sufficiently accurate. The theoretical\nfindings are validated with computational experiments. Subsequently, a\nreal-world case study is performed to investigate if fusion in SSL is also\nbeneficial when assumptions of the minimal model are not met. In particular, a\nflying robot learns to map pressure measurements to sonar height measurements\nand then fuses the two, resulting in better height estimation. Fusion is also\nbeneficial in the opposite case, when pressure is the primary cue. The analysis\nand results are encouraging to study SSL fusion also for other robots and\nsensors.\n",
"title": "Self-supervised learning: When is fusion of the primary and secondary sensor cue useful?"
} | null | null | null | null | true | null | 329 | null | Default | null | null |
null | {
"abstract": " Deep reinforcement learning on Atari games maps pixel directly to actions;\ninternally, the deep neural network bears the responsibility of both extracting\nuseful information and making decisions based on it. Aiming at devoting entire\ndeep networks to decision making alone, we propose a new method for learning\npolicies and compact state representations separately but simultaneously for\npolicy approximation in reinforcement learning. State representations are\ngenerated by a novel algorithm based on Vector Quantization and Sparse Coding,\ntrained online along with the network, and capable of growing its dictionary\nsize over time. We also introduce new techniques allowing both the neural\nnetwork and the evolution strategy to cope with varying dimensions. This\nenables networks of only 6 to 18 neurons to learn to play a selection of Atari\ngames with performance comparable---and occasionally superior---to\nstate-of-the-art techniques using evolution strategies on deep networks two\norders of magnitude larger.\n",
"title": "Playing Atari with Six Neurons"
} | null | null | null | null | true | null | 330 | null | Default | null | null |
null | {
"abstract": " We consider the problem of isotonic regression, where the underlying signal\n$x$ is assumed to satisfy a monotonicity constraint, that is, $x$ lies in the\ncone $\\{ x\\in\\mathbb{R}^n : x_1 \\leq \\dots \\leq x_n\\}$. We study the isotonic\nprojection operator (projection to this cone), and find a necessary and\nsufficient condition characterizing all norms with respect to which this\nprojection is contractive. This enables a simple and non-asymptotic analysis of\nthe convergence properties of isotonic regression, yielding uniform confidence\nbands that adapt to the local Lipschitz properties of the signal.\n",
"title": "Contraction and uniform convergence of isotonic regression"
} | null | null | null | null | true | null | 331 | null | Default | null | null |
null | {
"abstract": " Heating, Ventilation, and Cooling (HVAC) systems are often the most\nsignificant contributor to the energy usage, and the operational cost, of large\noffice buildings. Therefore, to understand the various factors affecting the\nenergy usage, and to optimize the operational efficiency of building HVAC\nsystems, energy analysts and architects often create simulations (e.g.,\nEnergyPlus or DOE-2), of buildings prior to construction or renovation to\ndetermine energy savings and quantify the Return-on-Investment (ROI). While\nuseful, these simulations usually use static HVAC control strategies such as\nlowering room temperature at night, or reactive control based on simulated room\noccupancy. Recently, advances have been made in HVAC control algorithms that\npredict room occupancy. However, these algorithms depend on costly sensor\ninstallations and the tradeoffs between predictive accuracy, energy savings,\ncomfort and expenses are not well understood. Current simulation frameworks do\nnot support easy analysis of these tradeoffs. Our contribution is a simulation\nframework that can be used to explore this design space by generating objective\nestimates of the energy savings and occupant comfort for different levels of\nHVAC prediction and control performance. We validate our framework on a\nreal-world occupancy dataset spanning 6 months for 235 rooms in a large\nuniversity office building. Using the gold standard of energy use modeling and\nsimulation (Revit and Energy Plus), we compare the energy consumption and\noccupant comfort in 29 independent simulations that explore our parameter\nspace. Our results highlight a number of potentially useful tradeoffs with\nrespect to energy savings, comfort, and algorithmic performance among\npredictive, reactive, and static schedules, for a stakeholder of our building.\n",
"title": "A Systematic Approach for Exploring Tradeoffs in Predictive HVAC Control Systems for Buildings"
} | null | null | null | null | true | null | 332 | null | Default | null | null |
null | {
"abstract": " In this paper, we consider the problem of identifying the type (local\nminimizer, maximizer or saddle point) of a given isolated real critical point\n$c$, which is degenerate, of a multivariate polynomial function $f$. To this\nend, we introduce the definition of faithful radius of $c$ by means of the\ncurve of tangency of $f$. We show that the type of $c$ can be determined by the\nglobal extrema of $f$ over the Euclidean ball centered at $c$ with a faithful\nradius.We propose algorithms to compute a faithful radius of $c$ and determine\nits type.\n",
"title": "On types of degenerate critical points of real polynomial functions"
} | null | null | null | null | true | null | 333 | null | Default | null | null |
null | {
"abstract": " Identification of patients at high risk for readmission could help reduce\nmorbidity and mortality as well as healthcare costs. Most of the existing\nstudies on readmission prediction did not compare the contribution of data\ncategories. In this study we analyzed relative contribution of 90,101 variables\nacross 398,884 admission records corresponding to 163,468 patients, including\npatient demographics, historical hospitalization information, discharge\ndisposition, diagnoses, procedures, medications and laboratory test results. We\nestablished an interpretable readmission prediction model based on Logistic\nRegression in scikit-learn, and added the available variables to the model one\nby one in order to analyze the influences of individual data categories on\nreadmission prediction accuracy. Diagnosis related groups (c-statistic\nincrement of 0.0933) and discharge disposition (c-statistic increment of\n0.0269) were the strongest contributors to model accuracy. Additionally, we\nalso identified the top ten contributing variables in every data category.\n",
"title": "Contribution of Data Categories to Readmission Prediction Accuracy"
} | null | null | null | null | true | null | 334 | null | Default | null | null |
null | {
"abstract": " Tropical recurrent sequences are introduced satisfying a given vector (being\na tropical counterpart of classical linear recurrent sequences). We consider\nthe case when Newton polygon of the vector has a single (bounded) edge. In this\ncase there are periodic tropical recurrent sequences which are similar to\nclassical linear recurrent sequences. A question is studied when there exists a\nnon-periodic tropical recurrent sequence satisfying a given vector, and partial\nanswers are provided to this question. Also an algorithm is designed which\ntests existence of non-periodic tropical recurrent sequences satisfying a given\nvector with integer coordinates. Finally, we introduce a tropical entropy of a\nvector and provide some bounds on it.\n",
"title": "Tropical recurrent sequences"
} | null | null | null | null | true | null | 335 | null | Default | null | null |
null | {
"abstract": " An interesting approach to analyzing neural networks that has received\nrenewed attention is to examine the equivalent kernel of the neural network.\nThis is based on the fact that a fully connected feedforward network with one\nhidden layer, a certain weight distribution, an activation function, and an\ninfinite number of neurons can be viewed as a mapping into a Hilbert space. We\nderive the equivalent kernels of MLPs with ReLU or Leaky ReLU activations for\nall rotationally-invariant weight distributions, generalizing a previous result\nthat required Gaussian weight distributions. Additionally, the Central Limit\nTheorem is used to show that for certain activation functions, kernels\ncorresponding to layers with weight distributions having $0$ mean and finite\nabsolute third moment are asymptotically universal, and are well approximated\nby the kernel corresponding to layers with spherical Gaussian weights. In deep\nnetworks, as depth increases the equivalent kernel approaches a pathological\nfixed point, which can be used to argue why training randomly initialized\nnetworks can be difficult. Our results also have implications for weight\ninitialization.\n",
"title": "Invariance of Weight Distributions in Rectified MLPs"
} | null | null | null | null | true | null | 336 | null | Default | null | null |
null | {
"abstract": " We explore whether useful temporal neural generative models can be learned\nfrom sequential data without back-propagation through time. We investigate the\nviability of a more neurocognitively-grounded approach in the context of\nunsupervised generative modeling of sequences. Specifically, we build on the\nconcept of predictive coding, which has gained influence in cognitive science,\nin a neural framework. To do so we develop a novel architecture, the Temporal\nNeural Coding Network, and its learning algorithm, Discrepancy Reduction. The\nunderlying directed generative model is fully recurrent, meaning that it\nemploys structural feedback connections and temporal feedback connections,\nyielding information propagation cycles that create local learning signals.\nThis facilitates a unified bottom-up and top-down approach for information\ntransfer inside the architecture. Our proposed algorithm shows promise on the\nbouncing balls generative modeling problem. Further experiments could be\nconducted to explore the strengths and weaknesses of our approach.\n",
"title": "Learning to Adapt by Minimizing Discrepancy"
} | null | null | null | null | true | null | 337 | null | Default | null | null |
null | {
"abstract": " Kitaev quantum spin liquid is a topological magnetic quantum state\ncharacterized by Majorana fermions of fractionalized spin excitations, which\nare identical to their own antiparticles. Here, we demonstrate emergence of\nMajorana fermions thermally fractionalized in the Kitaev honeycomb spin lattice\n{\\alpha}-RuCl3. The specific heat data unveil the characteristic two-stage\nrelease of magnetic entropy involving localized and itinerant Majorana\nfermions. The inelastic neutron scattering results further corroborate these\ntwo distinct fermions by exhibiting quasielastic excitations at low energies\naround the Brillouin zone center and Y-shaped magnetic continuum at high\nenergies, which are evident for the ferromagnetic Kitaev model. Our results\nprovide an opportunity to build a unified conceptual framework of\nfractionalized excitations, applicable also for the quantum Hall states,\nsuperconductors, and frustrated magnets.\n",
"title": "Incarnation of Majorana Fermions in Kitaev Quantum Spin Lattice"
} | null | null | null | null | true | null | 338 | null | Default | null | null |
null | {
"abstract": " Identifying the mechanism by which high energy Lyman continuum (LyC) photons\nescaped from early galaxies is one of the most pressing questions in cosmic\nevolution. Haro 11 is the best known local LyC leaking galaxy, providing an\nimportant opportunity to test our understanding of LyC escape. The observed LyC\nemission in this galaxy presumably originates from one of the three bright,\nphotoionizing knots known as A, B, and C. It is known that Knot C has strong\nLy$\\alpha$ emission, and Knot B hosts an unusually bright ultraluminous X-ray\nsource, which may be a low-luminosity AGN. To clarify the LyC source, we carry\nout ionization-parameter mapping (IPM) by obtaining narrow-band imaging from\nthe Hubble Space Telescope WFC3 and ACS cameras to construct spatially resolved\nratio maps of [OIII]/[OII] emission from the galaxy. IPM traces the ionization\nstructure of the interstellar medium and allows us to identify optically thin\nregions. To optimize the continuum subtraction, we introduce a new method for\ndetermining the best continuum scale factor derived from the mode of the\ncontinuum-subtracted, image flux distribution. We find no conclusive evidence\nof LyC escape from Knots B or C, but instead, we identify a high-ionization\nregion extending over at least 1 kpc from Knot A. Knot A shows evidence of an\nextremely young age ($\\lesssim 1$ Myr), perhaps containing very massive stars\n($>100$ M$_\\odot$). It is weak in Ly$\\alpha$, so if it is confirmed as the LyC\nsource, our results imply that LyC emission may be independent of Ly$\\alpha$\nemission.\n",
"title": "Haro 11: Where is the Lyman continuum source?"
} | null | null | null | null | true | null | 339 | null | Default | null | null |
null | {
"abstract": " Dependently typed languages such as Coq are used to specify and verify the\nfull functional correctness of source programs. Type-preserving compilation can\nbe used to preserve these specifications and proofs of correctness through\ncompilation into the generated target-language programs. Unfortunately,\ntype-preserving compilation of dependent types is hard. In essence, the problem\nis that dependent type systems are designed around high-level compositional\nabstractions to decide type checking, but compilation interferes with the\ntype-system rules for reasoning about run-time terms.\nWe develop a type-preserving closure-conversion translation from the Calculus\nof Constructions (CC) with strong dependent pairs ($\\Sigma$ types)---a subset\nof the core language of Coq---to a type-safe, dependently typed compiler\nintermediate language named CC-CC. The central challenge in this work is how to\ntranslate the source type-system rules for reasoning about functions into\ntarget type-system rules for reasoning about closures. To justify these rules,\nwe prove soundness of CC-CC by giving a model in CC. In addition to type\npreservation, we prove correctness of separate compilation.\n",
"title": "Typed Closure Conversion for the Calculus of Constructions"
} | null | null | null | null | true | null | 340 | null | Default | null | null |
null | {
"abstract": " Any generic closed curve in the plane can be transformed into a simple closed\ncurve by a finite sequence of local transformations called homotopy moves. We\nprove that simplifying a planar closed curve with $n$ self-crossings requires\n$\\Theta(n^{3/2})$ homotopy moves in the worst case. Our algorithm improves the\nbest previous upper bound $O(n^2)$, which is already implicit in the classical\nwork of Steinitz; the matching lower bound follows from the construction of\nclosed curves with large defect, a topological invariant of generic closed\ncurves introduced by Aicardi and Arnold. Our lower bound also implies that\n$\\Omega(n^{3/2})$ facial electrical transformations are required to reduce any\nplane graph with treewidth $\\Omega(\\sqrt{n})$ to a single vertex, matching\nknown upper bounds for rectangular and cylindrical grid graphs. More generally,\nwe prove that transforming one immersion of $k$ circles with at most $n$\nself-crossings into another requires $\\Theta(n^{3/2} + nk + k^2)$ homotopy\nmoves in the worst case. Finally, we prove that transforming one\nnoncontractible closed curve to another on any orientable surface requires\n$\\Omega(n^2)$ homotopy moves in the worst case; this lower bound is tight if\nthe curve is homotopic to a simple closed curve.\n",
"title": "Untangling Planar Curves"
} | null | null | [
"Computer Science",
"Mathematics"
]
| null | true | null | 341 | null | Validated | null | null |
null | {
"abstract": " Consider a channel with a given input distribution. Our aim is to degrade it\nto a channel with at most L output letters. One such degradation method is the\nso called \"greedy-merge\" algorithm. We derive an upper bound on the reduction\nin mutual information between input and output. For fixed input alphabet size\nand variable L, the upper bound is within a constant factor of an\nalgorithm-independent lower bound. Thus, we establish that greedy-merge is\noptimal in the power-law sense.\n",
"title": "Greedy-Merge Degrading has Optimal Power-Law"
} | null | null | null | null | true | null | 342 | null | Default | null | null |
null | {
"abstract": " Let $L_0$ and $L_1$ be two distinct rays emanating from the origin and let\n${\\mathcal F}$ be the family of all functions holomorphic in the unit disk\n${\\mathbb D}$ for which all zeros lie on $L_0$ while all $1$-points lie on\n$L_1$. It is shown that ${\\mathcal F}$ is normal in ${\\mathbb\nD}\\backslash\\{0\\}$. The case where $L_0$ is the positive real axis and $L_1$ is\nthe negative real axis is studied in more detail.\n",
"title": "Radially distributed values and normal families"
} | null | null | null | null | true | null | 343 | null | Default | null | null |
null | {
"abstract": " A habitable exoplanet is a world that can maintain stable liquid water on its\nsurface. Techniques and approaches to characterizing such worlds are essential,\nas performing a census of Earth-like planets that may or may not have life will\ninform our understanding of how frequently life originates and is sustained on\nworlds other than our own. Observational techniques like high contrast imaging\nand transit spectroscopy can reveal key indicators of habitability for\nexoplanets. Both polarization measurements and specular reflectance from oceans\n(also known as \"glint\") can provide direct evidence for surface liquid water,\nwhile constraining surface pressure and temperature (from moderate resolution\nspectra) can indicate liquid water stability. Indirect evidence for\nhabitability can come from a variety of sources, including observations of\nvariability due to weather, surface mapping studies, and/or measurements of\nwater vapor or cloud profiles that indicate condensation near a surface.\nApproaches to making the types of measurements that indicate habitability are\ndiverse, and have different considerations for the required wavelength range,\nspectral resolution, maximum noise levels, stellar host temperature, and\nobserving geometry.\n",
"title": "Characterizing Exoplanet Habitability"
} | null | null | [
"Physics"
]
| null | true | null | 344 | null | Validated | null | null |
null | {
"abstract": " In this paper, we evaluate the accuracy of deep learning approaches on\ngeospatial vector geometry classification tasks. The purpose of this evaluation\nis to investigate the ability of deep learning models to learn from geometry\ncoordinates directly. Previous machine learning research applied to geospatial\npolygon data did not use geometries directly, but derived properties thereof.\nThese are produced by way of extracting geometry properties such as Fourier\ndescriptors. Instead, our introduced deep neural net architectures are able to\nlearn on sequences of coordinates mapped directly from polygons. In three\nclassification tasks we show that the deep learning architectures are\ncompetitive with common learning algorithms that require extracted features.\n",
"title": "Deep Learning for Classification Tasks on Geospatial Vector Polygons"
} | null | null | null | null | true | null | 345 | null | Default | null | null |
null | {
"abstract": " The distance standard deviation, which arises in distance correlation\nanalysis of multivariate data, is studied as a measure of spread. New\nrepresentations for the distance standard deviation are obtained in terms of\nGini's mean difference and in terms of the moments of spacings of order\nstatistics. Inequalities for the distance variance are derived, proving that\nthe distance standard deviation is bounded above by the classical standard\ndeviation and by Gini's mean difference. Further, it is shown that the distance\nstandard deviation satisfies the axiomatic properties of a measure of spread.\nExplicit closed-form expressions for the distance variance are obtained for a\nbroad class of parametric distributions. The asymptotic distribution of the\nsample distance variance is derived.\n",
"title": "The Distance Standard Deviation"
} | null | null | [
"Mathematics",
"Statistics"
]
| null | true | null | 346 | null | Validated | null | null |
null | {
"abstract": " One of the most basic skills a robot should possess is predicting the effect\nof physical interactions with objects in the environment. This enables optimal\naction selection to reach a certain goal state. Traditionally, dynamics are\napproximated by physics-based analytical models. These models rely on specific\nstate representations that may be hard to obtain from raw sensory data,\nespecially if no knowledge of the object shape is assumed. More recently, we\nhave seen learning approaches that can predict the effect of complex physical\ninteractions directly from sensory input. It is however an open question how\nfar these models generalize beyond their training data. In this work, we\ninvestigate the advantages and limitations of neural network based learning\napproaches for predicting the effects of actions based on sensory input and\nshow how analytical and learned models can be combined to leverage the best of\nboth worlds. As physical interaction task, we use planar pushing, for which\nthere exists a well-known analytical model and a large real-world dataset. We\npropose to use a convolutional neural network to convert raw depth images or\norganized point clouds into a suitable representation for the analytical model\nand compare this approach to using neural networks for both, perception and\nprediction. A systematic evaluation of the proposed approach on a very large\nreal-world dataset shows two main advantages of the hybrid architecture.\nCompared to a pure neural network, it significantly (i) reduces required\ntraining data and (ii) improves generalization to novel physical interaction.\n",
"title": "Combining learned and analytical models for predicting action effects"
} | null | null | null | null | true | null | 347 | null | Default | null | null |
null | {
"abstract": " In this paper, we give a complete characterization of Leavitt path algebras\nwhich are graded $\\Sigma $-$V$ rings, that is, rings over which a direct sum of\narbitrary copies of any graded simple module is graded injective. Specifically,\nwe show that a Leavitt path algebra $L$ over an arbitrary graph $E$ is a graded\n$\\Sigma $-$V$ ring if and only if it is a subdirect product of matrix rings of\narbitrary size but with finitely many non-zero entries over $K$ or\n$K[x,x^{-1}]$ with appropriate matrix gradings. We also obtain a graphical\ncharacterization of such a graded $\\Sigma $-$V$ ring $L$% . When the graph $E$\nis finite, we show that $L$ is a graded $\\Sigma $-$V$ ring $\\Longleftrightarrow\nL$ is graded directly-finite $\\Longleftrightarrow L $ has bounded index of\nnilpotence $\\Longleftrightarrow $ $L$ is graded semi-simple. Examples show that\nthe equivalence of these properties in the preceding statement no longer holds\nwhen the graph $E$ is infinite. Following this, we also characterize Leavitt\npath algebras $L$ which are non-graded $\\Sigma $-$V$ rings. Graded rings which\nare graded directly-finite are explored and it is shown that if a Leavitt path\nalgebra $L$ is a graded $\\Sigma$-$V$ ring, then $L$ is always graded\ndirectly-finite. Examples show the subtle differences between graded and\nnon-graded directly-finite rings. Leavitt path algebras which are graded\ndirectly-finite are shown to be directed unions of graded semisimple rings.\nUsing this, we give an alternative proof of a theorem of Vaš \\cite{V} on\ndirectly-finite Leavitt path algebras.\n",
"title": "Leavitt path algebras: Graded direct-finiteness and graded $Σ$-injective simple modules"
} | null | null | null | null | true | null | 348 | null | Default | null | null |
null | {
"abstract": " We derive the mean squared error convergence rates of kernel density-based\nplug-in estimators of mutual information measures between two multidimensional\nrandom variables $\\mathbf{X}$ and $\\mathbf{Y}$ for two cases: 1) $\\mathbf{X}$\nand $\\mathbf{Y}$ are both continuous; 2) $\\mathbf{X}$ is continuous and\n$\\mathbf{Y}$ is discrete. Using the derived rates, we propose an ensemble\nestimator of these information measures for the second case by taking a\nweighted sum of the plug-in estimators with varied bandwidths. The resulting\nensemble estimator achieves the $1/N$ parametric convergence rate when the\nconditional densities of the continuous variables are sufficiently smooth. To\nthe best of our knowledge, this is the first nonparametric mutual information\nestimator known to achieve the parametric convergence rate for this case, which\nfrequently arises in applications (e.g. variable selection in classification).\nThe estimator is simple to implement as it uses the solution to an offline\nconvex optimization problem and simple plug-in estimators. A central limit\ntheorem is also derived for the ensemble estimator. Ensemble estimators that\nachieve the parametric rate are also derived for the first case ($\\mathbf{X}$\nand $\\mathbf{Y}$ are both continuous) and another case 3) $\\mathbf{X}$ and\n$\\mathbf{Y}$ may have any mixture of discrete and continuous components.\n",
"title": "Ensemble Estimation of Mutual Information"
} | null | null | null | null | true | null | 349 | null | Default | null | null |
null | {
"abstract": " Widespread use of social media has led to the generation of substantial\namounts of information about individuals, including health-related information.\nSocial media provides the opportunity to study health-related information about\nselected population groups who may be of interest for a particular study. In\nthis paper, we explore the possibility of utilizing social media to perform\ntargeted data collection and analysis from a particular population group --\npregnant women. We hypothesize that we can use social media to identify cohorts\nof pregnant women and follow them over time to analyze crucial health-related\ninformation. To identify potentially pregnant women, we employ simple\nrule-based searches that attempt to detect pregnancy announcements with\nmoderate precision. To further filter out false positives and noise, we employ\na supervised classifier using a small number of hand-annotated data. We then\ncollect their posts over time to create longitudinal health timelines and\nattempt to divide the timelines into different pregnancy trimesters. Finally,\nwe assess the usefulness of the timelines by performing a preliminary analysis\nto estimate drug intake patterns of our cohort at different trimesters. Our\nrule-based cohort identification technique collected 53,820 users over thirty\nmonths from Twitter. Our pregnancy announcement classification technique\nachieved an F-measure of 0.81 for the pregnancy class, resulting in 34,895 user\ntimelines. Analysis of the timelines revealed that pertinent health-related\ninformation, such as drug-intake and adverse reactions can be mined from the\ndata. Our approach to using user timelines in this fashion has produced very\nencouraging results and can be employed for other important tasks where\ncohorts, for which health-related information may not be available from other\nsources, are required to be followed over time to derive population-based\nestimates.\n",
"title": "Social media mining for identification and exploration of health-related information from pregnant women"
} | null | null | null | null | true | null | 350 | null | Default | null | null |
null | {
"abstract": " The vortex method is a common numerical and theoretical approach used to\nimplement the motion of an ideal flow, in which the vorticity is approximated\nby a sum of point vortices, so that the Euler equations read as a system of\nordinary differential equations. Such a method is well justified in the full\nplane, thanks to the explicit representation formulas of Biot and Savart. In an\nexterior domain, we also replace the impermeable boundary by a collection of\npoint vortices generating the circulation around the obstacle. The density of\nthese point vortices is chosen in order that the flow remains tangent at\nmidpoints between adjacent vortices. In this work, we provide a rigorous\njustification for this method in exterior domains. One of the main mathematical\ndifficulties being that the Biot-Savart kernel defines a singular integral\noperator when restricted to a curve. For simplicity and clarity, we only treat\nthe case of the unit disk in the plane approximated by a uniformly distributed\nmesh of point vortices. The complete and general version of our work is\navailable in [arXiv:1707.01458].\n",
"title": "The vortex method for 2D ideal flows in the exterior of a disk"
} | null | null | [
"Mathematics"
]
| null | true | null | 351 | null | Validated | null | null |
null | {
"abstract": " Let $R$ be an associative ring with unit and denote by $K({\\rm R\n\\mbox{-}Proj})$ the homotopy category of complexes of projective left\n$R$-modules. Neeman proved the theorem that $K({\\rm R \\mbox{-}Proj})$ is\n$\\aleph_1$-compactly generated, with the category $K^+ ({\\rm R \\mbox{-}proj})$\nof left bounded complexes of finitely generated projective $R$-modules\nproviding an essentially small class of such generators. Another proof of\nNeeman's theorem is explained, using recent ideas of Christensen and Holm, and\nEmmanouil. The strategy of the proof is to show that every complex in $K({\\rm R\n\\mbox{-}Proj})$ vanishes in the Bousfield localization $K({\\rm R\n\\mbox{-}Flat})/\\langle K^+ ({\\rm R \\mbox{-}proj}) \\rangle.$\n",
"title": "Neeman's characterization of K(R-Proj) via Bousfield localization"
} | null | null | null | null | true | null | 352 | null | Default | null | null |
null | {
"abstract": " For an arbitrary finite family of semi-algebraic/definable functions, we\nconsider the corresponding inequality constraint set and we study qualification\nconditions for perturbations of this set. In particular we prove that all\npositive diagonal perturbations, save perhaps a finite number of them, ensure\nthat any point within the feasible set satisfies Mangasarian-Fromovitz\nconstraint qualification. Using the Milnor-Thom theorem, we provide a bound for\nthe number of singular perturbations when the constraints are polynomial\nfunctions. Examples show that the order of magnitude of our exponential bound\nis relevant. Our perturbation approach provides a simple protocol to build\nsequences of \"regular\" problems approximating an arbitrary\nsemi-algebraic/definable problem. Applications to sequential quadratic\nprogramming methods and sum of squares relaxation are provided.\n",
"title": "Qualification Conditions in Semi-algebraic Programming"
} | null | null | null | null | true | null | 353 | null | Default | null | null |
null | {
"abstract": " Riemannian geometry is a particular case of Hamiltonian mechanics: the orbits\nof the hamiltonian $H=\\frac{1}{2}g^{ij}p_{i}p_{j}$ are the geodesics. Given a\nsymplectic manifold (\\Gamma,\\omega), a hamiltonian $H:\\Gamma\\to\\mathbb{R}$ and\na Lagrangian sub-manifold $M\\subset\\Gamma$ we find a generalization of the\nnotion of curvature. The particular case\n$H=\\frac{1}{2}g^{ij}\\left[p_{i}-A_{i}\\right]\\left[p_{j}-A_{j}\\right]+\\phi $ of\na particle moving in a gravitational, electromagnetic and scalar fields is\nstudied in more detail. The integral of the generalized Ricci tensor w.r.t. the\nBoltzmann weight reduces to the action principle\n$\\int\\left[R+\\frac{1}{4}F_{ik}F_{jl}g^{kl}g^{ij}-g^{ij}\\partial_{i}\\phi\\partial_{j}\\phi\\right]e^{-\\phi}\\sqrt{g}d^{n}q$\nfor the scalar, vector and tensor fields.\n",
"title": "Curvature in Hamiltonian Mechanics And The Einstein-Maxwell-Dilaton Action"
} | null | null | null | null | true | null | 354 | null | Default | null | null |
null | {
"abstract": " We investigate how a neural network can learn perception actions loops for\nnavigation in unknown environments. Specifically, we consider how to learn to\nnavigate in environments populated with cul-de-sacs that represent convex local\nminima that the robot could fall into instead of finding a set of feasible\nactions that take it to the goal. Traditional methods rely on maintaining a\nglobal map to solve the problem of over coming a long cul-de-sac. However, due\nto errors induced from local and global drift, it is highly challenging to\nmaintain such a map for long periods of time. One way to mitigate this problem\nis by using learning techniques that do not rely on hand engineered map\nrepresentations and instead output appropriate control policies directly from\ntheir sensory input. We first demonstrate that such a problem cannot be solved\ndirectly by deep reinforcement learning due to the sparse reward structure of\nthe environment. Further, we demonstrate that deep supervised learning also\ncannot be used directly to solve this problem. We then investigate network\nmodels that offer a combination of reinforcement learning and supervised\nlearning and highlight the significance of adding fully differentiable memory\nunits to such networks. We evaluate our networks on their ability to generalize\nto new environments and show that adding memory to such networks offers huge\njumps in performance\n",
"title": "End-to-End Navigation in Unknown Environments using Neural Networks"
} | null | null | null | null | true | null | 355 | null | Default | null | null |
null | {
"abstract": " In this paper, we present a combinatorial approach to the opposite 2-variable\nbi-free partial $S$-transforms where the opposite multiplication is used on the\nright. In addition, extensions of this partial $S$-transforms to the\nconditional bi-free and operator-valued bi-free settings are discussed.\n",
"title": "A Combinatorial Approach to the Opposite Bi-Free Partial $S$-Transform"
} | null | null | null | null | true | null | 356 | null | Default | null | null |
null | {
"abstract": " Many giant exoplanets are found near their Roche limit and in mildly\neccentric orbits. In this study we examine the fate of such planets through\nRoche-lobe overflow as a function of the physical properties of the binary\ncomponents, including the eccentricity and the asynchronicity of the rotating\nplanet. We use a direct three-body integrator to compute the trajectories of\nthe lost mass in the ballistic limit and investigate the possible outcomes. We\nfind three different outcomes for the mass transferred through the Lagrangian\npoint $L_{1}$: (i) self-accretion by the planet, (ii) direct impact on the\nstellar surface, (iii) disk formation around the star. We explore the parameter\nspace of the three different regimes and find that at low eccentricities,\n$e\\lesssim 0.2$, mass overflow leads to disk formation for most systems, while\nfor higher eccentricities or retrograde orbits self-accretion is the only\npossible outcome. We conclude that the assumption often made in previous work\nthat when a planet overflows its Roche lobe it is quickly disrupted and\naccreted by the star is not always valid.\n",
"title": "Roche-lobe overflow in eccentric planet-star systems"
} | null | null | null | null | true | null | 357 | null | Default | null | null |
null | {
"abstract": " A new Short-Orbit Spectrometer (SOS) has been constructed and installed\nwithin the experimental facility of the A1 collaboration at Mainz Microtron\n(MAMI), with the goal to detect low-energy pions. It is equipped with a\nBrowne-Buechner magnet and a detector system consisting of two helium-ethane\nbased drift chambers and a scintillator telescope made of five layers. The\ndetector system allows detection of pions in the momentum range of 50 - 147\nMeV/c, which corresponds to 8.7 - 63 MeV kinetic energy. The spectrometer can\nbe placed at a distance range of 54 - 66 cm from the target center. Two\ncollimators are available for the measurements, one having 1.8 msr aperture and\nthe other having 7 msr aperture. The Short-Orbit Spectrometer has been\nsuccessfully calibrated and used in coincidence measurements together with the\nstandard magnetic spectrometers of the A1 collaboration.\n",
"title": "A short-orbit spectrometer for low-energy pion detection in electroproduction experiments at MAMI"
} | null | null | null | null | true | null | 358 | null | Default | null | null |
null | {
"abstract": " Capacitive deionization (CDI) is a fast-emerging water desalination\ntechnology in which a small cell voltage of ~1 V across porous carbon\nelectrodes removes salt from feedwaters via electrosorption. In flow-through\nelectrode (FTE) CDI cell architecture, feedwater is pumped through macropores\nor laser perforated channels in porous electrodes, enabling highly compact\ncells with parallel flow and electric field, as well as rapid salt removal. We\nhere present a one-dimensional model describing water desalination by FTE CDI,\nand a comparison to data from a custom-built experimental cell. The model\nemploys simple cell boundary conditions derived via scaling arguments. We show\ngood model-to-data fits with reasonable values for fitting parameters such as\nthe Stern layer capacitance, micropore volume, and attraction energy. Thus, we\ndemonstrate that from an engineering modeling perspective, an FTE CDI cell may\nbe described with simpler one-dimensional models, unlike more typical\nflow-between electrodes architecture where 2D models are required.\n",
"title": "A one-dimensional model for water desalination by flow-through electrode capacitive deionization"
} | null | null | null | null | true | null | 359 | null | Default | null | null |
null | {
"abstract": " We present bilateral teleoperation system for task learning and robot motion\ngeneration. Our system includes a bilateral teleoperation platform and a deep\nlearning software. The deep learning software refers to human demonstration\nusing the bilateral teleoperation platform to collect visual images and robotic\nencoder values. It leverages the datasets of images and robotic encoder\ninformation to learn about the inter-modal correspondence between visual images\nand robot motion. In detail, the deep learning software uses a combination of\nDeep Convolutional Auto-Encoders (DCAE) over image regions, and Recurrent\nNeural Network with Long Short-Term Memory units (LSTM-RNN) over robot motor\nangles, to learn motion taught be human teleoperation. The learnt models are\nused to predict new motion trajectories for similar tasks. Experimental results\nshow that our system has the adaptivity to generate motion for similar scooping\ntasks. Detailed analysis is performed based on failure cases of the\nexperimental results. Some insights about the cans and cannots of the system\nare summarized.\n",
"title": "Deep Learning Scooping Motion using Bilateral Teleoperations"
} | null | null | null | null | true | null | 360 | null | Default | null | null |
null | {
"abstract": " We propose two multimodal deep learning architectures that allow for\ncross-modal dataflow (XFlow) between the feature extractors, thereby extracting\nmore interpretable features and obtaining a better representation than through\nunimodal learning, for the same amount of training data. These models can\nusefully exploit correlations between audio and visual data, which have a\ndifferent dimensionality and are therefore nontrivially exchangeable. Our work\nimproves on existing multimodal deep learning metholodogies in two essential\nways: (1) it presents a novel method for performing cross-modality (before\nfeatures are learned from individual modalities) and (2) extends the previously\nproposed cross-connections, which only transfer information between streams\nthat process compatible data. Both cross-modal architectures outperformed their\nbaselines (by up to 7.5%) when evaluated on the AVletters dataset.\n",
"title": "XFlow: 1D-2D Cross-modal Deep Neural Networks for Audiovisual Classification"
} | null | null | [
"Computer Science",
"Statistics"
]
| null | true | null | 361 | null | Validated | null | null |
null | {
"abstract": " Educational research has shown that narratives are useful tools that can help\nyoung students make sense of scientific phenomena. Based on previous research,\nI argue that narratives can also become tools for high school students to make\nsense of concepts such as the electric field. In this paper I examine high\nschool students visual and oral narratives in which they describe the\ninteraction among electric charges as if they were characters of a cartoon\nseries. The study investigates: given the prompt to produce narratives for\nelectrostatic phenomena during a classroom activity prior to receiving formal\ninstruction, (1) what ideas of electrostatics do students attend to in their\nnarratives?; (2) what role do students narratives play in their understanding\nof electrostatics? The participants were a group of high school students\nengaged in an open-ended classroom activity prior to receiving formal\ninstruction about electrostatics. During the activity, the group was asked to\ndraw comic strips for electric charges. In addition to individual work,\nstudents shared their work within small groups as well as with the whole group.\nPost activity, six students from a small group were interviewed individually\nabout their work. In this paper I present two cases in which students produced\nnarratives to express their ideas about electrostatics in different ways. In\neach case, I present student work for the comic strip activity (visual\nnarratives), their oral descriptions of their work (oral narratives) during the\ninterview and/or to their peers during class, and the their ideas of the\nelectric interactions expressed through their narratives.\n",
"title": "Making Sense of Physics through Stories: High School Students Narratives about Electric Charges and Interactions"
} | null | null | null | null | true | null | 362 | null | Default | null | null |
null | {
"abstract": " Hospital acquired infections (HAI) are infections acquired within the\nhospital from healthcare workers, patients or from the environment, but which\nhave no connection to the initial reason for the patient's hospital admission.\nHAI are a serious world-wide problem, leading to an increase in mortality\nrates, duration of hospitalisation as well as significant economic burden on\nhospitals. Although clear preventive guidelines exist, studies show that\ncompliance to them is frequently poor. This paper details the software\nperspective for an innovative, business process software based cyber-physical\nsystem that will be implemented as part of a European Union-funded research\nproject. The system is composed of a network of sensors mounted in different\nsites around the hospital, a series of wearables used by the healthcare workers\nand a server side workflow engine. For better understanding, we describe the\nsystem through the lens of a single, simple clinical workflow that is\nresponsible for a significant portion of all hospital infections. The goal is\nthat when completed, the system will be configurable in the sense of\nfacilitating the creation and automated monitoring of those clinical workflows\nthat when combined, account for over 90\\% of hospital infections.\n",
"title": "Preventing Hospital Acquired Infections Through a Workflow-Based Cyber-Physical System"
} | null | null | null | null | true | null | 363 | null | Default | null | null |
null | {
"abstract": " We advocate the use of curated, comprehensive benchmark suites of machine\nlearning datasets, backed by standardized OpenML-based interfaces and\ncomplementary software toolkits written in Python, Java and R. Major\ndistinguishing features of OpenML benchmark suites are (a) ease of use through\nstandardized data formats, APIs, and existing client libraries; (b)\nmachine-readable meta-information regarding the contents of the suite; and (c)\nonline sharing of results, enabling large scale comparisons. As a first such\nsuite, we propose the OpenML100, a machine learning benchmark suite of\n100~classification datasets carefully curated from the thousands of datasets\navailable on OpenML.org.\n",
"title": "OpenML Benchmarking Suites and the OpenML100"
} | null | null | null | null | true | null | 364 | null | Default | null | null |
null | {
"abstract": " In this article, we study orbifold constructions associated with the Leech\nlattice vertex operator algebra. As an application, we prove that the structure\nof a strongly regular holomorphic vertex operator algebra of central charge\n$24$ is uniquely determined by its weight one Lie algebra if the Lie algebra\nhas the type $A_{3,4}^3A_{1,2}$, $A_{4,5}^2$, $D_{4,12}A_{2,6}$, $A_{6,7}$,\n$A_{7,4}A_{1,1}^3$, $D_{5,8}A_{1,2}$ or $D_{6,5}A_{1,1}^2$ by using the reverse\norbifold construction. Our result also provides alternative constructions of\nthese vertex operator algebras (except for the case $A_{6,7}$) from the Leech\nlattice vertex operator algebra.\n",
"title": "On orbifold constructions associated with the Leech lattice vertex operator algebra"
} | null | null | null | null | true | null | 365 | null | Default | null | null |
null | {
"abstract": " Deconstruction of the theme of the 2017 FQXi essay contest is already an\ninteresting exercise in its own right: Teleology is rarely useful in physics\n--- the only known mainstream physics example (black hole event horizons) has a\nvery mixed score-card --- so the \"goals\" and \"aims and intentions\" alluded to\nin the theme of the 2017 FQXi essay contest are already somewhat pushing the\nlimits. Furthermore, \"aims and intentions\" certainly carries the implication of\nconsciousness, and opens up a whole can of worms related to the mind-body\nproblem. As for \"mindless mathematical laws\", that allusion is certainly in\ntension with at least some versions of the \"mathematical universe hypothesis\".\nFinally \"wandering towards a goal\" again carries the implication of\nconsciousness, with all its attendant problems.\nIn this essay I will argue, simply because we do not yet have any really good\nmathematical or physical theory of consciousness, that the theme of this essay\ncontest is premature, and unlikely to lead to any resolution that would be\nwidely accepted in the mathematics or physics communities.\n",
"title": "From mindless mathematics to thinking meat?"
} | null | null | null | null | true | null | 366 | null | Default | null | null |
null | {
"abstract": " The Atacama Large millimetre/submillimetre Array (ALMA) makes use of water\nvapour radiometers (WVR), which monitor the atmospheric water vapour line at\n183 GHz along the line of sight above each antenna to correct for phase delays\nintroduced by the wet component of the troposphere. The application of WVR\nderived phase corrections improve the image quality and facilitate successful\nobservations in weather conditions that were classically marginal or poor. We\npresent work to indicate that a scaling factor applied to the WVR solutions can\nact to further improve the phase stability and image quality of ALMA data. We\nfind reduced phase noise statistics for 62 out of 75 datasets from the\nlong-baseline science verification campaign after a WVR scaling factor is\napplied. The improvement of phase noise translates to an expected coherence\nimprovement in 39 datasets. When imaging the bandpass source, we find 33 of the\n39 datasets show an improvement in the signal-to-noise ratio (S/N) between a\nfew to ~30 percent. There are 23 datasets where the S/N of the science image is\nimproved: 6 by <1%, 11 between 1 and 5%, and 6 above 5%. The higher frequencies\nstudied (band 6 and band 7) are those most improved, specifically datasets with\nlow precipitable water vapour (PWV), <1mm, where the dominance of the wet\ncomponent is reduced. Although these improvements are not profound, phase\nstability improvements via the WVR scaling factor come into play for the higher\nfrequency (>450 GHz) and long-baseline (>5km) observations. These inherently\nhave poorer phase stability and are taken in low PWV (<1mm) conditions for\nwhich we find the scaling to be most effective. A promising explanation for the\nscaling factor is the mixing of dry and wet air components, although other\norigins are discussed. We have produced a python code to allow ALMA users to\nundertake WVR scaling tests and make improvements to their data.\n",
"title": "Phase correction for ALMA - Investigating water vapour radiometer scaling:The long-baseline science verification data case study"
} | null | null | null | null | true | null | 367 | null | Default | null | null |
null | {
"abstract": " In this paper, we study the $\\mu$-ordinary locus of a Shimura variety with\nparahoric level structure. Under the axioms in \\cite{HR}, we show that\n$\\mu$-ordinary locus is a union of some maximal Ekedahl-Kottwitz-Oort-Rapoport\nstrata introduced in \\cite{HR} and we give criteria on the density of the\n$\\mu$-ordinary locus.\n",
"title": "On the $μ$-ordinary locus of a Shimura variety"
} | null | null | null | null | true | null | 368 | null | Default | null | null |
null | {
"abstract": " Charts are an excellent way to convey patterns and trends in data, but they\ndo not facilitate further modeling of the data or close inspection of\nindividual data points. We present a fully automated system for extracting the\nnumerical values of data points from images of scatter plots. We use deep\nlearning techniques to identify the key components of the chart, and optical\ncharacter recognition together with robust regression to map from pixels to the\ncoordinate system of the chart. We focus on scatter plots with linear scales,\nwhich already have several interesting challenges. Previous work has done fully\nautomatic extraction for other types of charts, but to our knowledge this is\nthe first approach that is fully automatic for scatter plots. Our method\nperforms well, achieving successful data extraction on 89% of the plots in our\ntest set.\n",
"title": "Scatteract: Automated extraction of data from scatter plots"
} | null | null | null | null | true | null | 369 | null | Default | null | null |
null | {
"abstract": " We consider a modification to the standard cosmological history consisting of\nintroducing a new species $\\phi$ whose energy density red-shifts with the scale\nfactor $a$ like $\\rho_\\phi \\propto a^{-(4+n)}$. For $n>0$, such a red-shift is\nfaster than radiation, hence the new species dominates the energy budget of the\nuniverse at early times while it is completely negligible at late times. If\nequality with the radiation energy density is achieved at low enough\ntemperatures, dark matter can be produced as a thermal relic during the new\ncosmological phase. Dark matter freeze-out then occurs at higher temperatures\ncompared to the standard case, implying that reproducing the observed abundance\nrequires significantly larger annihilation rates. Here, we point out a\ncompletely new phenomenon, which we refer to as $\\textit{relentless}$ dark\nmatter: for large enough $n$, unlike the standard case where annihilation ends\nshortly after the departure from thermal equilibrium, dark matter particles\nkeep annihilating long after leaving chemical equilibrium, with a significant\ndepletion of the final relic abundance. Relentless annihilation occurs for $n\n\\geq 2$ and $n \\geq 4$ for s-wave and p-wave annihilation, respectively, and it\nthus occurs in well motivated scenarios such as a quintessence with a kination\nphase. We discuss a few microscopic realizations for the new cosmological\ncomponent and highlight the phenomenological consequences of our calculations\nfor dark matter searches.\n",
"title": "When the Universe Expands Too Fast: Relentless Dark Matter"
} | null | null | null | null | true | null | 370 | null | Default | null | null |
null | {
"abstract": " We describe the configuration space $\\mathbf{S}$ of polygons with prescribed\nedge slopes, and study the perimeter $\\mathcal{P}$ as a Morse function on\n$\\mathbf{S}$. We characterize critical points of $\\mathcal{P}$ (these are\n\\textit{tangential} polygons) and compute their Morse indices. This setup is\nmotivated by a number of results about critical points and Morse indices of the\noriented area function defined on the configuration space of polygons with\nprescribed edge lengths (flexible polygons). As a by-product, we present an\nindependent computation of the Morse index of the area function (obtained\nearlier by G. Panina and A. Zhukova).\n",
"title": "Polygons with prescribed edge slopes: configuration space and extremal points of perimeter"
} | null | null | null | null | true | null | 371 | null | Default | null | null |
null | {
"abstract": " This paper discusses a Metropolis-Hastings algorithm developed by\n\\citeA{MarsmanIsing}. The algorithm is derived from first principles, and it is\nproven that the algorithm becomes more efficient with more data and meets the\ngrowing demands of large scale educational measurement.\n",
"title": "An Asymptotically Efficient Metropolis-Hastings Sampler for Bayesian Inference in Large-Scale Educational Measuremen"
} | null | null | null | null | true | null | 372 | null | Default | null | null |
null | {
"abstract": " Luke P. Lee is a Tan Chin Tuan Centennial Professor at the National\nUniversity of Singapore. In this contribution he describes the power of\noptofluidics as a research tool and reviews new insights within the areas of\nsingle cell analysis, microphysiological analysis, and integrated systems.\n",
"title": "When Streams of Optofluidics Meet the Sea of Life"
} | null | null | null | null | true | null | 373 | null | Default | null | null |
null | {
"abstract": " Topology has appeared in different physical contexts. The most prominent\napplication is topologically protected edge transport in condensed matter\nphysics. The Chern number, the topological invariant of gapped Bloch\nHamiltonians, is an important quantity in this field. Another example of\ntopology, in polarization physics, are polarization singularities, called L\nlines and C points. By establishing a connection between these two theories, we\ndevelop a novel technique to visualize and potentially measure the Chern\nnumber: it can be expressed either as the winding of the polarization azimuth\nalong L lines in reciprocal space, or in terms of the handedness and the index\nof the C points. For mechanical systems, this is directly connected to the\nvisible motion patterns.\n",
"title": "L lines, C points and Chern numbers: understanding band structure topology using polarization fields"
} | null | null | null | null | true | null | 374 | null | Default | null | null |
null | {
"abstract": " We study the scale and tidy subgroups of an endomorphism of a totally\ndisconnected locally compact group using a geometric framework. This leads to\nnew interpretations of tidy subgroups and the scale function. Foremost, we\nobtain a geometric tidying procedure which applies to endomorphisms as well as\na geometric proof of the fact that tidiness is equivalent to being minimizing\nfor a given endomorphism. Our framework also yields an endomorphism version of\nthe Baumgartner-Willis tree representation theorem. We conclude with a\nconstruction of new endomorphisms of totally disconnected locally compact\ngroups from old via HNN-extensions.\n",
"title": "Willis Theory via Graphs"
} | null | null | null | null | true | null | 375 | null | Default | null | null |
null | {
"abstract": " The coupled exciton-vibrational dynamics of a three-site model of the FMO\ncomplex is investigated using the Multi-layer Multi-configuration\nTime-dependent Hartree (ML-MCTDH) approach. Emphasis is put on the effect of\nthe spectral density on the exciton state populations as well as on the\nvibrational and vibronic non-equilibrium excitations. Models which use either a\nsingle or site-specific spectral densities are contrasted to a spectral density\nadapted from experiment. For the transfer efficiency, the total integrated\nHuang-Rhys factor is found to be more important than details of the spectral\ndistributions. However, the latter are relevant for the obtained\nnon-equilibrium vibrational and vibronic distributions and thus influence the\nactual pattern of population relaxation.\n",
"title": "The Effect of Site-Specific Spectral Densities on the High-Dimensional Exciton-Vibrational Dynamics in the FMO Complex"
} | null | null | null | null | true | null | 376 | null | Default | null | null |
null | {
"abstract": " In the first part of this work we show the convergence with respect to an\nasymptotic parameter {\\epsilon} of a delayed heat equation. It represents a\nmathematical extension of works considered previously by the authors [Milisic\net al. 2011, Milisic et al. 2016]. Namely, this is the first result involving\ndelay operators approximating protein linkages coupled with a spatial elliptic\nsecond order operator. For the sake of simplicity we choose the Laplace\noperator, although more general results could be derived. The main arguments\nare (i) new energy estimates and (ii) a stability result extended from the\nprevious work to this more involved context. They allow to prove convergence of\nthe delay operator to a friction term together with the Laplace operator in the\nsame asymptotic regime considered without the space dependence in [Milisic et\nal, 2011]. In a second part we extend fixed-point results for the fully\nnon-linear model introduced in [Milisic et al, 2016] and prove global existence\nin time. This shows that the blow-up scenario observed previously does not\noccur. Since the latter result was interpreted as a rupture of adhesion forces,\nwe discuss the possibility of bond breaking both from the analytic and\nnumerical point of view.\n",
"title": "Space dependent adhesion forces mediated by transient elastic linkages : new convergence and global existence results"
} | null | null | null | null | true | null | 377 | null | Default | null | null |
null | {
"abstract": " We study the Nonparametric Maximum Likelihood Estimator (NPMLE) for\nestimating Gaussian location mixture densities in $d$-dimensions from\nindependent observations. Unlike usual likelihood-based methods for fitting\nmixtures, NPMLEs are based on convex optimization. We prove finite sample\nresults on the Hellinger accuracy of every NPMLE. Our results imply, in\nparticular, that every NPMLE achieves near parametric risk (up to logarithmic\nmultiplicative factors) when the true density is a discrete Gaussian mixture\nwithout any prior information on the number of mixture components. NPMLEs can\nnaturally be used to yield empirical Bayes estimates of the Oracle Bayes\nestimator in the Gaussian denoising problem. We prove bounds for the accuracy\nof the empirical Bayes estimate as an approximation to the Oracle Bayes\nestimator. Here our results imply that the empirical Bayes estimator performs\nat nearly the optimal level (up to logarithmic multiplicative factors) for\ndenoising in clustering situations without any prior knowledge of the number of\nclusters.\n",
"title": "On the nonparametric maximum likelihood estimator for Gaussian location mixture densities with application to Gaussian denoising"
} | null | null | null | null | true | null | 378 | null | Default | null | null |
null | {
"abstract": " Parametric resonance is among the most efficient phenomena generating\ngravitational waves (GWs) in the early Universe. The dynamics of parametric\nresonance, and hence of the GWs, depend exclusively on the resonance parameter\n$q$. The latter is determined by the properties of each scenario: the initial\namplitude and potential curvature of the oscillating field, and its coupling to\nother species. Previous works have only studied the GW production for fixed\nvalue(s) of $q$. We present an analytical derivation of the GW amplitude\ndependence on $q$, valid for any scenario, which we confront against numerical\nresults. By running lattice simulations in an expanding grid, we study for a\nwide range of $q$ values, the production of GWs in post-inflationary preheating\nscenarios driven by parametric resonance. We present simple fits for the final\namplitude and position of the local maxima in the GW spectrum. Our\nparametrization allows to predict the location and amplitude of the GW\nbackground today, for an arbitrary $q$. The GW signal can be rather large, as\n$h^2\\Omega_{\\rm GW}(f_p) \\lesssim 10^{-11}$, but it is always peaked at high\nfrequencies $f_p \\gtrsim 10^{7}$ Hz. We also discuss the case of\nspectator-field scenarios, where the oscillatory field can be e.g.~a curvaton,\nor the Standard Model Higgs.\n",
"title": "Gravitational wave production from preheating: parameter dependence"
} | null | null | [
"Physics"
]
| null | true | null | 379 | null | Validated | null | null |
null | {
"abstract": " The 1+1 REMPI spectrum of SiO in the 210-220 nm range is recorded. Observed\nbands are assigned to the $A-X$ vibrational bands $(v``=0-3, v`=5-10)$ and a\ntentative assignment is given to the 2-photon transition from $X$ to the\nn=12-13 $[X^{2}{\\Sigma}^{+},v^{+}=1]$ Rydberg states at 216-217 nm. We estimate\nthe IP of SiO to be 11.59(1) eV. The SiO$^{+}$ cation has previously been\nidentified as a molecular candidate amenable to laser control. Our work allows\nus to identify an efficient method for loading cold SiO$^{+}$ from an ablated\nsample of SiO into an ion trap via the $(5,0)$ $A-X$ band at 213.977 nm.\n",
"title": "IP determination and 1+1 REMPI spectrum of SiO at 210-220 nm with implications for SiO$^{+}$ ion trap loading"
} | null | null | null | null | true | null | 380 | null | Default | null | null |
null | {
"abstract": " Robots and automated systems are increasingly being introduced to unknown and\ndynamic environments where they are required to handle disturbances, unmodeled\ndynamics, and parametric uncertainties. Robust and adaptive control strategies\nare required to achieve high performance in these dynamic environments. In this\npaper, we propose a novel adaptive model predictive controller that combines\nmodel predictive control (MPC) with an underlying $\\mathcal{L}_1$ adaptive\ncontroller to improve trajectory tracking of a system subject to unknown and\nchanging disturbances. The $\\mathcal{L}_1$ adaptive controller forces the\nsystem to behave in a predefined way, as specified by a reference model. A\nhigher-level model predictive controller then uses this reference model to\ncalculate the optimal reference input based on a cost function, while taking\ninto account input and state constraints. We focus on the experimental\nvalidation of the proposed approach and demonstrate its effectiveness in\nexperiments on a quadrotor. We show that the proposed approach has a lower\ntrajectory tracking error compared to non-predictive, adaptive approaches and a\npredictive, non-adaptive approach, even when external wind disturbances are\napplied.\n",
"title": "Adaptive Model Predictive Control for High-Accuracy Trajectory Tracking in Changing Conditions"
} | null | null | null | null | true | null | 381 | null | Default | null | null |
null | {
"abstract": " With the use of ontologies in several domains such as semantic web,\ninformation retrieval, artificial intelligence, the concept of similarity\nmeasuring has become a very important domain of research. Therefore, in the\ncurrent paper, we propose our method of similarity measuring which uses the\nDijkstra algorithm to define and compute the shortest path. Then, we use this\none to compute the semantic distance between two concepts defined in the same\nhierarchy of ontology. Afterward, we base on this result to compute the\nsemantic similarity. Finally, we present an experimental comparison between our\nmethod and other methods of similarity measuring.\n",
"title": "An enhanced method to compute the similarity between concepts of ontology"
} | null | null | null | null | true | null | 382 | null | Default | null | null |
null | {
"abstract": " In this short note, we present a novel method for computing exact lower and\nupper bounds of eigenvalues of a symmetric tridiagonal interval matrix.\nCompared to the known methods, our approach is fast, simple to present and to\nimplement, and avoids any assumptions. Our construction explicitly yields those\nmatrices for which particular lower and upper bounds are attained.\n",
"title": "Eigenvalues of symmetric tridiagonal interval matrices revisited"
} | null | null | [
"Computer Science"
]
| null | true | null | 383 | null | Validated | null | null |
null | {
"abstract": " Visualizing a complex network is computationally intensive process and\ndepends heavily on the number of components in the network. One way to solve\nthis problem is not to render the network in real time. PRE-render Content\nUsing Tiles (PRECUT) is a process to convert any complex network into a\npre-rendered network. Tiles are generated from pre-rendered images at different\nzoom levels, and navigating the network simply becomes delivering relevant\ntiles. PRECUT is exemplified by performing large-scale compound-target\nrelationship analyses. Matched molecular pair (MMP) networks were created using\ncompounds and the target class description found in the ChEMBL database. To\nvisualize MMP networks, the MMP network viewer has been implemented in COMBINE\nand as a web application, hosted at this http URL.\n",
"title": "PRE-render Content Using Tiles (PRECUT). 1. Large-Scale Compound-Target Relationship Analyses"
} | null | null | null | null | true | null | 384 | null | Default | null | null |
null | {
"abstract": " We prove that every triangle-free graph with maximum degree $\\Delta$ has list\nchromatic number at most $(1+o(1))\\frac{\\Delta}{\\ln \\Delta}$. This matches the\nbest-known bound for graphs of girth at least 5. We also provide a new proof\nthat for any $r\\geq 4$ every $K_r$-free graph has list-chromatic number at most\n$200r\\frac{\\Delta\\ln\\ln\\Delta}{\\ln\\Delta}$.\n",
"title": "The list chromatic number of graphs with small clique number"
} | null | null | [
"Mathematics"
]
| null | true | null | 385 | null | Validated | null | null |
null | {
"abstract": " We study the two-dimensional topology of the galactic distribution when\nprojected onto two-dimensional spherical shells. Using the latest Horizon Run 4\nsimulation data, we construct the genus of the two-dimensional field and\nconsider how this statistic is affected by late-time nonlinear effects --\nprincipally gravitational collapse and redshift space distortion (RSD). We also\nconsider systematic and numerical artifacts such as shot noise, galaxy bias,\nand finite pixel effects. We model the systematics using a Hermite polynomial\nexpansion and perform a comprehensive analysis of known effects on the\ntwo-dimensional genus, with a view toward using the statistic for cosmological\nparameter estimation. We find that the finite pixel effect is dominated by an\namplitude drop and can be made less than $1\\%$ by adopting pixels smaller than\n$1/3$ of the angular smoothing length. Nonlinear gravitational evolution\nintroduces time-dependent coefficients of the zeroth, first, and second Hermite\npolynomials, but the genus amplitude changes by less than $1\\%$ between $z=1$\nand $z=0$ for smoothing scales $R_{\\rm G} > 9 {\\rm Mpc/h}$. Non-zero terms are\nmeasured up to third order in the Hermite polynomial expansion when studying\nRSD. Differences in shapes of the genus curves in real and redshift space are\nsmall when we adopt thick redshift shells, but the amplitude change remains a\nsignificant $\\sim {\\cal O}(10\\%)$ effect. The combined effects of galaxy\nbiasing and shot noise produce systematic effects up to the second Hermite\npolynomial. It is shown that, when sampling, the use of galaxy mass cuts\nsignificantly reduces the effect of shot noise relative to random sampling.\n",
"title": "Topology of Large-Scale Structures of Galaxies in Two Dimensions - Systematic Effects"
} | null | null | null | null | true | null | 386 | null | Default | null | null |
null | {
"abstract": " The new index of the author's popularity estimation is represented in the\npaper. The index is calculated on the basis of Wikipedia encyclopedia analysis\n(Wikipedia Index - WI). Unlike the conventional existed citation indices, the\nsuggested mark allows to evaluate not only the popularity of the author, as it\ncan be done by means of calculating the general citation number or by the\nHirsch index, which is often used to measure the author's research rate. The\nindex gives an opportunity to estimate the author's popularity, his/her\ninfluence within the sought-after area \"knowledge area\" in the Internet - in\nthe Wikipedia. The suggested index is supposed to be calculated in frames of\nthe subject domain, and it, on the one hand, avoids the mistaken computation of\nthe homonyms, and on the other hand - provides the entirety of the subject\narea. There are proposed algorithms and the technique of the Wikipedia Index\ncalculation through the network encyclopedia sounding, the exemplified\ncalculations of the index for the prominent researchers, and also the methods\nof the information networks formation - models of the subject domains by the\nautomatic monitoring and networks information reference resources analysis. The\nconsidered in the paper notion network corresponds the terms-heads of the\nWikipedia articles.\n",
"title": "Wiki-index of authors popularity"
} | null | null | [
"Computer Science"
]
| null | true | null | 387 | null | Validated | null | null |
null | {
"abstract": " We compute the genus 0 Belyi map for the sporadic Janko group J1 of degree\n266 and describe the applied method. This yields explicit polynomials having J1\nas a Galois group over K(t), [K:Q] = 7.\n",
"title": "Belyi map for the sporadic group J1"
} | null | null | null | null | true | null | 388 | null | Default | null | null |
null | {
"abstract": " Our goal is to find classes of convolution semigroups on Lie groups $G$ that\ngive rise to interesting processes in symmetric spaces $G/K$. The\n$K$-bi-invariant convolution semigroups are a well-studied example. An\nappealing direction for the next step is to generalise to right $K$-invariant\nconvolution semigroups, but recent work of Liao has shown that these are in\none-to-one correspondence with $K$-bi-invariant convolution semigroups. We\ninvestigate a weaker notion of right $K$-invariance, but show that this is, in\nfact, the same as the usual notion. Another possible approach is to use\ngeneralised notions of negative definite functions, but this also leads to\nnothing new. We finally find an interesting class of convolution semigroups\nthat are obtained by making use of the Cartan decomposition of a semisimple Lie\ngroup, and the solution of certain stochastic differential equations. Examples\nsuggest that these are well-suited for generating random motion along geodesics\nin symmetric spaces.\n",
"title": "Convolution Semigroups of Probability Measures on Gelfand Pairs, Revisited"
} | null | null | null | null | true | null | 389 | null | Default | null | null |
null | {
"abstract": " CMO Council reports that 71\\% of internet users in the U.S. were influenced\nby coupons and discounts when making their purchase decisions. It has also been\nshown that offering coupons to a small fraction of users (called seed users)\nmay affect the purchase decisions of many other users in a social network. This\nmotivates us to study the optimal coupon allocation problem, and our objective\nis to allocate coupons to a set of users so as to maximize the expected\ncascade. Different from existing studies on influence maximizaton (IM), our\nframework allows a general utility function and a more complex set of\nconstraints. In particular, we formulate our problem as an approximate\nsubmodular maximization problem subject to matroid and knapsack constraints.\nExisting techniques relying on the submodularity of the utility function, such\nas greedy algorithm, can not work directly on a non-submodular function. We use\n$\\epsilon$ to measure the difference between our function and its closest\nsubmodular function and propose a novel approximate algorithm with\napproximation ratio $\\beta(\\epsilon)$ with $\\lim_{\\epsilon\\rightarrow\n0}\\beta(\\epsilon)=1-1/e$. This is the best approximation guarantee for\napproximate submodular maximization subject to a partition matroid and knapsack\nconstraints, our results apply to a broad range of optimization problems that\ncan be formulated as an approximate submodular maximization problem.\n",
"title": "Toward Optimal Coupon Allocation in Social Networks: An Approximate Submodular Optimization Approach"
} | null | null | null | null | true | null | 390 | null | Default | null | null |
null | {
"abstract": " We prove the Lefschetz duality for intersection (co)homology in the framework\nof $\\partial$-pesudomanifolds. We work with general perversities and without\nrestriction on the coefficient ring.\n",
"title": "Lefschetz duality for intersection (co)homology"
} | null | null | null | null | true | null | 391 | null | Default | null | null |
null | {
"abstract": " All possible removals of $n=5$ nodes from networks of size $N=100$ are\nperformed in order to find the optimal set of nodes which fragments the\noriginal network into the smallest largest connected component. The resulting\nattacks are ordered according to the size of the largest connected component\nand compared with the state of the art methods of network attacks. We chose\nattacks of size $5$ on relative small networks of size $100$ because the number\nof all possible attacks, ${100}\\choose{5}$ $\\approx 10^8$, is at the verge of\nthe possible to compute with the available standard computers. Besides, we\napplied the procedure in a series of networks with controlled and varied\nmodularity, comparing the resulting statistics with the effect of removing the\nsame amount of vertices, according to the known most efficient disruption\nstrategies, i.e., High Betweenness Adaptive attack (HBA), Collective Index\nattack (CI), and Modular Based Attack (MBA). Results show that modularity has\nan inverse relation with robustness, with $Q_c \\approx 0.7$ being the critical\nvalue. For modularities lower than $Q_c$, all heuristic method gives mostly the\nsame results than with random attacks, while for bigger $Q$, networks are less\nrobust and highly vulnerable to malicious attacks.\n",
"title": "Empirical determination of the optimum attack for fragmentation of modular networks"
} | null | null | null | null | true | null | 392 | null | Default | null | null |
null | {
"abstract": " In this paper, we study stochastic non-convex optimization with non-convex\nrandom functions. Recent studies on non-convex optimization revolve around\nestablishing second-order convergence, i.e., converging to a nearly\nsecond-order optimal stationary points. However, existing results on stochastic\nnon-convex optimization are limited, especially with a high probability\nsecond-order convergence. We propose a novel updating step (named NCG-S) by\nleveraging a stochastic gradient and a noisy negative curvature of a stochastic\nHessian, where the stochastic gradient and Hessian are based on a proper\nmini-batch of random functions. Building on this step, we develop two\nalgorithms and establish their high probability second-order convergence. To\nthe best of our knowledge, the proposed stochastic algorithms are the first\nwith a second-order convergence in {\\it high probability} and a time complexity\nthat is {\\it almost linear} in the problem's dimensionality.\n",
"title": "Stochastic Non-convex Optimization with Strong High Probability Second-order Convergence"
} | null | null | null | null | true | null | 393 | null | Default | null | null |
null | {
"abstract": " Lower bounds on the smallest eigenvalue of a symmetric positive definite\nmatrices $A\\in\\mathbb{R}^{m\\times m}$ play an important role in condition\nnumber estimation and in iterative methods for singular value computation. In\nparticular, the bounds based on ${\\rm Tr}(A^{-1})$ and ${\\rm Tr}(A^{-2})$\nattract attention recently because they can be computed in $O(m)$ work when $A$\nis tridiagonal. In this paper, we focus on these bounds and investigate their\nproperties in detail. First, we consider the problem of finding the optimal\nbound that can be computed solely from ${\\rm Tr}(A^{-1})$ and ${\\rm\nTr}(A^{-2})$ and show that so called Laguerre's lower bound is the optimal one\nin terms of sharpness. Next, we study the gap between the Laguerre bound and\nthe smallest eigenvalue. We characterize the situation in which the gap becomes\nlargest in terms of the eigenvalue distribution of $A$ and show that the gap\nbecomes smallest when ${\\rm Tr}(A^{-2})/\\{{\\rm Tr}(A^{-1})\\}^2$ approaches 1 or\n$\\frac{1}{m}$. These results will be useful, for example, in designing\nefficient shift strategies for singular value computation algorithms.\n",
"title": "On the optimality and sharpness of Laguerre's lower bound on the smallest eigenvalue of a symmetric positive definite matrix"
} | null | null | null | null | true | null | 394 | null | Default | null | null |
null | {
"abstract": " This paper describes the development of a magnetic attitude control subsystem\nfor a 2U cubesat. Due to the presence of gravity gradient torques, the\nsatellite dynamics are open-loop unstable near the desired pointing\nconfiguration. Nevertheless the linearized time-varying system is completely\ncontrollable, under easily verifiable conditions, and the system's disturbance\nrejection capabilities can be enhanced by adding air drag panels exemplifying a\nbeneficial interplay between hardware design and control. In the paper,\nconditions for the complete controllability for the case of a magnetically\ncontrolled satellite with passive air drag panels are developed, and simulation\ncase studies with the LQR and MPC control designs applied in combination with a\nnonlinear time-varying input transformation are presented to demonstrate the\nability of the closed-loop system to satisfy mission objectives despite\ndisturbance torques.\n",
"title": "Attitude Control of a 2U Cubesat by Magnetic and Air Drag Torques"
} | null | null | null | null | true | null | 395 | null | Default | null | null |
null | {
"abstract": " One advantage of decision tree based methods like random forests is their\nability to natively handle categorical predictors without having to first\ntransform them (e.g., by using feature engineering techniques). However, in\nthis paper, we show how this capability can lead to an inherent \"absent levels\"\nproblem for decision tree based methods that has never been thoroughly\ndiscussed, and whose consequences have never been carefully explored. This\nproblem occurs whenever there is an indeterminacy over how to handle an\nobservation that has reached a categorical split which was determined when the\nobservation in question's level was absent during training. Although these\nincidents may appear to be innocuous, by using Leo Breiman and Adele Cutler's\nrandom forests FORTRAN code and the randomForest R package (Liaw and Wiener,\n2002) as motivating case studies, we examine how overlooking the absent levels\nproblem can systematically bias a model. Furthermore, by using three real data\nexamples, we illustrate how absent levels can dramatically alter a model's\nperformance in practice, and we empirically demonstrate how some simple\nheuristics can be used to help mitigate the effects of the absent levels\nproblem until a more robust theoretical solution is found.\n",
"title": "Random Forests, Decision Trees, and Categorical Predictors: The \"Absent Levels\" Problem"
} | null | null | [
"Computer Science",
"Statistics"
]
| null | true | null | 396 | null | Validated | null | null |
null | {
"abstract": " For decades, conventional computers based on the von Neumann architecture\nhave performed computation by repeatedly transferring data between their\nprocessing and their memory units, which are physically separated. As\ncomputation becomes increasingly data-centric and as the scalability limits in\nterms of performance and power are being reached, alternative computing\nparadigms are searched for in which computation and storage are collocated. A\nfascinating new approach is that of computational memory where the physics of\nnanoscale memory devices are used to perform certain computational tasks within\nthe memory unit in a non-von Neumann manner. Here we present a large-scale\nexperimental demonstration using one million phase-change memory devices\norganized to perform a high-level computational primitive by exploiting the\ncrystallization dynamics. Also presented is an application of such a\ncomputational memory to process real-world data-sets. The results show that\nthis co-existence of computation and storage at the nanometer scale could be\nthe enabler for new, ultra-dense, low power, and massively parallel computing\nsystems.\n",
"title": "Temporal correlation detection using computational phase-change memory"
} | null | null | null | null | true | null | 397 | null | Default | null | null |
null | {
"abstract": " We generalise some well-known graph parameters to operator systems by\nconsidering their underlying quantum channels. In particular, we introduce the\nquantum complexity as the dimension of the smallest co-domain Hilbert space a\nquantum channel requires to realise a given operator system as its\nnon-commutative confusability graph. We describe quantum complexity as a\ngeneralised minimum semidefinite rank and, in the case of a graph operator\nsystem, as a quantum intersection number. The quantum complexity and a closely\nrelated quantum version of orthogonal rank turn out to be upper bounds for the\nShannon zero-error capacity of a quantum channel, and we construct examples for\nwhich these bounds beat the best previously known general upper bound for the\ncapacity of quantum channels, given by the quantum Lovász theta number.\n",
"title": "Complexity and capacity bounds for quantum channels"
} | null | null | null | null | true | null | 398 | null | Default | null | null |
null | {
"abstract": " During the ionization of atoms irradiated by linearly polarized intense laser\nfields, we find for the first time that the transverse momentum distribution of\nphotoelectrons can be well fitted by a squared zeroth-order Bessel function\nbecause of the quantum interference effect of Glory rescattering. The\ncharacteristic of the Bessel function is determined by the common angular\nmomentum of a bunch of semiclassical paths termed as Glory trajectories, which\nare launched with different nonzero initial transverse momenta distributed on a\nspecific circle in the momentum plane and finally deflected to the same\nasymptotic momentum, which is along the polarization direction, through\npost-tunneling rescattering. Glory rescattering theory (GRT) based on the\nsemiclassical path-integral formalism is developed to address this effect\nquantitatively. Our theory can resolve the long-standing discrepancies between\nexisting theories and experiments on the fringe location, predict the sudden\ntransition of the fringe structure in holographic patterns, and shed light on\nthe quantum interference aspects of low-energy structures in strong-field\natomic ionization.\n",
"title": "Quantum Interference of Glory Rescattering in Strong-Field Atomic Ionization"
} | null | null | null | null | true | null | 399 | null | Default | null | null |
null | {
"abstract": " We are interested in extending operators defined on positive measures, called\nhere transfunctions, to signed measures and vector measures. Our methods use a\nsomewhat nonstandard approach to measures and vector measures. The necessary\nbackground, including proofs of some auxiliary results, is included.\n",
"title": "On vector measures and extensions of transfunctions"
} | null | null | null | null | true | null | 400 | null | Default | null | null |