text (null) | inputs (dict) | prediction (null) | prediction_agent (null) | annotation (list) | annotation_agent (null) | multi_label (bool, 1 class) | explanation (null) | id (string, lengths 1-5) | metadata (null) | status (string, 2 classes) | event_timestamp (null) | metrics (null) |
---|---|---|---|---|---|---|---|---|---|---|---|---|
null | {
"abstract": " We introduce a spectrum of monotone coarse invariants for metric measure\nspaces called Poincaré profiles. The two extremes of this spectrum\ndetermine the growth of the space, and the separation profile as defined by\nBenjamini--Schramm--Timár. In this paper we focus on properties of the\nPoincaré profiles of groups with polynomial growth, and of hyperbolic\nspaces, where we deduce a striking connection between these profiles and\nconformal dimension. One application of our results is that there is a\ncollection of hyperbolic Coxeter groups, indexed by a countable dense subset of\n$(1,\\infty)$, such that $G_s$ does not coarsely embed into $G_t$ whenever\n$s<t$.\n",
"title": "Poincaré profiles of groups and spaces"
} | null | null | null | null | true | null | 19401 | null | Default | null | null |
null | {
"abstract": " Coexistence of a new-type antiferromagnetic (AFM) state, the so-called\nhedgehog spin-vortex crystal (SVC), and superconductivity (SC) is evidenced by\n$^{75}$As nuclear magnetic resonance study on single-crystalline\nCaK(Fe$_{0.951}$Ni$_{0.049}$)$_4$As$_4$. The hedgehog SVC order is clearly\ndemonstrated by the direct observation of the internal magnetic induction along\nthe $c$ axis at the As1 site (close to K) and a zero net internal magnetic\ninduction at the As2 site (close to Ca) below an AFM ordering temperature\n$T_{\\rm N}$ $\\sim$ 52 K. The nuclear spin-lattice relaxation rate 1/$T_1$ shows\na distinct decrease below $T_{\\rm c}$ $\\sim$ 10 K, providing also unambiguous\nevidence for the microscopic coexistence. Furthermore, based on the analysis of\nthe 1/$T_1$ data, the hedgehog SVC-type spin correlations are found to be\nenhanced below $T$ $\\sim$ 150 K in the paramagnetic state. These results\nindicate the hedgehog SVC-type spin correlations play an important role for the\nappearance of SC in the new magnetic superconductor.\n",
"title": "NMR Study of the New Magnetic Superconductor CaK(Fe$0.951Ni0.049)4As4: Microscopic Coexistence of Hedgehog Spin-vortex Crystal and Superconductivity"
} | null | null | null | null | true | null | 19402 | null | Default | null | null |
null | {
"abstract": " Grigni and Hung~\\cite{GH12} conjectured that H-minor-free graphs have\n$(1+\\epsilon)$-spanners that are light, that is, of weight $g(|H|,\\epsilon)$\ntimes the weight of the minimum spanning tree for some function $g$. This\nconjecture implies the {\\em efficient} polynomial-time approximation scheme\n(PTAS) of the traveling salesperson problem in $H$-minor free graphs; that is,\na PTAS whose running time is of the form $2^{f(\\epsilon)}n^{O(1)}$ for some\nfunction $f$. The state of the art PTAS for TSP in H-minor-free-graphs has\nrunning time $n^{1/\\epsilon^c}$. We take a further step toward proving this\nconjecture by showing that if the bounded treewidth graphs have light greedy\nspanners, then the conjecture is true. We also prove that the greedy spanner of\na bounded pathwidth graph is light and discuss the possibility of extending our\nproof to bounded treewidth graphs.\n",
"title": "Light spanners for bounded treewidth graphs imply light spanners for $H$-minor-free graphs"
} | null | null | null | null | true | null | 19403 | null | Default | null | null |
null | {
"abstract": " Mexico City tracks ground-level ozone levels to assess compliance with\nnational ambient air quality standards and to prevent environmental health\nemergencies. Ozone levels show distinct daily patterns, within the city, and\nover the course of the year. To model these data, we use covariance models over\nspace, circular time, and linear time. We review existing models and develop\nnew classes of nonseparable covariance models of this type, models appropriate\nfor quasi-periodic data collected at many locations. With these covariance\nmodels, we use nearest-neighbor Gaussian processes to predict hourly ozone\nlevels at unobserved locations in April and May, the peak ozone season, to\ninfer compliance to Mexican air quality standards and to estimate respiratory\nhealth risk associated with ozone. Predicted compliance with air quality\nstandards and estimated respiratory health risk vary greatly over space and\ntime. In some regions, we predict exceedance of national standards for more\nthan a third of the hours in April and May. On many days, we predict that\nnearly all of Mexico City exceeds nationally legislated ozone thresholds at\nleast once. In peak regions, we estimate respiratory risk for ozone to be 55%\nhigher on average than the annual average risk and as much at 170% higher on\nsome days.\n",
"title": "Modeling Daily Seasonality of Mexico City Ozone using Nonseparable Covariance Models on Circles Cross Time"
} | null | null | null | null | true | null | 19404 | null | Default | null | null |
null | {
"abstract": " The elastic scattering cross sections for a slow electron by C2 and H2\nmolecules have been calculated within the framework of the non-overlapping\natomic potential model. For the amplitudes of the multiple electron scattering\nby a target the wave function of the molecular continuum is represented as a\ncombination of a plane wave and two spherical waves generated by the centers of\natomic spheres. This wave function obeys the Huygens-Fresnel principle\naccording to which the electron wave scattering by a system of two centers is\naccompanied by generation of two spherical waves; their interaction creates a\ndiffraction pattern far from the target. Each of the Huygens waves, in turn, is\na superposition of the partial spherical waves with different orbital angular\nmomenta l and their projections m. The amplitudes of these partial waves are\ndefined by the corresponding phases of electron elastic scattering by an\nisolated atomic potential. In numerical calculations the s- and p-phase shifts\nare taken into account. So the number of interfering electron waves is equal to\neight: two of which are the s-type waves and the remaining six waves are of the\np-type with different m values. The calculation of the scattering amplitudes in\nclosed form (rather than in the form of S-matrix expansion) is reduced to\nsolving a system of eight inhomogeneous algebraic equations. The differential\nand total cross sections of electron scattering by fixed-in-space molecules and\nrandomly oriented ones have been calculated as well. We conclude by discussing\nthe special features of the S-matrix method for the case of arbitrary\nnon-spherical potentials.\n",
"title": "Huygens-Fresnel Picture for Electron-Molecule Elastic Scattering"
} | null | null | null | null | true | null | 19405 | null | Default | null | null |
null | {
"abstract": " Predicting fine-grained interests of users with temporal behavior is\nimportant to personalization and information filtering applications. However,\nexisting interest prediction methods are incapable of capturing the subtle\ndegreed user interests towards particular items, and the internal time-varying\ndrifting attention of individuals is not studied yet. Moreover, the prediction\nprocess can also be affected by inter-personal influence, known as behavioral\nmutual infectivity. Inspired by point process in modeling temporal point\nprocess, in this paper we present a deep prediction method based on two\nrecurrent neural networks (RNNs) to jointly model each user's continuous\nbrowsing history and asynchronous event sequences in the context of inter-user\nbehavioral mutual infectivity. Our model is able to predict the fine-grained\ninterest from a user regarding a particular item and corresponding timestamps\nwhen an occurrence of event takes place. The proposed approach is more flexible\nto capture the dynamic characteristic of event sequences by using the temporal\npoint process to model event data and timely update its intensity function by\nRNNs. Furthermore, to improve the interpretability of the model, the attention\nmechanism is introduced to emphasize both intra-personal and inter-personal\nbehavior influence over time. Experiments on real datasets demonstrate that our\nmodel outperforms the state-of-the-art methods in fine-grained user interest\nprediction.\n",
"title": "When Point Process Meets RNNs: Predicting Fine-Grained User Interests with Mutual Behavioral Infectivity"
} | null | null | [
"Computer Science",
"Statistics"
]
| null | true | null | 19406 | null | Validated | null | null |
null | {
"abstract": " Quantitative extraction of high-dimensional mineable data from medical images\nis a process known as radiomics. Radiomics is foreseen as an essential\nprognostic tool for cancer risk assessment and the quantification of\nintratumoural heterogeneity. In this work, 1615 radiomic features (quantifying\ntumour image intensity, shape, texture) extracted from pre-treatment FDG-PET\nand CT images of 300 patients from four different cohorts were analyzed for the\nrisk assessment of locoregional recurrences (LR) and distant metastases (DM) in\nhead-and-neck cancer. Prediction models combining radiomic and clinical\nvariables were constructed via random forests and imbalance-adjustment\nstrategies using two of the four cohorts. Independent validation of the\nprediction and prognostic performance of the models was carried out on the\nother two cohorts (LR: AUC = 0.69 and CI = 0.67; DM: AUC = 0.86 and CI = 0.88).\nFurthermore, the results obtained via Kaplan-Meier analysis demonstrated the\npotential of radiomics for assessing the risk of specific tumour outcomes using\nmultiple stratification groups. This could have important clinical impact,\nnotably by allowing for a better personalization of chemo-radiation treatments\nfor head-and-neck cancer patients from different risk groups.\n",
"title": "Radiomics strategies for risk assessment of tumour failure in head-and-neck cancer"
} | null | null | null | null | true | null | 19407 | null | Default | null | null |
null | {
"abstract": " Recently, two new indicators (Equalized Mean-based Normalized Proportion\nCited, EMNPC; Mean-based Normalized Proportion Cited, MNPC) were proposed which\nare intended for sparse scientometrics data. The indicators compare the\nproportion of mentioned papers (e.g. on Facebook) of a unit (e.g., a researcher\nor institution) with the proportion of mentioned papers in the corresponding\nfields and publication years (the expected values). In this study, we propose a\nthird indicator (Mantel-Haenszel quotient, MHq) belonging to the same indicator\nfamily. The MHq is based on the MH analysis - an established method in\nstatistics for the comparison of proportions. We test (using citations and\nassessments by peers, i.e. F1000Prime recommendations) if the three indicators\ncan distinguish between different quality levels as defined on the basis of the\nassessments by peers. Thus, we test their convergent validity. We find that the\nindicator MHq is able to distinguish between the quality levels in most cases\nwhile MNPC and EMNPC are not. Since the MHq is shown in this study to be a\nvalid indicator, we apply it to six types of zero-inflated altmetrics data and\ntest whether different altmetrics sources are related to quality. The results\nfor the various altmetrics demonstrate that the relationship between altmetrics\n(Wikipedia, Facebook, blogs, and news data) and assessments by peers is not as\nstrong as the relationship between citations and assessments by peers.\nActually, the relationship between citations and peer assessments is about two\nto three times stronger than the association between altmetrics and assessments\nby peers.\n",
"title": "Normalization of zero-inflated data: An empirical analysis of a new indicator family and its use with altmetrics data"
} | null | null | [
"Computer Science"
]
| null | true | null | 19408 | null | Validated | null | null |
null | {
"abstract": " We show that Willwacher's cyclic formality theorem can be extended to\npreserve natural Gravity operations on cyclic multivector fields and cyclic\nmultidifferential operators. We express this in terms of a homotopy Gravity\nquasi-isomorphism with explicit local formulas. For this, we develop operadic\ntools related to mixed complexes and cyclic homology and prove that the operad\n$\\mathsf M_\\circlearrowleft$ of natural operations on cyclic operators is\nformal and hence quasi-isomorphic to the Gravity operad.\n",
"title": "Gravity Formality"
} | null | null | null | null | true | null | 19409 | null | Default | null | null |
null | {
"abstract": " Process Control Systems (PCSs) are the operating core of Critical\nInfrastructures (CIs). As such, anomaly detection has been an active research\nfield to ensure CI normal operation. Previous approaches have leveraged network\nlevel data for anomaly detection, or have disregarded the existence of process\ndisturbances, thus opening the possibility of mislabelling disturbances as\nattacks and vice versa. In this paper we present an anomaly detection and\ndiagnostic system based on Multivariate Statistical Process Control (MSPC),\nthat aims to distinguish between attacks and disturbances. For this end, we\nexpand traditional MSPC to monitor process level and controller level data. We\nevaluate our approach using the Tennessee-Eastman process. Results show that\nour approach can be used to distinguish disturbances from intrusions to a\ncertain extent and we conclude that the proposed approach can be extended with\nother sources of data for improving results.\n",
"title": "On the Feasibility of Distinguishing Between Process Disturbances and Intrusions in Process Control Systems Using Multivariate Statistical Process Control"
} | null | null | null | null | true | null | 19410 | null | Default | null | null |
null | {
"abstract": " For the quantum kinetic system modelling the Bose-Einstein Condensate that\naccounts for interactions between condensate and excited atoms, we use the\nChapman-Enskog expansion to derive its hydrodynamic approximations, include\nboth Euler and Navier-Stokes approximations. The hydrodynamic approximations\ndescribe not only the macroscopic behavior of the BEC but also its coupling\nwith the non-condensates, which agrees with Landau's two fluid theory.\n",
"title": "Quantum hydrodynamic approximations to the finite temperature trapped Bose gases"
} | null | null | null | null | true | null | 19411 | null | Default | null | null |
null | {
"abstract": " Over the past few years, the futures market has been successfully developing\nin the North-West region. Futures markets are one of the most effective and\nliquid-visible trading mechanisms. A large number of buyers are forced to\ncompete with each other and raise their prices. A large number of sellers make\nthem reduce prices. Thus, the gap between the prices of offers of buyers and\nsellers is reduced due to high competition, and this is a good criterion for\nthe liquidity of the market. This high degree of liquidity contributed to the\nfact that futures trading took such an important role in commerce and finance.\nA multi-step, non-cooperative n persons game is formalized and studied\n",
"title": "Game-theoretic dynamic investment model with incomplete information: futures contracts"
} | null | null | null | null | true | null | 19412 | null | Default | null | null |
null | {
"abstract": " Deep neural networks are playing an important role in state-of-the-art visual\nrecognition. To represent high-level visual concepts, modern networks are\nequipped with large convolutional layers, which use a large number of filters\nand contribute significantly to model complexity. For example, more than half\nof the weights of AlexNet are stored in the first fully-connected layer (4,096\nfilters).\nWe formulate the function of a convolutional layer as learning a large visual\nvocabulary, and propose an alternative way, namely Deep Collaborative Learning\n(DCL), to reduce the computational complexity. We replace a convolutional layer\nwith a two-stage DCL module, in which we first construct a couple of smaller\nconvolutional layers individually, and then fuse them at each spatial position\nto consider feature co-occurrence. In mathematics, DCL can be explained as an\nefficient way of learning compositional visual concepts, in which the\nvocabulary size increases exponentially while the model complexity only\nincreases linearly. We evaluate DCL on a wide range of visual recognition\ntasks, including a series of multi-digit number classification datasets, and\nsome generic image classification datasets such as SVHN, CIFAR and ILSVRC2012.\nWe apply DCL to several state-of-the-art network structures, improving the\nrecognition accuracy meanwhile reducing the number of parameters (16.82% fewer\nin AlexNet).\n",
"title": "Deep Collaborative Learning for Visual Recognition"
} | null | null | null | null | true | null | 19413 | null | Default | null | null |
null | {
"abstract": " The public transports provide an ideal means to enable contagious diseases\ntransmission. This paper introduces a novel idea to detect co-location of\npeople in such environment using just the ubiquitous geomagnetic field sensor\non the smart phone. Essentially, given that all passengers must share the same\njourney between at least two consecutive stations, we have a long window to\nmatch the user trajectory. Our idea was assessed over a painstakingly survey of\nover 150 kilometres of travelling distance, covering different parts of London,\nusing the overground trains, the underground tubes and the buses.\n",
"title": "Co-location Epidemic Tracking on London Public Transports Using Low Power Mobile Magnetometer"
} | null | null | null | null | true | null | 19414 | null | Default | null | null |
null | {
"abstract": " Let $\\mathcal{P}_r$ denote an almost-prime with at most $r$ prime factors,\ncounted according to multiplicity. In this paper, it is proved that, for\n$12\\leqslant b\\leqslant 35$ and for every sufficiently large odd integer $N$,\nthe equation \\begin{equation*}\nN=x^2+p_1^3+p_2^3+p_3^3+p_4^3+p_5^4+p_6^b \\end{equation*} is solvable with\n$x$ being an almost-prime $\\mathcal{P}_{r(b)}$ and the other variables primes,\nwhere $r(b)$ is defined in the Theorem. This result constitutes an improvement\nupon that of Lü and Mu.\n",
"title": "Waring-Goldbach Problem: One Square, Four Cubes and Higher Powers"
} | null | null | null | null | true | null | 19415 | null | Default | null | null |
null | {
"abstract": " We use recent results by Bainbridge-Chen-Gendron-Grushevsky-Moeller on\ncompactifications of strata of abelian differentials to give a comprehensive\nsolution to the realizability problem for effective tropical canonical divisors\nin equicharacteristic zero. Given a pair $(\\Gamma, D)$ consisting of a stable\ntropical curve $\\Gamma$ and a divisor $D$ in the canonical linear system on\n$\\Gamma$, we give a purely combinatorial condition to decide whether there is a\nsmooth curve $X$ over a non-Archimedean field whose stable reduction has\n$\\Gamma$ as its dual tropical curve together with a effective canonical divisor\n$K_X$ that specializes to $D$. Along the way, we develop a moduli-theoretic\nframework to understand Baker's specialization of divisors from algebraic to\ntropical curves as a natural toroidal tropicalization map in the sense of\nAbramovich-Caporaso-Payne.\n",
"title": "Realizability of tropical canonical divisors"
} | null | null | [
"Mathematics"
]
| null | true | null | 19416 | null | Validated | null | null |
null | {
"abstract": " Interference arises when an individual's potential outcome depends on the\nindividual treatment level, but also on the treatment level of others. A common\nassumption in the causal inference literature in the presence of interference\nis partial interference, implying that the population can be partitioned in\nclusters of individuals whose potential outcomes only depend on the treatment\nof units within the same cluster. Previous literature has defined average\npotential outcomes under counterfactual scenarios where treatments are randomly\nallocated to units within a cluster. However, within clusters there may be\nunits that are more or less likely to receive treatment based on covariates or\nneighbors' treatment. We define new estimands that describe average potential\noutcomes for realistic counterfactual treatment allocation programs, extending\nexisting estimands to take into consideration the units' covariates and\ndependence between units' treatment assignment. We further propose entirely new\nestimands for population-level interventions over the collection of clusters,\nwhich correspond in the motivating setting to regulations at the federal (vs.\ncluster or regional) level. We discuss these estimands, propose unbiased\nestimators and derive asymptotic results as the number of clusters grows.\nFinally, we estimate effects in a comparative effectiveness study of power\nplant emission reduction technologies on ambient ozone pollution.\n",
"title": "Causal inference for interfering units with cluster and population level treatment allocation programs"
} | null | null | null | null | true | null | 19417 | null | Default | null | null |
null | {
"abstract": " We present a unifying framework to solve several computer vision problems\nwith event cameras: motion, depth and optical flow estimation. The main idea of\nour framework is to find the point trajectories on the image plane that are\nbest aligned with the event data by maximizing an objective function: the\ncontrast of an image of warped events. Our method implicitly handles data\nassociation between the events, and therefore, does not rely on additional\nappearance information about the scene. In addition to accurately recovering\nthe motion parameters of the problem, our framework produces motion-corrected\nedge-like images with high dynamic range that can be used for further scene\nanalysis. The proposed method is not only simple, but more importantly, it is,\nto the best of our knowledge, the first method that can be successfully applied\nto such a diverse set of important vision tasks with event cameras.\n",
"title": "A Unifying Contrast Maximization Framework for Event Cameras, with Applications to Motion, Depth, and Optical Flow Estimation"
} | null | null | null | null | true | null | 19418 | null | Default | null | null |
null | {
"abstract": " Machine learning models are notoriously difficult to interpret and debug.\nThis is particularly true of neural networks. In this work, we introduce\nautomated software testing techniques for neural networks that are well-suited\nto discovering errors which occur only for rare inputs. Specifically, we\ndevelop coverage-guided fuzzing (CGF) methods for neural networks. In CGF,\nrandom mutations of inputs to a neural network are guided by a coverage metric\ntoward the goal of satisfying user-specified constraints. We describe how fast\napproximate nearest neighbor algorithms can provide this coverage metric. We\nthen discuss the application of CGF to the following goals: finding numerical\nerrors in trained neural networks, generating disagreements between neural\nnetworks and quantized versions of those networks, and surfacing undesirable\nbehavior in character level language models. Finally, we release an open source\nlibrary called TensorFuzz that implements the described techniques.\n",
"title": "TensorFuzz: Debugging Neural Networks with Coverage-Guided Fuzzing"
} | null | null | null | null | true | null | 19419 | null | Default | null | null |
null | {
"abstract": " Inferring interactions between processes promises deeper insight into\nmechanisms underlying network phenomena. Renormalised partial directed\ncoherence (rPDC) is a frequency-domain representation of the concept of Granger\ncausality while directed partial correlation (DPC) is an alternative approach\nfor quantifying Granger causality in the time domain. Both methodologies have\nbeen successfully applied to neurophysiological signals for detecting directed\nrelationships. This paper introduces their application to climatological time\nseries. We first discuss the application to ENSO -- Monsoon interaction, and\nthen apply the methodologies to the more challenging air-sea interaction in the\nSouth Atlantic Convergence Zone (SACZ). While in the first case the results\nobtained are fully consistent with present knowledge in climate modeling, in\nthe second case the results are, as expected, less clear, and to fully\nelucidate the SACZ air-sea interaction, further investigations on the\nspecificity and sensitivity of these methodologies are needed.\n",
"title": "Inferring directed climatic interactions with renormalized partial directed coherence and directed partial correlation"
} | null | null | null | null | true | null | 19420 | null | Default | null | null |
null | {
"abstract": " The Alvarez-Macovski method [Alvarez, R. E and Macovski, A.,\n\"Energy-selective reconstructions in X-ray computerized tomography\", Phys. Med.\nBiol. (1976), 733--44] requires the inversion of the transformation from the\nline integrals of the basis set coefficients to measurements with multiple\nx-ray spectra. Analytical formulas for invertibility of the transformation from\ntwo measurements to two line integrals are derived. It is found that\nnon-invertible systems have near zero Jacobian determinants on a nearly\nstraight line in the line integrals plane. Formulas are derived for the points\nwhere the line crosses the axes, thus determining the line. Additional formulas\nare derived for the values of the terms of the Jacobian determinant at the\nendpoints of the line of non-invertibility. The formulas are applied to a set\nof spectra including one suggested by Levine that is not invertible as well as\nsimilar spectra that are invertible and voltage switched x-ray tube spectra\nthat are also invertible. An iterative inverse transformation algorithm\nexhibits large errors with non-invertible spectra.\n",
"title": "Conditions for the invertibility of dual energy data"
} | null | null | null | null | true | null | 19421 | null | Default | null | null |
null | {
"abstract": " Using our results about Lorentzian Kac--Moody algebras and arithmetic mirror\nsymmetry, we give six series of examples of lattice-polarized K3 surfaces with\nautomorphic discriminant.\n",
"title": "Examples of lattice-polarized K3 surfaces with automorphic discriminant, and Lorentzian Kac--Moody algebras"
} | null | null | null | null | true | null | 19422 | null | Default | null | null |
null | {
"abstract": " We study the elementary characteristics of turbulence in a quantum ferrofluid\nthrough the context of a dipolar Bose gas condensing from a highly\nnon-equilibrium thermal state. Our simulations reveal that the dipolar\ninteractions drive the emergence of polarized turbulence and density\ncorrugations. The superfluid vortex lines and density fluctuations adopt a\ncolumnar or stratified configuration, depending on the sign of the dipolar\ninteractions, with the vortices tending to form in the low density regions to\nminimize kinetic energy. When the interactions are dominantly dipolar, the\ndecay of vortex line length is enhanced, closely following a $t^{-3/2}$\nbehaviour. This system poses exciting prospects for realizing stratified\nquantum turbulence and new levels of generating and controlling turbulence\nusing magnetic fields.\n",
"title": "Quantum ferrofluid turbulence"
} | null | null | null | null | true | null | 19423 | null | Default | null | null |
null | {
"abstract": " An extensive empirical literature documents a generally negative correlation,\nnamed the \"leverage effect,\" between asset returns and changes of volatility.\nIt is more challenging to establish such a return-volatility relationship for\njumps in high-frequency data. We propose new nonparametric methods to assess\nand test for a discontinuous leverage effect --- i.e. a relation between\ncontemporaneous jumps in prices and volatility. The methods are robust to\nmarket microstructure noise and build on a newly developed price-jump\nlocalization and estimation procedure. Our empirical investigation of six years\nof transaction data from 320 NASDAQ firms displays no unconditional negative\ncorrelation between price and volatility cojumps. We show, however, that there\nis a strong relation between price-volatility cojumps if one conditions on the\nsign of price jumps and whether the price jumps are market-wide or\nidiosyncratic. Firms' volatility levels strongly explain the cross-section of\ndiscontinuous leverage while debt-to-equity ratios have no significant\nexplanatory power.\n",
"title": "Estimation of the discontinuous leverage effect: Evidence from the NASDAQ order book"
} | null | null | [
"Mathematics",
"Statistics"
]
| null | true | null | 19424 | null | Validated | null | null |
null | {
"abstract": " Motivated by expansion in Cayley graphs, we show that there exist infinitely\nmany groups $G$ with a nontrivial irreducible unitary representation whose\naverage over every set of $o(\\log\\log|G|)$ elements of $G$ has operator norm $1\n- o(1)$. This answers a question of Lovett, Moore, and Russell, and strengthens\ntheir negative answer to a question of Wigderson.\nThe construction is the affine group of $\\mathbb{F}_p$ and uses the fact that\nfor every $A \\subset \\mathbb{F}_p\\setminus\\{0\\}$, there is a set of size\n$\\exp(\\exp(O(|A|)))$ that is almost invariant under both additive and\nmultiplicatpive translations by elements of $A$.\n",
"title": "Group representations that resist worst-case sampling"
} | null | null | [
"Mathematics"
]
| null | true | null | 19425 | null | Validated | null | null |
null | {
"abstract": " We introduce a class of normal complex spaces having only mild sin-gularities\n(close to quotient singularities) for which we generalize the notion of a\n(analytic) fundamental class for an analytic cycle and also the notion of a\nrelative fundamental class for an analytic family of cycles. We also generalize\nto these spaces the geometric intersection theory for analytic cycles with\nrational positive coefficients and show that it behaves well with respect to\nanalytic families of cycles. We prove that this intersection theory has most of\nthe usual properties of the standard geometric intersection theory on complex\nmanifolds, but with the exception that the intersection cycle of two cycles\nwith positive integral coefficients that intersect properly may have rational\ncoefficients. AMS classification. 32 C 20-32 C 25-32 C 36.\n",
"title": "On the nearly smooth complex spaces"
} | null | null | null | null | true | null | 19426 | null | Default | null | null |
null | {
"abstract": " With the rapid increase of compound databases available in medicinal and\nmaterial science, there is a growing need for learning representations of\nmolecules in a semi-supervised manner. In this paper, we propose an\nunsupervised hierarchical feature extraction algorithm for molecules (or more\ngenerally, graph-structured objects with fixed number of types of nodes and\nedges), which is applicable to both unsupervised and semi-supervised tasks. Our\nmethod extends recently proposed Paragraph Vector algorithm and incorporates\nneural message passing to obtain hierarchical representations of subgraphs. We\napplied our method to an unsupervised task and demonstrated that it outperforms\nexisting proposed methods in several benchmark datasets. We also experimentally\nshowed that semi-supervised tasks enhanced predictive performance compared with\nsupervised ones with labeled molecules only.\n",
"title": "Semi-supervised learning of hierarchical representations of molecules using neural message passing"
} | null | null | null | null | true | null | 19427 | null | Default | null | null |
null | {
"abstract": " We study the sample covariance matrix for real-valued data with general\npopulation covariance, as well as MANOVA-type covariance estimators in variance\ncomponents models under null hypotheses of global sphericity. In the limit as\nmatrix dimensions increase proportionally, the asymptotic spectra of such\nestimators may have multiple disjoint intervals of support, possibly\nintersecting the negative half line. We show that the distribution of the\nextremal eigenvalue at each regular edge of the support has a GOE Tracy-Widom\nlimit. Our proof extends a comparison argument of Ji Oon Lee and Kevin\nSchnelli, replacing a continuous Green function flow by a discrete Lindeberg\nswapping scheme.\n",
"title": "Tracy-Widom at each edge of real covariance and MANOVA estimators"
} | null | null | null | null | true | null | 19428 | null | Default | null | null |
null | {
"abstract": " Lung nodule classification is a class imbalanced problem because nodules are\nfound with much lower frequency than non-nodules. In the class imbalanced\nproblem, conventional classifiers tend to be overwhelmed by the majority class\nand ignore the minority class. We therefore propose cascaded convolutional\nneural networks to cope with the class imbalanced problem. In the proposed\napproach, multi-stage convolutional neural networks that perform as\nsingle-sided classifiers filter out obvious non-nodules. Successively, a\nconvolutional neural network trained with a balanced data set calculates nodule\nprobabilities. The proposed method achieved the sensitivity of 92.4\\% and 94.5%\nat 4 and 8 false positives per scan in Free Receiver Operating Characteristics\n(FROC) curve analysis, respectively.\n",
"title": "Multi-stage Neural Networks with Single-sided Classifiers for False Positive Reduction and its Evaluation using Lung X-ray CT Images"
} | null | null | [
"Computer Science"
]
| null | true | null | 19429 | null | Validated | null | null |
null | {
"abstract": " We introduce and examine a collection of unusual electromagnetic\ndisturbances. Each of these is an exact, monochromatic solution of Maxwell's\nequations in free space with looped electric and magnetic field lines of finite\nextent and a localised appearance in all three spatial dimensions. Included are\nthe first explicit examples of monochromatic electromagnetic knots. We also\nconsider the generation of our unusual electromagnetic disturbances in the\nlaboratory, at both low and high frequencies, and highlight possible directions\nfor future research, including the use of unusual electromagnetic disturbances\nas the basis of a new form of three-dimensional display.\n",
"title": "Monochromatic knots and other unusual electromagnetic disturbances: light localised in 3D"
} | null | null | null | null | true | null | 19430 | null | Default | null | null |
null | {
"abstract": " Skorobogatov constructed a bielliptic surface which is a counterexample to\nthe Hasse principle not explained by the Brauer-Manin obstruction. We show that\nthis surface has a $0$-cycle of degree 1, as predicted by a conjecture of\nColliot-Thélène.\n",
"title": "Zero-cycles of degree one on Skorobogatov's bielliptic surface"
} | null | null | null | null | true | null | 19431 | null | Default | null | null |
null | {
"abstract": " We define \"Locally Nameless Permutation Types\", which fuse permutation types\nas used in Nominal Isabelle with the locally nameless representation. We show\nthat this combination is particularly useful when formalizing programming\nlanguages where bound names may become free during execution (\"extrusion\"),\ncommon in process calculi. It inherits the generic definition of permutations\nand support, and associated lemmas, from the Nominal approach, and the ability\nto stay close to pencil-and-paper proofs from the locally nameless approach. We\nexplain how to use cofinite quantification in this setting, show why reasoning\nabout renaming is more important here than in languages without extrusion, and\nprovide results about infinite support, necessary when reasoning about\ncountable choice.\n",
"title": "Locally Nameless Permutation Types"
} | null | null | [
"Computer Science"
]
| null | true | null | 19432 | null | Validated | null | null |
null | {
"abstract": " We study dimension-free $L^p$ inequalities for $r$-variations of the\nHardy--Littlewood averaging operators defined over symmetric convex bodies in\n$\\mathbb R^d$.\n",
"title": "On dimension-free variational inequalities for averaging operators in $\\mathbb R^d$"
} | null | null | null | null | true | null | 19433 | null | Default | null | null |
null | {
"abstract": " The one-particle density matrix of the one-dimensional Tonks-Girardeau gas\nwith inhomogeneous density profile is calculated, thanks to a recent\nobservation that relates this system to a two-dimensional conformal field\ntheory in curved space. The result is asymptotically exact in the limit of\nlarge particle density and small density variation, and holds for arbitrary\ntrapping potentials. In the particular case of a harmonic trap, we recover a\nformula obtained by Forrester et al. [Phys. Rev. A 67, 043607 (2003)] from a\ndifferent method.\n",
"title": "One-particle density matrix of trapped one-dimensional impenetrable bosons from conformal invariance"
} | null | null | null | null | true | null | 19434 | null | Default | null | null |
null | {
"abstract": " We propose a 2D generalization to the $M$-band case of the dual-tree\ndecomposition structure (initially proposed by N. Kingsbury and further\ninvestigated by I. Selesnick) based on a Hilbert pair of wavelets. We\nparticularly address (\\textit{i}) the construction of the dual basis and\n(\\textit{ii}) the resulting directional analysis. We also revisit the necessary\npre-processing stage in the $M$-band case. While several reconstructions are\npossible because of the redundancy of the representation, we propose a new\noptimal signal reconstruction technique, which minimizes potential estimation\nerrors. The effectiveness of the proposed $M$-band decomposition is\ndemonstrated via denoising comparisons on several image types (natural,\ntexture, seismics), with various $M$-band wavelets and thresholding strategies.\nSignificant improvements in terms of both overall noise reduction and direction\npreservation are observed.\n",
"title": "Image Analysis Using a Dual-Tree $M$-Band Wavelet Transform"
} | null | null | null | null | true | null | 19435 | null | Default | null | null |
null | {
"abstract": " We introduce the Nonlinear Cauchy-Riemann equations as Bäcklund\ntransformations for several nonlinear and linear partial differential\nequations. From these equations we treat in details the Laplace and the\nLiouville equations by deriving general solution for the nonlinear Liouville\nequation. By Möbius transformation we relate solutions for the Poincare\nmodel of hyperbolic geometry, the Klein model in half-plane and the\npseudo-sphere. Conformal form of the constant curvature metrics in these\ngeometries, stereographic projections and special solutions are discussed. Then\nwe introduce the hyperbolic analog of the Riemann sphere, which we call the\nRiemann pseudosphere. We identify point at infinity on this pseudosphere and\nshow that it can be used in complex analysis as an alternative to usual Riemann\nsphere to extend the complex plane. Interpretation of symmetric and antipodal\npoints on both, the Riemann sphere and the Riemann pseudo-sphere, are given. By\nMöbius transformation and homogenous coordinates, the most general solution\nof Liouville equation as discussed by Crowdy is derived.\n",
"title": "Nonlinear Cauchy-Riemann Equations and Liouville Equation For Conformal Metrics"
} | null | null | null | null | true | null | 19436 | null | Default | null | null |
null | {
"abstract": " Deep Neural Networks have been shown to succeed at a range of natural\nlanguage tasks such as machine translation and text summarization. While tasks\non source code (ie, formal languages) have been considered recently, most work\nin this area does not attempt to capitalize on the unique opportunities offered\nby its known syntax and structure. In this work, we introduce SmartPaste, a\nfirst task that requires to use such information. The task is a variant of the\nprogram repair problem that requires to adapt a given (pasted) snippet of code\nto surrounding, existing source code. As first solutions, we design a set of\ndeep neural models that learn to represent the context of each variable\nlocation and variable usage in a data flow-sensitive way. Our evaluation\nsuggests that our models can learn to solve the SmartPaste task in many cases,\nachieving 58.6% accuracy, while learning meaningful representation of variable\nusages.\n",
"title": "SmartPaste: Learning to Adapt Source Code"
} | null | null | null | null | true | null | 19437 | null | Default | null | null |
null | {
"abstract": " Participants enrolled into randomized controlled trials (RCTs) often do not\nreflect real-world populations. Previous research in how best to translate RCT\nresults to target populations has focused on weighting RCT data to look like\nthe target data. Simulation work, however, has suggested that an outcome model\napproach may be preferable. Here we describe such an approach using source data\nfrom the 2x2 factorial NAVIGATOR trial which evaluated the impact of valsartan\nand nateglinide on cardiovascular outcomes and new-onset diabetes in a\npre-diabetic population. Our target data consisted of people with pre-diabetes\nserviced at our institution. We used Random Survival Forests to develop\nseparate outcome models for each of the 4 treatments, estimating the 5-year\nrisk difference for progression to diabetes and estimated the treatment effect\nin our local patient populations, as well as sub-populations, and the results\ncompared to the traditional weighting approach. Our models suggested that the\ntreatment effect for valsartan in our patient population was the same as in the\ntrial, whereas for nateglinide treatment effect was stronger than observed in\nthe original trial. Our effect estimates were more efficient than the weighting\napproach.\n",
"title": "An Outcome Model Approach to Translating a Randomized Controlled Trial Results to a Target Population"
} | null | null | null | null | true | null | 19438 | null | Default | null | null |
null | {
"abstract": " Effects of subgrid-scale gravity waves (GWs) on the diurnal migrating tides\nare investigated from the mesosphere to the upper thermosphere for September\nequinox conditions, using a general circulation model coupled with the extended\nspectral nonlinear GW parameterization of Yiğit et al (2008). Simulations\nwith GW effects cut-off above the turbopause and included in the entire\nthermosphere have been conducted. GWs appreciably impact the mean circulation\nand cool the thermosphere down by up to 12-18%. GWs significantly affect the\nwinds modulated by the diurnal migrating tide, in particular in the\nlow-latitude mesosphere and lower thermosphere and in the high-latitude\nthermosphere. These effects depend on the mutual correlation of the diurnal\nphases of the GW forcing and tides: GWs can either enhance or reduce the tidal\namplitude. In the low-latitude MLT, the correlation between the direction of\nthe deposited GW momentum and the tidal phase is positive due to propagation of\na broad spectrum of GW harmonics through the alternating winds. In the Northern\nHemisphere high-latitude thermosphere, GWs act against the tide due to an\nanti-correlation of tidal wind and GW momentum, while in the Southern\nhigh-latitudes they weakly enhance the tidal amplitude via a combination of a\npartial correlation of phases and GW-induced changes of the circulation. The\nvariable nature of GW effects on the thermal tide can be captured in GCMs\nprovided that a GW parameterization (1) considers a broad spectrum of\nharmonics, (2) properly describes their propagation, and (3) correctly accounts\nfor the physics of wave breaking/saturation.\n",
"title": "Influence of parameterized small-scale gravity waves on the migrating diurnal tide in Earth's thermosphere"
} | null | null | null | null | true | null | 19439 | null | Default | null | null |
null | {
"abstract": " Cricket is a game played between two teams which consists of eleven players\neach. Nowadays cricket game is becoming more and more popular in Bangladesh and\nother South Asian Countries. Before a match people are very enthusiastic about\nteam squads and \"Which players are playing today?\", \"How well will MR. X\nperform today?\" are the million dollar questions before a big match. This\narticle will propose a method using statistical data analysis for recommending\na national team squad. Recent match scorecards for domestic and international\nmatches played by a specific team in recent years are used to recommend the\nideal squad. Impact point or rating points of all players in different\nconditions are calculated and the best ones from different categories are\nchosen to form optimal line-ups. To evaluate the efficiency of impact point\nsystem, it will be tested with real time match data to see how much accuracy it\ngives.\n",
"title": "A Statistical Model for Ideal Team Selection for A National Cricket Squad"
} | null | null | null | null | true | null | 19440 | null | Default | null | null |
null | {
"abstract": " We study basic geometric properties of some group analogue of affine Springer\nfibers and compare with the classical Lie algebra affine Springer fibers. The\nmain purpose is to formulate a conjecture that relates the number of\nirreducible components of such varieties for a reductive group $G$ to certain\nweight multiplicities defined by the Langlands dual group $\\hat{G}$. We prove\nour conjecture in the case of unramified conjugacy class.\n",
"title": "The geometry of some generalized affine Springer fibers"
} | null | null | null | null | true | null | 19441 | null | Default | null | null |
null | {
"abstract": " Isogeometric analysis (IGA) is used to simulate a permanent magnet\nsynchronous machine. IGA uses non-uniform rational B-splines to parametrise the\ndomain and to approximate the solution space, thus allowing for the exact\ndescription of the geometries even on the coarsest level of mesh refinement.\nGiven the properties of the isogeometric basis functions, this choice\nguarantees a higher accuracy than the classical finite element method.\nFor dealing with the different stator and rotor topologies, the domain is\nsplit into two non-overlapping parts on which Maxwell's equations are solved\nindependently in the context of a classical Dirichlet-to-Neumann domain\ndecomposition scheme. The results show good agreement with the ones obtained by\nthe classical finite element approach.\n",
"title": "Modelling of a Permanent Magnet Synchronous Machine Using Isogeometric Analysis"
} | null | null | null | null | true | null | 19442 | null | Default | null | null |
null | {
"abstract": " In this paper, we study behavior of bidders in an experimental launch of a\nnew advertising auction platform by Zillow, as Zillow switched from negotiated\ncontracts to using auctions in several geographically isolated markets. A\nunique feature of this experiment is that the bidders in this market are real\nestate agents that bid on their own behalf, not using third-party\nintermediaries. To help bidders, Zillow also provided a recommendation tool\nthat suggested a bid for each bidder.\nOur main focus in this paper is on the decisions of bidders whether or not to\nadopt the platform-provided bid recommendation. We observe that a significant\nproportion of bidders do not use the recommended bid. Using the bid history of\nthe agents we infer their value, and compare the agents' regret with their\nactual bidding history with results they would have obtained following the\nrecommendation. We find that for half of the agents not following the\nrecommendation, the increased effort of experimenting with alternate bids\nresults in increased regret, i.e., they get decreased net value out of the\nsystem. The proportion of agents not following the recommendation slowly\ndeclines as markets mature, but it remains large in most markets that we\nobserve. We argue that the main reason for this phenomenon is the lack of trust\nin the platform-provided tool.\nOur work provides an empirical insight into possible design choices for\nauction-based online advertising platforms. While search advertising platforms\n(such as Google or Bing) allow bidders to submit bids on their own, many\ndisplay advertising platforms (such as Facebook) optimize bids on bidders'\nbehalf and eliminate the need for bids. Our empirical analysis shows that the\nlatter approach is preferred for markets where bidders are individuals, who\ndon't have access to third party tools, and who may question the fairness of\nplatform-provided suggestions.\n",
"title": "Learning and Trust in Auction Markets"
} | null | null | null | null | true | null | 19443 | null | Default | null | null |
null | {
"abstract": " The internet has become a central medium through which `networked publics'\nexpress their opinions and engage in debate. Offensive comments and personal\nattacks can inhibit participation in these spaces. Automated content moderation\naims to overcome this problem using machine learning classifiers trained on\nlarge corpora of texts manually annotated for offence. While such systems could\nhelp encourage more civil debate, they must navigate inherently normatively\ncontestable boundaries, and are subject to the idiosyncratic norms of the human\nraters who provide the training data. An important objective for platforms\nimplementing such measures might be to ensure that they are not unduly biased\ntowards or against particular norms of offence. This paper provides some\nexploratory methods by which the normative biases of algorithmic content\nmoderation systems can be measured, by way of a case study using an existing\ndataset of comments labelled for offence. We train classifiers on comments\nlabelled by different demographic subsets (men and women) to understand how\ndifferences in conceptions of offence between these groups might affect the\nperformance of the resulting models on various test sets. We conclude by\ndiscussing some of the ethical choices facing the implementers of algorithmic\nmoderation systems, given various desired levels of diversity of viewpoints\namongst discussion participants.\n",
"title": "Like trainer, like bot? Inheritance of bias in algorithmic content moderation"
} | null | null | [
"Computer Science"
]
| null | true | null | 19444 | null | Validated | null | null |
null | {
"abstract": " Representation learning is at the heart of what makes deep learning\neffective. In this work, we introduce a new framework for representation\nlearning that we call \"Holographic Neural Architectures\" (HNAs). In the same\nway that an observer can experience the 3D structure of a holographed object by\nlooking at its hologram from several angles, HNAs derive Holographic\nRepresentations from the training set. These representations can then be\nexplored by moving along a continuous bounded single dimension. We show that\nHNAs can be used to make generative networks, state-of-the-art regression\nmodels and that they are inherently highly resistant to noise. Finally, we\nargue that because of their denoising abilities and their capacity to\ngeneralize well from very few examples, models based upon HNAs are particularly\nwell suited for biological applications where training examples are rare or\nnoisy.\n",
"title": "Holographic Neural Architectures"
} | null | null | null | null | true | null | 19445 | null | Default | null | null |
null | {
"abstract": " Topological Data Analysis (tda) is a recent and fast growing eld providing a\nset of new topological and geometric tools to infer relevant features for\npossibly complex data. This paper is a brief introduction, through a few\nselected topics, to basic fundamental and practical aspects of tda for non\nexperts. 1 Introduction and motivation Topological Data Analysis (tda) is a\nrecent eld that emerged from various works in applied (algebraic) topology and\ncomputational geometry during the rst decade of the century. Although one can\ntrace back geometric approaches for data analysis quite far in the past, tda\nreally started as a eld with the pioneering works of Edelsbrunner et al. (2002)\nand Zomorodian and Carlsson (2005) in persistent homology and was popularized\nin a landmark paper in 2009 Carlsson (2009). tda is mainly motivated by the\nidea that topology and geometry provide a powerful approach to infer robust\nqualitative, and sometimes quantitative, information about the structure of\ndata-see, e.g. Chazal (2017). tda aims at providing well-founded mathematical,\nstatistical and algorithmic methods to infer, analyze and exploit the complex\ntopological and geometric structures underlying data that are often represented\nas point clouds in Euclidean or more general metric spaces. During the last few\nyears, a considerable eort has been made to provide robust and ecient data\nstructures and algorithms for tda that are now implemented and available and\neasy to use through standard libraries such as the Gudhi library (C++ and\nPython) Maria et al. (2014) and its R software interface Fasy et al. (2014a).\nAlthough it is still rapidly evolving, tda now provides a set of mature and\necient tools that can be used in combination or complementary to other data\nsciences tools. The tdapipeline. tda has recently known developments in various\ndirections and application elds. There now exist a large variety of methods\ninspired by topological and geometric approaches. Providing a complete overview\nof all these existing approaches is beyond the scope of this introductory\nsurvey. However, most of them rely on the following basic and standard pipeline\nthat will serve as the backbone of this paper: 1. The input is assumed to be a\nnite set of points coming with a notion of distance-or similarity between them.\nThis distance can be induced by the metric in the ambient space (e.g. the\nEuclidean metric when the data are embedded in R d) or come as an intrinsic\nmetric dened by a pairwise distance matrix. The denition of the metric on the\ndata is usually given as an input or guided by the application. It is however\nimportant to notice that the choice of the metric may be critical to reveal\ninteresting topological and geometric features of the data.\n",
"title": "An introduction to Topological Data Analysis: fundamental and practical aspects for data scientists"
} | null | null | [
"Computer Science",
"Mathematics",
"Statistics"
]
| null | true | null | 19446 | null | Validated | null | null |
null | {
"abstract": " The resemblance between the methods used in quantum-many body physics and in\nmachine learning has drawn considerable attention. In particular, tensor\nnetworks (TNs) and deep learning architectures bear striking similarities to\nthe extent that TNs can be used for machine learning. Previous results used\none-dimensional TNs in image recognition, showing limited scalability and\nflexibilities. In this work, we train two-dimensional hierarchical TNs to solve\nimage recognition problems, using a training algorithm derived from the\nmultipartite entanglement renormalization ansatz. This approach introduces\nnovel mathematical connections among quantum many-body physics, quantum\ninformation theory, and machine learning. While keeping the TN unitary in the\ntraining phase, TN states are defined, which optimally encode classes of images\ninto quantum many-body states. We study the quantum features of the TN states,\nincluding quantum entanglement and fidelity. We find these quantities could be\nnovel properties that characterize the image classes, as well as the machine\nlearning tasks. Our work could contribute to the research on\nidentifying/modeling quantum artificial intelligences.\n",
"title": "Machine Learning by Two-Dimensional Hierarchical Tensor Networks: A Quantum Information Theoretic Perspective on Deep Architectures"
} | null | null | null | null | true | null | 19447 | null | Default | null | null |
null | {
"abstract": " We study how to effectively leverage expert feedback to learn sequential\ndecision-making policies. We focus on problems with sparse rewards and long\ntime horizons, which typically pose significant challenges in reinforcement\nlearning. We propose an algorithmic framework, called hierarchical guidance,\nthat leverages the hierarchical structure of the underlying problem to\nintegrate different modes of expert interaction. Our framework can incorporate\ndifferent combinations of imitation learning (IL) and reinforcement learning\n(RL) at different levels, leading to dramatic reductions in both expert effort\nand cost of exploration. Using long-horizon benchmarks, including Montezuma's\nRevenge, we demonstrate that our approach can learn significantly faster than\nhierarchical RL, and be significantly more label-efficient than standard IL. We\nalso theoretically analyze labeling cost for certain instantiations of our\nframework.\n",
"title": "Hierarchical Imitation and Reinforcement Learning"
} | null | null | null | null | true | null | 19448 | null | Default | null | null |
null | {
"abstract": " We investigate the effects of social interactions in task al- location using\nEvolutionary Game Theory (EGT). We propose a simple task-allocation game and\nstudy how different learning mechanisms can give rise to specialised and non-\nspecialised colonies under different ecological conditions. By combining\nagent-based simulations and adaptive dynamics we show that social learning can\nresult in colonies of generalists or specialists, depending on ecological\nparameters. Agent-based simulations further show that learning dynamics play a\ncrucial role in task allocation. In particular, introspective individual\nlearning readily favours the emergence of specialists, while a process\nresembling task recruitment favours the emergence of generalists.\n",
"title": "Social learning in a simple task allocation game"
} | null | null | null | null | true | null | 19449 | null | Default | null | null |
null | {
"abstract": " A powerful data transformation method named guided projections is proposed\ncreating new possibilities to reveal the group structure of high-dimensional\ndata in the presence of noise variables. Utilising projections onto a space\nspanned by a selection of a small number of observations allows measuring the\nsimilarity of other observations to the selection based on orthogonal and score\ndistances. Observations are iteratively exchanged from the selection creating a\nnon-random sequence of projections which we call guided projections. In\ncontrast to conventional projection pursuit methods, which typically identify a\nlow-dimensional projection revealing some interesting features contained in the\ndata, guided projections generate a series of projections that serve as a basis\nnot just for diagnostic plots but to directly investigate the group structure\nin data. Based on simulated data we identify the strengths and limitations of\nguided projections in comparison to commonly employed data transformation\nmethods. We further show the relevance of the transformation by applying it to\nreal-world data sets.\n",
"title": "Guided projections for analysing the structure of high-dimensional data"
} | null | null | null | null | true | null | 19450 | null | Default | null | null |
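The guided-projections record above measures how similar each observation is to a small selection via orthogonal and score distances after projecting onto the space spanned by the selected observations. The numpy sketch below shows one plausible way to compute such distances for a fixed selection; the iterative exchange of observations that defines the guided sequence of projections in the paper is not reproduced here.

```python
# Hypothetical sketch: score and orthogonal distances of each observation (row of X)
# relative to the subspace spanned by a small selection of observations.
import numpy as np

def projection_distances(X, selected_idx):
    S = X[selected_idx]                      # selected observations
    Q, _ = np.linalg.qr(S.T)                 # orthonormal basis of their span
    scores = X @ Q                           # coordinates inside the subspace
    residuals = X - scores @ Q.T             # component orthogonal to the subspace
    score_dist = np.linalg.norm(scores, axis=1)
    orth_dist = np.linalg.norm(residuals, axis=1)
    return score_dist, orth_dist

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 10))
sd, od = projection_distances(X, selected_idx=[0, 3, 7])
```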
null | {
"abstract": " Cascading failures are a critical vulnerability of complex information or\ninfrastructure networks. Here we investigate the properties of load-based\ncascading failures in real and synthetic spatially-embedded network structures,\nand propose mitigation strategies to reduce the severity of damages caused by\nsuch failures. We introduce a stochastic method for optimal heterogeneous\ndistribution of resources (node capacities) subject to a fixed total cost.\nAdditionally, we design and compare the performance of networks with N-stable\nand (N-1)-stable network-capacity allocations by triggering cascades using\nvarious real-world node-attack and node-failure scenarios. We show that failure\nmitigation through increased node protection can be effectively achieved\nagainst single node failures. However, mitigating against multiple node\nfailures is much more difficult due to the combinatorial increase in possible\nfailures. We analyze the robustness of the system with increasing protection,\nand find that a critical tolerance exists at which the system undergoes a phase\ntransition, and above which the network almost completely survives an attack.\nMoreover, we show that cascade-size distributions measured in this region\nexhibit a power-law decay. Finally, we find a strong correlation between\ncascade sizes induced by individual nodes and sets of nodes. We also show that\nnetwork topology alone is a weak factor in determining the progression of\ncascading failures.\n",
"title": "Limits of Predictability of Cascading Overload Failures in Spatially-Embedded Networks with Distributed Flows"
} | null | null | [
"Computer Science",
"Physics"
]
| null | true | null | 19451 | null | Validated | null | null |
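The record above studies load-based cascading failures with heterogeneous node capacities. A toy Motter-Lai-style cascade on a random graph is sketched below purely for illustration; the load model, the capacity-allocation scheme and the attack scenarios in the paper are more elaborate, and the tolerance parameter alpha here is an assumption.

```python
# Toy load-based cascade: load = betweenness centrality, capacity = (1 + alpha) * initial load.
# This is an illustrative Motter-Lai-style model, not the paper's exact setup.
import networkx as nx

def cascade(G, removed, alpha=0.2):
    capacity = {v: (1 + alpha) * l for v, l in nx.betweenness_centrality(G).items()}
    H = G.copy()
    H.remove_nodes_from(removed)
    failed = set(removed)
    while True:
        load = nx.betweenness_centrality(H)
        overloaded = [v for v in H if load[v] > capacity[v]]
        if not overloaded:
            return failed
        H.remove_nodes_from(overloaded)
        failed.update(overloaded)

G = nx.erdos_renyi_graph(100, 0.05, seed=2)
print(len(cascade(G, removed=[0])), "nodes failed")
```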
null | {
"abstract": " Despite its extremely weak intrinsic spin-orbit coupling (SOC), graphene has\nbeen shown to acquire considerable SOC by proximity coupling with exfoliated\ntransition metal dichalcogenides (TMDs). Here we demonstrate strong induced\nRashba SOC in graphene that is proximity coupled to a monolayer TMD film, MoS2\nor WSe2, grown by chemical vapor deposition with drastically different Fermi\nlevel positions. Graphene/TMD heterostructures are fabricated with a\npickup-transfer technique utilizing hexagonal boron nitride, which serves as a\nflat template to promote intimate contact and therefore a strong interfacial\ninteraction between TMD and graphene as evidenced by quenching of the TMD\nphotoluminescence. We observe strong induced graphene SOC that manifests itself\nin a pronounced weak anti-localization (WAL) effect in the graphene\nmagnetoconductance. The spin relaxation rate extracted from the WAL analysis\nvaries linearly with the momentum scattering time and is independent of the\ncarrier type. This indicates a dominantly Dyakonov-Perel spin relaxation\nmechanism caused by the induced Rashba SOC. Our analysis yields a Rashba SOC\nenergy of ~1.5 meV in graphene/WSe2 and ~0.9 meV in graphene/MoS2,\nrespectively. The nearly electron-hole symmetric nature of the induced Rashba\nSOC provides a clue to possible underlying SOC mechanisms.\n",
"title": "Strong electron-hole symmetric Rashba spin-orbit coupling in graphene/monolayer transition metal dichalcogenide heterostructures"
} | null | null | [
"Physics"
]
| null | true | null | 19452 | null | Validated | null | null |
null | {
"abstract": " We present a blind multiframe image-deconvolution method based on robust\nstatistics. The usual shortcomings of iterative optimization of the likelihood\nfunction are alleviated by minimizing the M-scale of the residuals, which\nachieves more uniform convergence across the image. We focus on the\ndeconvolution of astronomical images, which are among the most challenging due\nto their huge dynamic ranges and the frequent presence of large noise-dominated\nregions in the images. We show that high-quality image reconstruction is\npossible even in super-resolution and without the use of traditional\nregularization terms. Using a robust \\r{ho}-function is straightforward to\nimplement in a streaming setting and, hence our method is applicable to the\nlarge volumes of astronomy images. The power of our method is demonstrated on\nobservations from the Sloan Digital Sky Survey (Stripe 82) and we briefly\ndiscuss the feasibility of a pipeline based on Graphical Processing Units for\nthe next generation of telescope surveys.\n",
"title": "Robust Statistics for Image Deconvolution"
} | null | null | [
"Computer Science",
"Physics"
]
| null | true | null | 19453 | null | Validated | null | null |
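The deconvolution record above minimizes the M-scale of the residuals rather than a squared loss. For reference, a standard textbook-style definition of the M-scale of residuals is stated below; the particular $\rho$ and constant $\delta$ used in the paper are not specified here.

```latex
% M-scale \sigma_M of residuals r_1,\dots,r_n: the solution \sigma of
\frac{1}{n}\sum_{i=1}^{n} \rho\!\left(\frac{r_i}{\sigma}\right) = \delta ,
% where \rho is a bounded, even robust loss, nondecreasing on [0,\infty),
% and \delta \in (0,1) fixes the breakdown point.
```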
null | {
"abstract": " We consider the framework of aggregative games, in which the cost function of\neach agent depends on his own strategy and on the average population strategy.\nAs first contribution, we investigate the relations between the concepts of\nNash and Wardrop equilibrium. By exploiting a characterization of the two\nequilibria as solutions of variational inequalities, we bound their distance\nwith a decreasing function of the population size. As second contribution, we\npropose two decentralized algorithms that converge to such equilibria and are\ncapable of coping with constraints coupling the strategies of different agents.\nFinally, we study the applications of charging of electric vehicles and of\nroute choice on a road network.\n",
"title": "Nash and Wardrop equilibria in aggregative games with coupling constraints"
} | null | null | null | null | true | null | 19454 | null | Default | null | null |
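The aggregative-games record above relies on a characterization of Nash and Wardrop equilibria as solutions of variational inequalities. In generic form (notation assumed, not the paper's), both can be written as the same type of variational inequality with different operators, depending on whether each agent accounts for its own influence on the population average:

```latex
% Generic variational-inequality characterization (assumed notation):
% x^\star \in \mathcal{X} is an equilibrium iff
F(x^\star)^{\top}(x - x^\star) \;\ge\; 0 \qquad \text{for all } x \in \mathcal{X},
% where, componentwise, F_i(x) = \nabla_{x_i} J_i\big(x_i, \sigma(x)\big) for Nash equilibria
% (the agent's own contribution to the average \sigma(x) is differentiated), while for
% Wardrop equilibria that contribution is treated as fixed when taking the gradient.
```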
null | {
"abstract": " Condensed matter systems that simultaneously exhibit superconductivity and\nferromagnetism are rare due the antagonistic relationship between conventional\nspin-singlet superconductivity and ferromagnetic order. In materials in which\nsuperconductivity and magnetic order is known to coexist (such as some\nheavy-fermion materials), the superconductivity is thought to be of an\nunconventional nature. Recently, the conducting gas that lives at the interface\nbetween the perovskite band insulators LaAlO$_3$ (LAO) and SrTiO$_3$ (STO) has\nalso been shown to host both superconductivity and magnetism. Most previous\nresearch has focused on LAO/STO samples in which the interface is in the (001)\ncrystal plane. Relatively little work has focused on the (111) crystal\norientation, which has hexagonal symmetry at the interface, and has been\npredicted to have potentially interesting topological properties, including\nunconventional superconducting pairing states. Here we report measurements of\nthe magnetoresistance of (111) LAO/STO heterostructures at temperatures at\nwhich they are also superconducting. As with the (001) structures, the\nmagnetoresistance is hysteretic, indicating the coexistence of magnetism and\nsuperconductivity, but in addition, we find that this magnetoresistance is\nanisotropic. Such an anisotropic response is completely unexpected in the\nsuperconducting state, and suggests that (111) LAO/STO heterostructures may\nsupport unconventional superconductivity.\n",
"title": "Magnetoresistance in the superconducting state at the (111) LaAlO$_3$/SrTiO$_3$ interface"
} | null | null | null | null | true | null | 19455 | null | Default | null | null |
null | {
"abstract": " The paper presents the application of Variational Autoencoders (VAE) for data\ndimensionality reduction and explorative analysis of mass spectrometry imaging\ndata (MSI). The results confirm that VAEs are capable of detecting the patterns\nassociated with the different tissue sub-types with performance than standard\napproaches.\n",
"title": "Variational autoencoders for tissue heterogeneity exploration from (almost) no preprocessed mass spectrometry imaging data"
} | null | null | null | null | true | null | 19456 | null | Default | null | null |
null | {
"abstract": " In this paper, we propose a new algorithm for learning general\nlatent-variable probabilistic graphical models using the techniques of\npredictive state representation, instrumental variable regression, and\nreproducing-kernel Hilbert space embeddings of distributions. Under this new\nlearning framework, we first convert latent-variable graphical models into\ncorresponding latent-variable junction trees, and then reduce the hard\nparameter learning problem into a pipeline of supervised learning problems,\nwhose results will then be used to perform predictive belief propagation over\nthe latent junction tree during the actual inference procedure. We then give\nproofs of our algorithm's correctness, and demonstrate its good performance in\nexperiments on one synthetic dataset and two real-world tasks from\ncomputational biology and computer vision - classifying DNA splice junctions\nand recognizing human actions in videos.\n",
"title": "Learning General Latent-Variable Graphical Models with Predictive Belief Propagation and Hilbert Space Embeddings"
} | null | null | null | null | true | null | 19457 | null | Default | null | null |
null | {
"abstract": " The answer is Yes! We indeed find that interacting dark energy can alleviate\nthe current tension on the value of the Hubble constant $H_0$ between the\nCosmic Microwave Background anisotropies constraints obtained from the Planck\nsatellite and the recent direct measurements reported by Riess et al. 2016. The\ncombination of these two datasets points towards an evidence for a non-zero\ndark matter-dark energy coupling $\\xi$ at more than two standard deviations,\nwith $\\xi=-0.26_{-0.12}^{+0.16}$ at $95\\%$ CL. However the $H_0$ tension is\nbetter solved when the equation of state of the interacting dark energy\ncomponent is allowed to freely vary, with a phantom-like equation of state\n$w=-1.184\\pm0.064$ (at $68 \\%$ CL), ruling out the pure cosmological constant\ncase, $w=-1$, again at more than two standard deviations. When Planck data are\ncombined with external datasets, as BAO, JLA Supernovae Ia luminosity\ndistances, cosmic shear or lensing data, we find good consistency with the\ncosmological constant scenario and no compelling evidence for a dark\nmatter-dark energy coupling.\n",
"title": "Can interacting dark energy solve the $H_0$ tension?"
} | null | null | null | null | true | null | 19458 | null | Default | null | null |
null | {
"abstract": " Dual Fabry-Perot cavity based optical refractometry (DFPC-OR) has a high\npotential for assessments of gas density. However, drifts of the FP cavity\noften limit its performance. We show that by the use of two narrow-linewidth\nfiber lasers locked to two high finesse cavities and Allan-Werle plots that\ndrift-free DFPC-OR can be obtained for short measurement times (for which the\ndrifts of the cavity can be disregarded). Based on this, a novel strategy,\ntermed fast switching DFPC-OR (FS-DFPC-OR), is presented. A set of novel\nmethodologies for assessment of both gas density and flow rates (in particular\nfrom small leaks) that are not restricted by the conventional limitations\nimposed by the drifts of the cavity are presented. The methodologies deal with\nassessments in both open and closed (finite-sized) compartments. They\ncircumvent the problem with volumetric expansion, i.e. that the gas density in\na measurement cavity is not the same as that in the closed external compartment\nthat should be assessed, by performing a pair of measurements in rapid\nsuccession; the first one serves the purpose of assessing the density of the\ngas that has been transferred into the measurement cavity by the gas\nequilibration process, while the 2nd is used to automatically calibrate the\nsystem with respect to the relative volumes of the measurement cavity and the\nexternal compartment. The methodologies for assessments of leak rates comprise\ntriple cavity evacuation assessments, comprising two measurements performed in\nrapid succession, supplemented by a 3rd measurement a certain time thereafter.\nA clear explanation of why the technique has such a small temperature\ndependence is given. It is concluded that FS-DFPC-OR constitutes a novel\nstrategy that can be used for precise and accurate assessment of gas number\ndensity and gas flows under a variety of conditions, in particular\nnon-temperature stabilized ones.\n",
"title": "Fast Switching Dual Fabry-Perot Cavity Optical Refractometry - Methodologies for Accurate Assessment of Gas Density"
} | null | null | null | null | true | null | 19459 | null | Default | null | null |
null | {
"abstract": " We introduce the Self-Annotated Reddit Corpus (SARC), a large corpus for\nsarcasm research and for training and evaluating systems for sarcasm detection.\nThe corpus has 1.3 million sarcastic statements -- 10 times more than any\nprevious dataset -- and many times more instances of non-sarcastic statements,\nallowing for learning in both balanced and unbalanced label regimes. Each\nstatement is furthermore self-annotated -- sarcasm is labeled by the author,\nnot an independent annotator -- and provided with user, topic, and conversation\ncontext. We evaluate the corpus for accuracy, construct benchmarks for sarcasm\ndetection, and evaluate baseline methods.\n",
"title": "A Large Self-Annotated Corpus for Sarcasm"
} | null | null | null | null | true | null | 19460 | null | Default | null | null |
null | {
"abstract": " We use a model of aerosol microphysics to investigate the impact of\nhigh-altitude photochemical aerosols on the transmission spectra and\natmospheric properties of close-in exoplanets, such as HD209458b and HD189733b.\nThe results depend strongly on the temperature profiles in the middle and upper\natmosphere that are poorly understood. Nevertheless, our model of HD189733b,\nbased on the most recently inferred temperature profiles, produces an aerosol\ndistribution that matches the observed transmission spectrum. We argue that the\nhotter temperature of HD209458b inhibits the production of high-altitude\naerosols and leads to the appearance of a more clear atmosphere than on\nHD189733b. The aerosol distribution also depends on the particle composition,\nthe photochemical production, and the atmospheric mixing. Due to degeneracies\namong these inputs, current data cannot constrain the aerosol properties in\ndetail. Instead, our work highlights the role of different factors in\ncontrolling the aerosol distribution that will prove useful in understanding\ndifferent observations, including those from future missions. For the\natmospheric mixing efficiency suggested by general circulation models (GCMs) we\nfind that aerosol particles are small ($\\sim$nm) and probably spherical. We\nfurther conclude that composition based on complex hydrocarbons (soots) is the\nmost likely candidate to survive the high temperatures in hot Jupiter\natmospheres. Such particles would have a significant impact on the energy\nbalance of HD189733b's atmosphere and should be incorporated in future studies\nof atmospheric structure. We also evaluate the contribution of external sources\nin the photochemical aerosol formation and find that their spectral signature\nis not consistent with observations.\n",
"title": "Aerosol properties in the atmospheres of extrasolar giant planets"
} | null | null | null | null | true | null | 19461 | null | Default | null | null |
null | {
"abstract": " Opinion formation in the population has attracted extensive research\ninterest. Various models have been introduced and studied, including the ones\nwith individuals' free will allowing them to change their opinions. Such\nmodels, however, have not taken into account the fact that individuals with\ndifferent opinions may have different levels of loyalty, and consequently,\ndifferent probabilities of changing their opinions. In this work, we study on\nhow the non-uniform distribution of the opinion changing probability may affect\nthe final state of opinion distribution. By simulating a few different cases\nwith different symmetric and asymmetric non-uniform patterns of opinion\nchanging probabilities, we demonstrate the significant effects that the\ndifferent loyalty levels of different opinions have on the final state of the\nopinion distribution.\n",
"title": "Influence of random opinion change in complex networks"
} | null | null | null | null | true | null | 19462 | null | Default | null | null |
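The record above varies the probability of changing opinion according to the opinion currently held. A minimal voter-model-style simulation with opinion-dependent switching probabilities is sketched below as an illustration; the update rule, the network and the probability values are assumptions, not the paper's exact model.

```python
# Toy opinion dynamics: a random node looks at a random neighbour and adopts its
# opinion with a probability that depends on the node's current opinion (its "loyalty").
import random
import networkx as nx

def simulate(G, change_prob, steps=10000, seed=3):
    rng = random.Random(seed)
    opinion = {v: rng.choice(list(change_prob)) for v in G}
    for _ in range(steps):
        v = rng.choice(list(G))
        neigh = list(G[v])
        if not neigh:
            continue
        u = rng.choice(neigh)
        if opinion[u] != opinion[v] and rng.random() < change_prob[opinion[v]]:
            opinion[v] = opinion[u]          # low change_prob = loyal opinion holders
    return opinion

G = nx.barabasi_albert_graph(200, 3, seed=3)
final = simulate(G, change_prob={"A": 0.9, "B": 0.1})    # B-holders are more loyal
print(sum(1 for o in final.values() if o == "B"), "nodes end with opinion B")
```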
null | {
"abstract": " Given data over variables $(X_1,...,X_m, Y)$ we consider the problem of\nfinding out whether $X$ jointly causes $Y$ or whether they are all confounded\nby an unobserved latent variable $Z$. To do so, we take an\ninformation-theoretic approach based on Kolmogorov complexity. In a nutshell,\nwe follow the postulate that first encoding the true cause, and then the\neffects given that cause, results in a shorter description than any other\nencoding of the observed variables.\nThe ideal score is not computable, and hence we have to approximate it. We\npropose to do so using the Minimum Description Length (MDL) principle. We\ncompare the MDL scores under the models where $X$ causes $Y$ and where there\nexists a latent variables $Z$ confounding both $X$ and $Y$ and show our scores\nare consistent. To find potential confounders we propose using latent factor\nmodeling, in particular, probabilistic PCA (PPCA).\nEmpirical evaluation on both synthetic and real-world data shows that our\nmethod, CoCa, performs very well -- even when the true generating process of\nthe data is far from the assumptions made by the models we use. Moreover, it is\nrobust as its accuracy goes hand in hand with its confidence.\n",
"title": "We Are Not Your Real Parents: Telling Causal from Confounded using MDL"
} | null | null | null | null | true | null | 19463 | null | Default | null | null |
null | {
"abstract": " We study the evolution of the eccentricity and inclination of protoplanetary\nembryos and low-mass protoplanets (from a fraction of an Earth mass to a few\nEarth masses) embedded in a protoplanetary disc, by means of three dimensional\nhydrodynamics calculations with radiative transfer in the diffusion limit. When\nthe protoplanets radiate in the surrounding disc the energy released by the\naccretion of solids, their eccentricity and inclination experience a growth\ntoward values which depend on the luminosity to mass ratio of the planet, which\nare comparable to the disc's aspect ratio and which are reached over timescales\nof a few thousand years. This growth is triggered by the appearance of a hot,\nunder-dense region in the vicinity of the planet. The growth rate of the\neccentricity is typically three times larger than that of the inclination. In\nlong term calculations, we find that the excitation of eccentricity and the\nexcitation of inclination are not independent. In the particular case in which\na planet has initially a very small eccentricity and inclination, the\neccentricity largely overruns the inclination. When the eccentricity reaches\nits asymptotic value, the growth of inclination is quenched, yielding an\neccentric orbit with a very low inclination. As a side result, we find that the\neccentricity and inclination of non-luminous planets are damped more vigorously\nin radiative discs than in isothermal discs.\n",
"title": "Evolution of eccentricity and inclination of hot protoplanets embedded in radiative discs"
} | null | null | null | null | true | null | 19464 | null | Default | null | null |
null | {
"abstract": " We discuss various forms of definitions in mathematics and describe rules\ngoverning them.\n",
"title": "Definitions in mathematics"
} | null | null | null | null | true | null | 19465 | null | Default | null | null |
null | {
"abstract": " Fix any field $K$ of characteristic $p$ such that $[K:K^p]$ is finite. We\ndiscuss excellence for Noetherian domains whose fraction field is $K$, showing\nfor example, that $R$ is excellent if and only if the Frobenius map is finite\non $R$. Furthermore, we show $R$ is excellent if and only if it admits some\nnon-zero $p^{-e}$-linear map for $R$ or equivalently, that $R$ is a solid\n$R$-algebra under Frobenius. In particular, this means that Frobenius split\nNoetherian domains that are generically $F$-finite are always excellent. We\nalso show that non-excellent rings are abundant and easy to construct in prime\ncharacteristic, even within the world of regular local rings of dimension one\nin function fields. This paper is mostly expository in nature.\n",
"title": "Excellence in prime characteristic"
} | null | null | null | null | true | null | 19466 | null | Default | null | null |
null | {
"abstract": " We consider the minimax estimation problem of a discrete distribution with\nsupport size $k$ under locally differential privacy constraints. A\nprivatization scheme is applied to each raw sample independently, and we need\nto estimate the distribution of the raw samples from the privatized samples. A\npositive number $\\epsilon$ measures the privacy level of a privatization\nscheme.\nIn our previous work (arXiv:1702.00610), we proposed a family of new\nprivatization schemes and the corresponding estimator. We also proved that our\nscheme and estimator are order optimal in the regime $e^{\\epsilon} \\ll k$ under\nboth $\\ell_2^2$ and $\\ell_1$ loss. In other words, for a large number of\nsamples the worst-case estimation loss of our scheme was shown to differ from\nthe optimal value by at most a constant factor. In this paper, we eliminate\nthis gap by showing asymptotic optimality of the proposed scheme and estimator\nunder the $\\ell_2^2$ (mean square) loss. More precisely, we show that for any\n$k$ and $\\epsilon,$ the ratio between the worst-case estimation loss of our\nscheme and the optimal value approaches $1$ as the number of samples tends to\ninfinity.\n",
"title": "Asymptotically optimal private estimation under mean square loss"
} | null | null | null | null | true | null | 19467 | null | Default | null | null |
null | {
"abstract": " The effect of the Coulomb repulsion of holes on the Cooper instability in an\nensemble of spin-polaron quasiparticles has been analyzed, taking into account\nthe peculiarities of the crystallographic structure of the CuO$_2$ plane, which\nare associated with the presence of two oxygen ions and one copper ion in the\nunit cell, as well as the strong spin-fermion coupling. The investigation of\nthe possibility of implementation superconducting phases with d-wave and s-wave\npairing of the order parameter symmetry has shown that in the entire doping\nregion only the d-wave pairing satisfies the self-consistency equations, while\nthere is no solution for the s-wave pairing. This result completely corresponds\nto the experimental data on cuprate HTSC. It has been demonstrated analytically\nthat the intersite Coulomb interaction does not affect the superconducting\nd-wave pairing, because its Fourier transform $V_q$ does not appear in the\nkernel of the corresponding integral equation.\n",
"title": "Coulomb repulsion of holes and competition between d_{x^2-y^2}-wave and s-wave parings in cuprate superconductors"
} | null | null | null | null | true | null | 19468 | null | Default | null | null |
null | {
"abstract": " Multiplayer Online Battle Arena (MOBA) games have received increasing\nworldwide popularity recently. In such games, players compete in teams against\neach other by controlling selected game avatars, each of which is designed with\ndifferent strengths and weaknesses. Intuitively, putting together game avatars\nthat complement each other (synergy) and suppress those of opponents\n(opposition) would result in a stronger team. In-depth understanding of synergy\nand opposition relationships among game avatars benefits player in making\ndecisions in game avatar drafting and gaining better prediction of match\nevents. However, due to intricate design and complex interactions between game\navatars, thorough understanding of their relationships is not a trivial task.\nIn this paper, we propose a latent variable model, namely Game Avatar\nEmbedding (GAE), to learn avatars' numerical representations which encode\nsynergy and opposition relationships between pairs of avatars. The merits of\nour model are twofold: (1) the captured synergy and opposition relationships\nare sensible to experienced human players' perception; (2) the learned\nnumerical representations of game avatars allow many important downstream\ntasks, such as similar avatar search, match outcome prediction, and avatar pick\nrecommender. To our best knowledge, no previous model is able to simultaneously\nsupport both features. Our quantitative and qualitative evaluations on real\nmatch data from three commercial MOBA games illustrate the benefits of our\nmodel.\n",
"title": "Modeling Game Avatar Synergy and Opposition through Embedding in Multiplayer Online Battle Arena Games"
} | null | null | null | null | true | null | 19469 | null | Default | null | null |
null | {
"abstract": " Tendon-driven hand orthoses have advantages over exoskeletons with respect to\nwearability and safety because of their low-profile design and ability to fit a\nrange of patients without requiring custom joint alignment. However, no\nexisting study on a wearable tendon-driven hand orthosis for stroke patients\npresents evidence that such devices can overcome spasticity given repeated use\nand fatigue, or discusses transmission efficiency. In this study, we propose\ntwo designs that provide effective force transmission by increasing moment arms\naround finger joints. We evaluate the designs with geometric models and\nexperiment using a 3D-printed artificial finger to find force and joint angle\ncharacteristics of the suggested structures. We also perform clinical tests\nwith stroke patients to demonstrate the feasibility of the designs. The testing\nsupports the hypothesis that the proposed designs efficiently elicit extension\nof the digits in patients with spasticity as compared to existing baselines.\n",
"title": "Design and Development of Effective Transmission Mechanisms on a Tendon Driven Hand Orthosis for Stroke Patients"
} | null | null | null | null | true | null | 19470 | null | Default | null | null |
null | {
"abstract": " For years security machine learning research has promised to obviate the need\nfor signature based detection by automatically learning to detect indicators of\nattack. Unfortunately, this vision hasn't come to fruition: in fact, developing\nand maintaining today's security machine learning systems can require\nengineering resources that are comparable to that of signature-based detection\nsystems, due in part to the need to develop and continuously tune the\n\"features\" these machine learning systems look at as attacks evolve. Deep\nlearning, a subfield of machine learning, promises to change this by operating\non raw input signals and automating the process of feature design and\nextraction. In this paper we propose the eXpose neural network, which uses a\ndeep learning approach we have developed to take generic, raw short character\nstrings as input (a common case for security inputs, which include artifacts\nlike potentially malicious URLs, file paths, named pipes, named mutexes, and\nregistry keys), and learns to simultaneously extract features and classify\nusing character-level embeddings and convolutional neural network. In addition\nto completely automating the feature design and extraction process, eXpose\noutperforms manual feature extraction based baselines on all of the intrusion\ndetection problems we tested it on, yielding a 5%-10% detection rate gain at\n0.1% false positive rate compared to these baselines.\n",
"title": "eXpose: A Character-Level Convolutional Neural Network with Embeddings For Detecting Malicious URLs, File Paths and Registry Keys"
} | null | null | null | null | true | null | 19471 | null | Default | null | null |
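The eXpose record above describes classifying short raw character strings (URLs, file paths, registry keys) with character-level embeddings followed by a convolutional network. A minimal PyTorch sketch of such an architecture is given below; the vocabulary size, sequence length, kernel sizes and channel counts are illustrative assumptions and do not reproduce the paper's exact network.

```python
# Illustrative character-level CNN for short strings, in the spirit of
# embedding + convolution + pooling + classifier; hyperparameters are assumptions.
import torch
import torch.nn as nn

class CharCNN(nn.Module):
    def __init__(self, vocab_size=128, embed_dim=32, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.convs = nn.ModuleList(
            [nn.Conv1d(embed_dim, 64, kernel_size=k) for k in (3, 4, 5)]
        )
        self.fc = nn.Linear(64 * 3, num_classes)

    def forward(self, x):                     # x: (batch, seq_len) of character ids
        e = self.embed(x).transpose(1, 2)     # (batch, embed_dim, seq_len)
        feats = [torch.relu(c(e)).max(dim=2).values for c in self.convs]
        return self.fc(torch.cat(feats, dim=1))

model = CharCNN()
dummy = torch.randint(0, 128, (8, 200))       # batch of 8 encoded strings
logits = model(dummy)                         # (8, 2)
```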
null | {
"abstract": " Software reusability has become much interesting because of increased quality\nand reduce cost. A good process of software reuse leads to enhance the\nreliability, productivity, quality and the reduction of time and cost. Current\nreuse techniques focuses on the reuse of software artifact which grounded on\nanticipated functionality whereas, the non-functional (quality) aspect are also\nimportant. So, Software reusability used here to expand quality and\nproductivity of software. It improves overall quality of software in minimum\nenergy and time. Main objective of this study was to present a reuse approach\nthat discovered that how software reuse improves the quality in Software\nIndustry. The V&V technique used for this purpose which is part of software\nquality management process, it checks the quality and correctness during the\nsoftware life cycle. A survey study conducted as QUESTIONAIR to find the impact\nof reuse approach on quality attributes which are requirement specification and\ndesign specification. Other quality enhancement techniques like ad hoc, CBSE,\nMBSE, Product line, COTS reuse checked on existing software industry. Results\nanalyzed with the help of MATLAB tool as it provides effective data management,\nwide range of options, better output organization, to check weather quality\nenhancement technique is affected due to reusability and how quality will\nimprove.\n",
"title": "A Software Reuse Approach and Its Effect On Software Quality, An Empirical Study for The Software Industry"
} | null | null | null | null | true | null | 19472 | null | Default | null | null |
null | {
"abstract": " We consider classical Merton problem of terminal wealth maximization in\nfinite horizon. We assume that the drift of the stock is following\nOrnstein-Uhlenbeck process and the volatility of it is following GARCH(1)\nprocess. In particular, both mean and volatility are unbounded. We assume that\nthere is Knightian uncertainty on the parameters of both mean and volatility.\nWe take that the investor has logarithmic utility function, and solve the\ncorresponding utility maximization problem explicitly. To the best of our\nknowledge, this is the first work on utility maximization with unbounded mean\nand volatility in Knightian uncertainty under nondominated priors.\n",
"title": "Portfolio Optimization with Nondominated Priors and Unbounded Parameters"
} | null | null | [
"Quantitative Finance"
]
| null | true | null | 19473 | null | Validated | null | null |
null | {
"abstract": " The lack of efficiency in urban diffusion is a debated issue, important for\nbiologists, urban specialists, planners and statisticians, both in developed\nand new developing countries. Many approaches have been considered to measure\nurban sprawl, i.e. chaotic urban expansion; such idea of chaos is here linked\nto the concept of entropy. Entropy, firstly introduced in information theory,\nrapidly became a standard tool in ecology, biology and geography to measure the\ndegree of heterogeneity among observations; in these contexts, entropy measures\nshould include spatial information. The aim of this paper is to employ a\nrigorous spatial entropy based approach to measure urban sprawl associated to\nthe diffusion of metropolitan cities. In order to assess the performance of the\nconsidered measures, a comparative study is run over alternative urban\nscenarios; afterwards, measures are used to quantify the degree of disorder in\nthe urban expansion of three cities in Europe. Results are easily interpretable\nand can be used both as an absolute measure of urban sprawl and for comparison\nover space and time.\n",
"title": "Measuring heterogeneity in urban expansion via spatial entropy"
} | null | null | null | null | true | null | 19474 | null | Default | null | null |
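The urban-sprawl record above builds on entropy as a heterogeneity measure, extended with spatial information. As a baseline ingredient, the plain Shannon entropy of land-use category shares can be computed as below; the spatial-entropy variants discussed in the paper additionally account for where the categories occur, which this sketch does not do, and the counts are toy numbers.

```python
# Shannon entropy of category shares over a gridded study area (baseline, non-spatial).
import numpy as np

def shannon_entropy(category_counts):
    p = np.asarray(category_counts, dtype=float)
    p = p[p > 0] / p.sum()
    return float(-(p * np.log(p)).sum())      # in nats; divide by log(K) to normalise

counts = [520, 130, 90, 40]                    # e.g. grid cells per land-use class (toy data)
print(shannon_entropy(counts))
```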
null | {
"abstract": " We derive a new Bayesian Information Criterion (BIC) by formulating the\nproblem of estimating the number of clusters in an observed data set as\nmaximization of the posterior probability of the candidate models. Given that\nsome mild assumptions are satisfied, we provide a general BIC expression for a\nbroad class of data distributions. This serves as a starting point when\nderiving the BIC for specific distributions. Along this line, we provide a\nclosed-form BIC expression for multivariate Gaussian distributed variables. We\nshow that incorporating the data structure of the clustering problem into the\nderivation of the BIC results in an expression whose penalty term is different\nfrom that of the original BIC. We propose a two-step cluster enumeration\nalgorithm. First, a model-based unsupervised learning algorithm partitions the\ndata according to a given set of candidate models. Subsequently, the number of\nclusters is determined as the one associated with the model for which the\nproposed BIC is maximal. The performance of the proposed two-step algorithm is\ntested using synthetic and real data sets.\n",
"title": "Bayesian Cluster Enumeration Criterion for Unsupervised Learning"
} | null | null | null | null | true | null | 19475 | null | Default | null | null |
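The record above proposes a two-step procedure: fit a model-based clustering for each candidate number of clusters, then select the candidate that optimises a cluster-enumeration BIC. The sketch below mimics that two-step structure with scikit-learn's Gaussian mixture model and its standard BIC; note that the paper derives a different penalty term, so this is only a stand-in for the proposed criterion.

```python
# Two-step cluster enumeration using the *standard* BIC as a stand-in criterion.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(4)
X = np.vstack([rng.normal(loc=m, size=(100, 2)) for m in (0.0, 5.0, 10.0)])

candidates = range(1, 7)
bics = {k: GaussianMixture(n_components=k, random_state=0).fit(X).bic(X)
        for k in candidates}
k_hat = min(bics, key=bics.get)               # sklearn's BIC is minimised
print("estimated number of clusters:", k_hat)
```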
null | {
"abstract": " Combinatorial interaction testing is an important software testing technique\nthat has seen lots of recent interest. It can reduce the number of test cases\nneeded by considering interactions between combinations of input parameters.\nEmpirical evidence shows that it effectively detects faults, in particular, for\nhighly configurable software systems. In real-world software testing, the input\nvariables may vary in how strongly they interact, variable strength\ncombinatorial interaction testing (VS-CIT) can exploit this for higher\neffectiveness. The generation of variable strength test suites is a\nnon-deterministic polynomial-time (NP) hard computational problem\n\\cite{BestounKamalFuzzy2017}. Research has shown that stochastic\npopulation-based algorithms such as particle swarm optimization (PSO) can be\nefficient compared to alternatives for VS-CIT problems. Nevertheless, they\nrequire detailed control for the exploitation and exploration trade-off to\navoid premature convergence (i.e. being trapped in local optima) as well as to\nenhance the solution diversity. Here, we present a new variant of PSO based on\nMamdani fuzzy inference system\n\\cite{Camastra2015,TSAKIRIDIS2017257,KHOSRAVANIAN2016280}, to permit adaptive\nselection of its global and local search operations. We detail the design of\nthis combined algorithm and evaluate it through experiments on multiple\nsynthetic and benchmark problems. We conclude that fuzzy adaptive selection of\nglobal and local search operations is, at least, feasible as it performs only\nsecond-best to a discrete variant of PSO, called DPSO. Concerning obtaining the\nbest mean test suite size, the fuzzy adaptation even outperforms DPSO\noccasionally. We discuss the reasons behind this performance and outline\nrelevant areas of future work.\n",
"title": "Fuzzy Adaptive Tuning of a Particle Swarm Optimization Algorithm for Variable-Strength Combinatorial Test Suite Generation"
} | null | null | null | null | true | null | 19476 | null | Default | null | null |
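The record above adapts the exploration/exploitation balance of particle swarm optimization with a fuzzy inference system. For context, a bare-bones PSO update on a continuous toy objective is sketched below; the fuzzy adaptation of the coefficients and the discrete test-suite encoding used in the paper are not included, and the coefficient values are conventional defaults rather than the paper's settings.

```python
# Bare-bones PSO on a continuous toy objective (sphere function); no fuzzy adaptation,
# no combinatorial encoding; w, c1, c2 are conventional default coefficients.
import numpy as np

def pso(objective, dim=5, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=5):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.apply_along_axis(objective, 1, x)
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        vals = np.apply_along_axis(objective, 1, x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

best_x, best_val = pso(lambda z: float(np.sum(z ** 2)))
print(best_val)
```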
null | {
"abstract": " We consider the habitability of Earth-analogs around stars of different\nmasses, which is regulated by the stellar lifetime, stellar wind-induced\natmospheric erosion, and biologically active ultraviolet (UV) irradiance. By\nestimating the timescales associated with each of these processes, we show that\nthey collectively impose limits on the habitability of Earth-analogs. We\nconclude that planets orbiting most M-dwarfs are not likely to host life, and\nthat the highest probability of complex biospheres is for planets around K- and\nG-type stars. Our analysis suggests that the current existence of life near the\nSun is slightly unusual, but not significantly anomalous.\n",
"title": "Is Life Most Likely Around Sun-like Stars?"
} | null | null | null | null | true | null | 19477 | null | Default | null | null |
null | {
"abstract": " Unsupervised learning is about capturing dependencies between variables and\nis driven by the contrast between the probable vs. improbable configurations of\nthese variables, often either via a generative model that only samples probable\nones or with an energy function (unnormalized log-density) that is low for\nprobable ones and high for improbable ones. Here, we consider learning both an\nenergy function and an efficient approximate sampling mechanism. Whereas the\ndiscriminator in generative adversarial networks (GANs) learns to separate data\nand generator samples, introducing an entropy maximization regularizer on the\ngenerator can turn the interpretation of the critic into an energy function,\nwhich separates the training distribution from everything else, and thus can be\nused for tasks like anomaly or novelty detection. Then, we show how Markov\nChain Monte Carlo can be done in the generator latent space whose samples can\nbe mapped to data space, producing better samples. These samples are used for\nthe negative phase gradient required to estimate the log-likelihood gradient of\nthe data space energy function. To maximize entropy at the output of the\ngenerator, we take advantage of recently introduced neural estimators of mutual\ninformation. We find that in addition to producing a useful scoring function\nfor anomaly detection, the resulting approach produces sharp samples while\ncovering the modes well, leading to high Inception and Frechet scores.\n",
"title": "Maximum Entropy Generators for Energy-Based Models"
} | null | null | [
"Computer Science",
"Statistics"
]
| null | true | null | 19478 | null | Validated | null | null |
null | {
"abstract": " Convolutional neural networks (CNNs) have shown promising results on several\nsegmentation tasks in magnetic resonance (MR) images. However, the accuracy of\nCNNs may degrade severely when segmenting images acquired with different\nscanners and/or protocols as compared to the training data, thus limiting their\npractical utility. We address this shortcoming in a lifelong multi-domain\nlearning setting by treating images acquired with different scanners or\nprotocols as samples from different, but related domains. Our solution is a\nsingle CNN with shared convolutional filters and domain-specific batch\nnormalization layers, which can be tuned to new domains with only a few\n($\\approx$ 4) labelled images. Importantly, this is achieved while retaining\nperformance on the older domains whose training data may no longer be\navailable. We evaluate the method for brain structure segmentation in MR\nimages. Results demonstrate that the proposed method largely closes the gap to\nthe benchmark, which is training a dedicated CNN for each scanner.\n",
"title": "A Lifelong Learning Approach to Brain MR Segmentation Across Scanners and Protocols"
} | null | null | null | null | true | null | 19479 | null | Default | null | null |
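The record above keeps convolutional filters shared across scanners while giving each domain its own batch-normalization layers. One plausible PyTorch realisation of that idea is sketched below; the layer sizes and the way domains are keyed are assumptions for illustration, not the paper's implementation.

```python
# Shared convolutions + domain-specific batch normalization (illustrative sketch).
import torch
import torch.nn as nn

class DomainAdaptiveBlock(nn.Module):
    def __init__(self, in_ch, out_ch, domains=("scannerA", "scannerB")):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)   # shared weights
        self.bn = nn.ModuleDict({d: nn.BatchNorm2d(out_ch) for d in domains})

    def forward(self, x, domain):
        return torch.relu(self.bn[domain](self.conv(x)))

    def add_domain(self, domain, out_ch):
        # tuning to a new scanner/protocol: add (and later train) only its BN layer
        self.bn[domain] = nn.BatchNorm2d(out_ch)

block = DomainAdaptiveBlock(1, 16)
x = torch.randn(2, 1, 64, 64)
y = block(x, "scannerA")                      # (2, 16, 64, 64)
```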
null | {
"abstract": " In this paper, we address the rigid body pose stabilization problem using\ndual quaternion formalism. We propose a hybrid control strategy to design a\nswitching control law with hysteresis in such a way that the global asymptotic\nstability of the closed-loop system is guaranteed and such that the global\nattractivity of the stabilization pose does not exhibit chattering, a problem\nthat is present in all discontinuous-based feedback controllers. Using\nnumerical simulations, we illustrate the problems that arise from existing\nresults in the literature -- as unwinding and chattering -- and verify the\neffectiveness of the proposed controller to solve the robust global pose\nstability problem.\n",
"title": "Hybrid Kinematic Control for Rigid Body Pose Stabilization using Dual Quaternions"
} | null | null | null | null | true | null | 19480 | null | Default | null | null |
null | {
"abstract": " In this paper we propose definitions and examples of categorical enhancements\nof the data involved in the $2d$-$4d$ wall-crossing formulas which generalize\nboth Cecotti-Vafa and Kontsevich-Soibelman motivic wall-crossing formulas.\n",
"title": "On 2d-4d motivic wall-crossing formulas"
} | null | null | [
"Mathematics"
]
| null | true | null | 19481 | null | Validated | null | null |
null | {
"abstract": " We study the relationship between performance and practice by analyzing the\nactivity of many players of a casual online game. We find significant\nheterogeneity in the improvement of player performance, given by score, and\naddress this by dividing players into similar skill levels and segmenting each\nplayer's activity into sessions, i.e., sequence of game rounds without an\nextended break. After disaggregating data, we find that performance improves\nwith practice across all skill levels. More interestingly, players are more\nlikely to end their session after an especially large improvement, leading to a\npeak score in their very last game of a session. In addition, success is\nstrongly correlated with a lower quitting rate when the score drops, and only\nweakly correlated with skill, in line with psychological findings about the\nvalue of persistence and \"grit\": successful players are those who persist in\ntheir practice despite lower scores. Finally, we train an epsilon-machine, a\ntype of hidden Markov model, and find a plausible mechanism of game play that\ncan predict player performance and quitting the game. Our work raises the\npossibility of real-time assessment and behavior prediction that can be used to\noptimize human performance.\n",
"title": "On Quitting: Performance and Practice in Online Game Play"
} | null | null | null | null | true | null | 19482 | null | Default | null | null |
null | {
"abstract": " Beyond traditional security methods, unmanned aerial vehicles (UAVs) have\nbecome an important surveillance tool used in security domains to collect the\nrequired annotated data. However, collecting annotated data from videos taken\nby UAVs efficiently, and using these data to build datasets that can be used\nfor learning payoffs or adversary behaviors in game-theoretic approaches and\nsecurity applications, is an under-explored research question. This paper\npresents VIOLA, a novel labeling application that includes (i) a workload\ndistribution framework to efficiently gather human labels from videos in a\nsecured manner; (ii) a software interface with features designed for labeling\nvideos taken by UAVs in the domain of wildlife security. We also present the\nevolution of VIOLA and analyze how the changes made in the development process\nrelate to the efficiency of labeling, including when seemingly obvious\nimprovements did not lead to increased efficiency. VIOLA enables collecting\nmassive amounts of data with detailed information from challenging security\nvideos such as those collected aboard UAVs for wildlife security. VIOLA will\nlead to the development of new approaches that integrate deep learning for\nreal-time detection and response.\n",
"title": "Video Labeling for Automatic Video Surveillance in Security Domains"
} | null | null | null | null | true | null | 19483 | null | Default | null | null |
null | {
"abstract": " In this paper, we extend two classical results about the density of subgraphs\nof hypercubes to subgraphs $G$ of Cartesian products $G_1\\times\\cdots\\times\nG_m$ of arbitrary connected graphs. Namely, we show that\n$\\frac{|E(G)|}{|V(G)|}\\le \\lceil 2\\max\\{\n\\text{dens}(G_1),\\ldots,\\text{dens}(G_m)\\} \\rceil\\log|V(G)|$, where\n$\\text{dens}(H)$ is the maximum ratio $\\frac{|E(H')|}{|V(H')|}$ over all\nsubgraphs $H'$ of $H$. We introduce the notions of VC-dimension\n$\\text{VC-dim}(G)$ and VC-density $\\text{VC-dens}(G)$ of a subgraph $G$ of a\nCartesian product $G_1\\times\\cdots\\times G_m$, generalizing the classical\nVapnik-Chervonenkis dimension of set-families (viewed as subgraphs of\nhypercubes). We prove that if $G_1,\\ldots,G_m$ belong to the class ${\\mathcal\nG}(H)$ of all finite connected graphs not containing a given graph $H$ as a\nminor, then for any subgraph $G$ of $G_1\\times\\cdots\\times G_m$ a sharper\ninequality $\\frac{|E(G)|}{|V(G)|}\\le \\text{VC-dim}(G)\\alpha(H)$ holds, where\n$\\alpha(H)$ is the density of the graphs from ${\\mathcal G}(H)$. We refine and\nsharpen those two results to several specific graph classes. We also derive\nupper bounds (some of them polylogarithmic) for the size of adjacency labeling\nschemes of subgraphs of Cartesian products.\n",
"title": "On density of subgraphs of Cartesian products"
} | null | null | null | null | true | null | 19484 | null | Default | null | null |
null | {
"abstract": " In this paper, we use replica analysis to determine the investment strategy\nthat can maximize the net present value for portfolios containing multiple\ndevelopment projects. Replica analysis was developed in statistical mechanical\ninformatics and econophysics to evaluate disordered systems, and here we use it\nto formulate the maximization of the net present value as an optimization\nproblem under budget and investment concentration constraints. Furthermore, we\nconfirm that a common approach from operations research underestimates the true\nmaximal net present value as the maximal expected net present value by\ncomparing our results with the maximal expected net present value as derived in\noperations research. Moreover, it is shown that the conventional method for\nestimating the net present value does not consider variance in the cash flow.\n",
"title": "Replica Analysis for Maximization of Net Present Value"
} | null | null | null | null | true | null | 19485 | null | Default | null | null |
null | {
"abstract": " We discuss the local properties of weak solutions to the equation $-\\Delta u\n+ b\\cdot\\nabla u=0$. The corresponding theory is well-known in the case $b\\in\nL_n$, where $n$ is the dimension of the space. Our main interest is focused on\nthe case $b\\in L_2$. In this case the structure assumption $\\operatorname{div}\nb=0$ turns out to be crucial.\n",
"title": "On some properties of weak solutions to elliptic equations with divergence-free drifts"
} | null | null | null | null | true | null | 19486 | null | Default | null | null |
null | {
"abstract": " The real vector space of non-oriented graphs is known to carry a differential\ngraded Lie algebra structure. Cocycles in the Kontsevich graph complex,\nexpressed using formal sums of graphs on $n$ vertices and $2n-2$ edges, induce\n-- under the orientation mapping -- infinitesimal symmetries of classical\nPoisson structures on arbitrary finite-dimensional affine real manifolds.\nWillwacher has stated the existence of a nontrivial cocycle that contains the\n$(2\\ell+1)$-wheel graph with a nonzero coefficient at every\n$\\ell\\in\\mathbb{N}$. We present detailed calculations of the differential of\ngraphs; for the tetrahedron and pentagon-wheel cocycles, consisting at $\\ell =\n1$ and $\\ell = 2$ of one and two graphs respectively, the cocycle condition\n$d(\\gamma) = 0$ is verified by hand. For the next, heptagon-wheel cocycle\n(known to exist at $\\ell = 3$), we provide an explicit representative: it\nconsists of 46 graphs on 8 vertices and 14 edges.\n",
"title": "The heptagon-wheel cocycle in the Kontsevich graph complex"
} | null | null | [
"Mathematics"
]
| null | true | null | 19487 | null | Validated | null | null |
null | {
"abstract": " In order to address the need for an affordable reduced gravity test platform,\nthis work focuses on the analysis and implementation of atmospheric\nacceleration tracking with an autonomous aerial vehicle. As proof of concept,\nthe vehicle is designed with the objective of flying accurate reduced-gravity\nparabolas. Suggestions from both academia and industry were taken into account,\nas well as requirements imposed by a regulatory agency. The novelty of this\nwork is the Proportional Integral Ramp Quadratic PIRQ controller, which is\nemployed to counteract the aerodynamic forces impeding the vehicles constant\nacceleration during the maneuver. The stability of the free-fall maneuver under\nthis controller is studied in detail via the formation of the transverse\ndynamics and the application of the circle criterion. The implementation of\nsuch a controller is then outlined, and the PIRQ controller is validated\nthrough a flight test, where the vehicle successfully tracks Martian gravity\n0.378 G's with a standard deviation of 0.0426.\n",
"title": "Maneuver Regulation for Accelerating Bodies in Atmospheric Environments"
} | null | null | null | null | true | null | 19488 | null | Default | null | null |
null | {
"abstract": " Texture classification is a problem that has various applications such as\nremote sensing and forest species recognition. Solutions tend to be custom fit\nto the dataset used but fails to generalize. The Convolutional Neural Network\n(CNN) in combination with Support Vector Machine (SVM) form a robust selection\nbetween powerful invariant feature extractor and accurate classifier. The\nfusion of experts provides stability in classification rates among different\ndatasets.\n",
"title": "A Hybrid Deep Learning Approach for Texture Analysis"
} | null | null | null | null | true | null | 19489 | null | Default | null | null |
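The texture record above combines a CNN feature extractor with an SVM classifier. A minimal sketch of that fusion using a pretrained torchvision backbone and scikit-learn's SVC is given below; the choice of backbone, feature layer and SVM kernel are assumptions, not the paper's configuration, and the weights API assumes a recent torchvision release.

```python
# CNN features -> SVM classifier (illustrative; backbone/kernel choices are assumptions).
import torch
import torch.nn as nn
from torchvision import models
from sklearn.svm import SVC

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = nn.Identity()                   # drop the classification head, keep features
backbone.eval()

def extract_features(images):                 # images: (N, 3, 224, 224) tensor
    with torch.no_grad():
        return backbone(images).numpy()       # (N, 512) feature vectors

# toy stand-in data: random "texture" images and binary labels
X_img = torch.randn(20, 3, 224, 224)
y = [0] * 10 + [1] * 10
clf = SVC(kernel="rbf").fit(extract_features(X_img), y)
```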
null | {
"abstract": " Contingent Convertible bonds (CoCos) are debt instruments that convert into\nequity or are written down in times of distress. Existing pricing models assume\nconversion triggers based on market prices and on the assumption that markets\ncan always observe all relevant firm information. But all Cocos issued so far\nhave triggers based on accounting ratios and/or regulatory intervention. We\nincorporate that markets receive information through noisy accounting reports\nissued at discrete time instants, which allows us to distinguish between market\nand accounting values, and between automatic triggers and regulator-mandated\nconversions. Our second contribution is to incorporate that coupon payments are\ncontingent too: their payment is conditional on the Maximum Distributable\nAmount not being exceeded. We examine the impact of CoCo design parameters,\nasset volatility and accounting noise on the price of a CoCo; and investigate\nthe interaction between CoCo design features, the capital structure of the\nissuing bank and their implications for risk taking and investment incentives.\nFinally, we use our model to explain the crash in CoCo prices after Deutsche\nBank's profit warning in February 2016.\n",
"title": "Accounting Noise and the Pricing of CoCos"
} | null | null | null | null | true | null | 19490 | null | Default | null | null |
null | {
"abstract": " E-generalization computes common generalizations of given ground terms w.r.t.\na given equational background theory E. In 2005 [arXiv:1403.8118], we had\npresented a computation approach based on standard regular tree grammar\nalgorithms, and a Prolog prototype implementation. In this report, we present\nalgorithmic improvements, prove them correct and complete, and give some\ndetails of an efficiency-oriented implementation in C that allows us to handle\nproblems larger by several orders of magnitude.\n",
"title": "An Improved Algorithm for E-Generalization"
} | null | null | null | null | true | null | 19491 | null | Default | null | null |
null | {
"abstract": " We study quartic double fivefolds from the perspective of Fano manifolds of\nCalabi-Yau type and that of exceptional quaternionic representations. We first\nprove that the generic quartic double fivefold can be represented, in a finite\nnumber of ways, as a double cover of P^5 ramified along a linear section of the\nSp 12-invariant quartic in P^31. Then, using the geometry of the Vinberg's type\nII decomposition of some exceptional quaternionic representations, and backed\nby some cohomological computations performed by Macaulay2, we prove the\nexistence of a spherical rank 6 vector bundle on such a generic quartic double\nfivefold. We finally use the existence this vector bundle to prove that the\nhomological unit of the CY-3 category associated by Kuznetsov to the derived\ncategory of a generic quartic double fivefold is C $\\oplus$ C[3].\n",
"title": "On quartic double fivefolds and the matrix factorizations of exceptional quaternionic representations"
} | null | null | null | null | true | null | 19492 | null | Default | null | null |
null | {
"abstract": " Thermal atmospheric tides can torque telluric planets away from spin-orbit\nsynchronous rotation, as observed in the case of Venus. They thus participate\nto determine the possible climates and general circulations of the atmospheres\nof these planets. In this work, we write the equations governing the dynamics\nof thermal tides in a local vertically-stratified section of a rotating\nplanetary atmosphere by taking into account the effects of the complete\nCoriolis acceleration on tidal waves. This allows us to derive analytically the\ntidal torque and the tidally dissipated energy, which we use to discuss the\npossible regimes of tidal dissipation and examine the key role played by\nstratification.\nIn agreement with early studies, we find that the frequency dependence of the\nthermal atmospheric tidal torque in the vicinity of synchronization can be\napproximated by a Maxwell model. This behaviour corresponds to weakly stably\nstratified or convective fluid layers, as observed in ADLM2016a. A strong\nstable stratification allows gravity waves to propagate, which makes the tidal\ntorque become negligible. The transition is continuous between these two\nregimes. The traditional approximation appears to be valid in thin atmospheres\nand in regimes where the rotation frequency is dominated by the forcing or the\nbuoyancy frequencies.\nDepending on the stability of their atmospheres with respect to convection,\nobserved exoplanets can be tidally driven toward synchronous or asynchronous\nfinal rotation rates. The domain of applicability of the traditional\napproximation is rigorously constrained by calculations.\n",
"title": "Atmospheric thermal tides and planetary spin I. The complex interplay between stratification and rotation"
} | null | null | null | null | true | null | 19493 | null | Default | null | null |
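The record above finds that, near synchronization, the frequency dependence of the thermal atmospheric tidal torque is well approximated by a Maxwell model. One common way to write such a frequency dependence, stated generically rather than in the paper's notation, is:

```latex
% Generic Maxwell-model frequency dependence of the tidal torque near synchronization
% (assumed notation): \sigma is the tidal forcing frequency, \sigma_M the Maxwell frequency.
T(\sigma) \;\propto\; \frac{\sigma/\sigma_M}{1 + \left(\sigma/\sigma_M\right)^{2}} .
```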
null | {
"abstract": " Obtaining enough labeled data to robustly train complex discriminative models\nis a major bottleneck in the machine learning pipeline. A popular solution is\ncombining multiple sources of weak supervision using generative models. The\nstructure of these models affects training label quality, but is difficult to\nlearn without any ground truth labels. We instead rely on these weak\nsupervision sources having some structure by virtue of being encoded\nprogrammatically. We present Coral, a paradigm that infers generative model\nstructure by statically analyzing the code for these heuristics, thus reducing\nthe data required to learn structure significantly. We prove that Coral's\nsample complexity scales quasilinearly with the number of heuristics and number\nof relations found, improving over the standard sample complexity, which is\nexponential in $n$ for identifying $n^{\\textrm{th}}$ degree relations.\nExperimentally, Coral matches or outperforms traditional structure learning\napproaches by up to 3.81 F1 points. Using Coral to model dependencies instead\nof assuming independence results in better performance than a fully supervised\nmodel by 3.07 accuracy points when heuristics are used to label radiology data\nwithout ground truth labels.\n",
"title": "Inferring Generative Model Structure with Static Analysis"
} | null | null | [
"Computer Science",
"Statistics"
]
| null | true | null | 19494 | null | Validated | null | null |
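The following is a rough, hedged sketch of the idea described in the abstract above: because weak-supervision heuristics are encoded programmatically, static analysis of their code can reveal shared primitives and hence candidate dependencies. The heuristic names and the attribute-based notion of a "primitive" are illustrative assumptions, not Coral's actual analysis.

```python
import ast
import inspect
import itertools

def primitives_used(fn):
    """Collect the attribute names that a heuristic reads off its first argument."""
    tree = ast.parse(inspect.getsource(fn))
    arg_name = tree.body[0].args.args[0].arg
    return {node.attr for node in ast.walk(tree)
            if isinstance(node, ast.Attribute)
            and isinstance(node.value, ast.Name)
            and node.value.id == arg_name}

def inferred_dependencies(heuristics):
    """Flag pairs of heuristics sharing at least one primitive as potentially dependent."""
    used = {h.__name__: primitives_used(h) for h in heuristics}
    return [(a, b) for a, b in itertools.combinations(used, 2) if used[a] & used[b]]

# Hypothetical heuristics over an object `x` exposing primitive fields:
def hf_large_area(x):
    return 1 if x.area > 210 else -1

def hf_compact(x):
    return 1 if x.area > 150 and x.perimeter < 80 else 0

print(inferred_dependencies([hf_large_area, hf_compact]))  # [('hf_large_area', 'hf_compact')]
```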
null | {
"abstract": " We provide an integral formula for the Maslov index of a pair $(E,F)$ over a\nsurface $\\Sigma$, where $E\\rightarrow\\Sigma$ is a complex vector bundle and\n$F\\subset E_{|\\partial\\Sigma}$ is a totally real subbundle. As in Chern-Weil\ntheory, this formula is written in terms of the curvature of $E$ plus a\nboundary contribution.\nWhen $(E,F)$ is obtained via an immersion of $(\\Sigma,\\partial\\Sigma)$ into a\npair $(M,L)$ where $M$ is Kähler and $L$ is totally real, the formula allows\nus to control the Maslov index in terms of the geometry of $(M,L)$. We exhibit\nnatural conditions on $(M,L)$ which lead to bounds and monotonicity results.\n",
"title": "Maslov, Chern-Weil and Mean Curvature"
} | null | null | [
"Mathematics"
]
| null | true | null | 19495 | null | Validated | null | null |
null | {
"abstract": " As modern precision cosmological measurements continue to show agreement with\nthe broad features of the standard $\\Lambda$-Cold Dark Matter ($\\Lambda$CDM)\ncosmological model, we are increasingly motivated to look for small departures\nfrom the standard model's predictions which might not be detected with standard\napproaches. While searches for extensions and modifications of $\\Lambda$CDM\nhave to date turned up no convincing evidence of beyond-the-standard-model\ncosmology, the list of models compared against $\\Lambda$CDM is by no means\ncomplete and is often governed by readily-coded modifications to standard\nBoltzmann codes. Also, standard goodness-of-fit methods such as a naive\n$\\chi^2$ test fail to put strong pressure on the null $\\Lambda$CDM hypothesis,\nsince modern datasets have orders of magnitudes more degrees of freedom than\n$\\Lambda$CDM. Here we present a method of tuning goodness-of-fit tests to\ndetect potential sub-dominant extra-$\\Lambda$CDM signals present in the data\nthrough compressing observations in a way that maximizes extra-$\\Lambda$CDM\nsignal variation over noise and $\\Lambda$CDM variation. This method, based on a\nKarhunen-Loève transformation of the data, is tuned to be maximally\nsensitive to particular types of variations characteristic of the tuning model;\nbut, unlike direct model comparison, the test is also sensitive to features\nthat only partially mimic the tuning model. As an example of its use, we apply\nthis method in the context of a nonstandard primordial power spectrum compared\nagainst the $2015$ $Planck$ CMB temperature and polarization power spectrum. We\nfind weak evidence of extra-$\\Lambda$CDM physics, conceivably due to known\nsystematics in the 2015 Planck polarization release.\n",
"title": "Tuning Goodness-of-Fit Tests"
} | null | null | null | null | true | null | 19496 | null | Default | null | null |
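A minimal numpy/scipy sketch of the kind of Karhunen-Loève compression the abstract above describes: solve a generalized eigenproblem to find linear modes that maximize extra-model signal variance relative to the baseline (noise plus $\Lambda$CDM) variance, then evaluate a goodness-of-fit statistic in the compressed space. The covariance matrices and mode count are placeholders, not the paper's Planck pipeline.

```python
import numpy as np
from scipy.linalg import eigh

def kl_modes(C_extra, C_baseline, n_modes=5):
    """Leading solutions of C_extra v = lambda C_baseline v maximize the ratio of
    extra-model signal variance to baseline (noise + fiducial-model) variance."""
    evals, evecs = eigh(C_extra, C_baseline)      # generalized eigenproblem, ascending order
    order = np.argsort(evals)[::-1]
    return evecs[:, order[:n_modes]]              # columns are the compression modes

def compressed_chi2(residual, C_baseline, modes):
    """Chi^2 of the baseline-model residual after projecting onto the KL modes."""
    y = modes.T @ residual
    C_y = modes.T @ C_baseline @ modes
    return float(y @ np.linalg.solve(C_y, y))
```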
null | {
"abstract": " Morava $E$-theory $E$ is an $E_\\infty$-ring with an action of the Morava\nstabilizer group $\\Gamma$. We study the derived stack $\\operatorname{Spf}\nE/\\Gamma$. Descent-theoretic techniques allow us to deduce a theorem of\nHopkins-Mahowald-Sadofsky on the $K(n)$-local Picard group, as well as a recent\nresult of Barthel-Beaudry-Stojanoska on the Anderson duals of higher real\n$K$-theories.\n",
"title": "The Lubin-Tate stack and Gross-Hopkins duality"
} | null | null | null | null | true | null | 19497 | null | Default | null | null |
null | {
"abstract": " We consider numerical schemes for root finding of noisy responses through\ngeneralizing the Probabilistic Bisection Algorithm (PBA) to the more practical\ncontext where the sampling distribution is unknown and location-dependent. As\nin standard PBA, we rely on a knowledge state for the approximate posterior of\nthe root location. To implement the corresponding Bayesian updating, we also\ncarry out inference of oracle accuracy, namely learning the probability of\ncorrect response. To this end we utilize batched querying in combination with a\nvariety of frequentist and Bayesian estimators based on majority vote, as well\nas the underlying functional responses, if available. For guiding sampling\nselection we investigate both Information Directed sampling, as well as\nQuantile sampling. Our numerical experiments show that these strategies perform\nquite differently; in particular we demonstrate the efficiency of randomized\nquantile sampling which is reminiscent of Thompson sampling. Our work is\nmotivated by the root-finding sub-routine in pricing of Bermudan financial\nderivatives, illustrated in the last section of the paper.\n",
"title": "Generalized Probabilistic Bisection for Stochastic Root-Finding"
} | null | null | null | null | true | null | 19498 | null | Default | null | null |
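A hedged sketch of the kind of procedure the abstract above describes: probabilistic bisection driven by batched majority-vote queries to a noisy sign oracle, with a crude plug-in estimate of the oracle's accuracy. The grid, batch size, update rule, and accuracy estimator are illustrative choices, not the estimators or sampling policies studied in the paper.

```python
import numpy as np

def noisy_sign(f, x, p_correct, rng):
    """Return sign(f(x)) with probability p_correct, the opposite sign otherwise."""
    s = 1.0 if f(x) > 0 else -1.0
    return s if rng.random() < p_correct else -s

def batched_pba(f, grid, n_rounds=40, batch=15, p_correct=0.7, seed=0):
    """Probabilistic bisection for the root of an increasing f on `grid`,
    using batched majority-vote queries to a noisy sign oracle."""
    rng = np.random.default_rng(seed)
    posterior = np.full(len(grid), 1.0 / len(grid))   # knowledge state over the root location
    for _ in range(n_rounds):
        # Query the posterior median, as in standard PBA.
        x = grid[np.searchsorted(np.cumsum(posterior), 0.5)]
        # Batched querying: majority vote over `batch` noisy responses (batch kept odd).
        votes = sum(noisy_sign(f, x, p_correct, rng) for _ in range(batch))
        # Crude frequentist proxy for the (unknown) accuracy of the majority vote.
        q = min(1.0 - 1e-3, 0.5 + abs(votes) / (2.0 * batch))
        # Positive majority => f(x) > 0 => the root of an increasing f lies to the left of x.
        left = grid < x
        if votes > 0:
            posterior = np.where(left, q * posterior, (1.0 - q) * posterior)
        else:
            posterior = np.where(left, (1.0 - q) * posterior, q * posterior)
        posterior /= posterior.sum()
    return grid[np.argmax(posterior)]

# e.g. batched_pba(lambda x: x - 0.3, np.linspace(0.0, 1.0, 2001)) is close to 0.3
```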
null | {
"abstract": " Inspired by the recent evolution of deep neural networks (DNNs) in machine\nlearning, we explore their application to PL-related topics. This paper is the\nfirst step towards this goal; we propose a proof-synthesis method for the\nnegation-free propositional logic in which we use a DNN to obtain a guide of\nproof search. The idea is to view the proof-synthesis problem as a translation\nfrom a proposition to its proof. We train seq2seq, which is a popular network\nin neural machine translation, so that it generates a proof encoded as a\n$\\lambda$-term of a given proposition. We implement the whole framework and\nempirically observe that a generated proof term is close to a correct proof in\nterms of the tree edit distance of AST. This observation justifies using the\noutput from a trained seq2seq model as a guide for proof search.\n",
"title": "Towards Proof Synthesis Guided by Neural Machine Translation for Intuitionistic Propositional Logic"
} | null | null | null | null | true | null | 19499 | null | Default | null | null |
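As a hedged illustration of the "proof synthesis as translation" framing in the abstract above, the sketch below serializes a proposition and a candidate proof term into token sequences of the kind a seq2seq model would be trained on. The grammar, tokenizer, and term syntax are assumptions made for illustration, not the paper's encoding.

```python
def tokenize(expr: str):
    """Split a proposition or lambda-term into coarse tokens."""
    expr = expr.replace("->", " -> ")
    for ch in "().\\":
        expr = expr.replace(ch, f" {ch} ")
    return expr.split()

# Proposition:  (A -> B) -> A -> B      Candidate proof term:  \f. \x. (f x)
src = tokenize("(A -> B) -> A -> B")    # ['(', 'A', '->', 'B', ')', '->', 'A', '->', 'B']
tgt = tokenize("\\f. \\x. (f x)")       # ['\\', 'f', '.', '\\', 'x', '.', '(', 'f', 'x', ')']
# A seq2seq model would be trained on many such (src, tgt) pairs; at test time its
# output term would be used as a guide for (and checked by) a proof search procedure.
```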
null | {
"abstract": " The hyperbolic Pascal triangle ${\\cal HPT}_{4,q}$ $(q\\ge5)$ is a new\nmathematical construction, which is a geometrical generalization of Pascal's\narithmetical triangle. In the present study we show that a natural pattern of\nrows of ${\\cal HPT}_{4,5}$ is almost the same as the sequence consisting of\nevery second term of the well-known Fibonacci words. Further, we give a\ngeneralization of the Fibonacci words using the hyperbolic Pascal triangles.\nThe geometrical properties of a ${\\cal HPT}_{4,q}$ imply a graph structure\nbetween the finite Fibonacci words.\n",
"title": "Fibonacci words in hyperbolic Pascal triangles"
} | null | null | [
"Computer Science"
]
| null | true | null | 19500 | null | Validated | null | null |
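For context on the sequence referenced in the abstract above, here is a minimal sketch of the classical finite Fibonacci words generated by the morphism a -> ab, b -> a; the hyperbolic Pascal triangle construction itself is not reproduced here.

```python
def fibonacci_words(n):
    """Return the first n finite Fibonacci words via the morphism a -> ab, b -> a."""
    words = ["a"]
    for _ in range(n - 1):
        words.append("".join("ab" if c == "a" else "a" for c in words[-1]))
    return words

print(fibonacci_words(6))
# ['a', 'ab', 'aba', 'abaab', 'abaababa', 'abaababaabaab']  (lengths 1, 2, 3, 5, 8, 13)
```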