text (null) | inputs (dict) | prediction (null) | prediction_agent (null) | annotation (list) | annotation_agent (null) | multi_label (bool, 1 class) | explanation (null) | id (string, lengths 1-5) | metadata (null) | status (string, 2 values) | event_timestamp (null) | metrics (null)
---|---|---|---|---|---|---|---|---|---|---|---|---|
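Each row below is one record with the schema given in the header: an `inputs` dict holding a paper's title and abstract, an optional `annotation` list of subject labels, a `multi_label` flag, a short string `id`, and a `status` that is either `Validated` or `Default`; the remaining columns are null in every row shown here. As a minimal, library-agnostic sketch of that schema (the `Record` class and the `first_row` example are illustrative names, not part of the dataset), a row could be modeled as:

```python
from dataclasses import dataclass
from typing import Dict, List, Optional

@dataclass
class Record:
    """Hypothetical container whose fields mirror the column names in the header above."""
    inputs: Dict[str, str]                  # {"title": ..., "abstract": ...}
    id: str                                 # short string id, e.g. "1201"
    multi_label: bool = True                # True for every row shown here
    annotation: Optional[List[str]] = None  # e.g. ["Mathematics"]; None when unlabeled
    status: str = "Default"                 # "Validated" or "Default"
    text: Optional[str] = None              # the columns below are null in these rows
    prediction: Optional[dict] = None
    prediction_agent: Optional[str] = None
    annotation_agent: Optional[str] = None
    explanation: Optional[str] = None
    metadata: Optional[dict] = None
    event_timestamp: Optional[str] = None
    metrics: Optional[dict] = None

# Example built from the first row below (abstract abbreviated).
first_row = Record(
    inputs={
        "title": ("An upper bound on the distinguishing index of graphs "
                  "with minimum degree at least two"),
        "abstract": "The distinguishing index of a simple graph $G$ ...",
    },
    id="1201",
    annotation=["Mathematics"],
    status="Validated",
)
print(first_row.id, first_row.annotation, first_row.status)  # 1201 ['Mathematics'] Validated
```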
null |
{
"abstract": " The distinguishing index of a simple graph $G$, denoted by $D'(G)$, is the\nleast number of labels in an edge labeling of $G$ not preserved by any\nnon-trivial automorphism. It was conjectured by Pilśniak (2015) that for any\n2-connected graph $D'(G) \\leq \\lceil \\sqrt{\\Delta (G)}\\rceil +1$. We prove a\nmore general result for the distinguishing index of graphs with minimum degree\nat least two from which the conjecture follows. Also we present graphs $G$ for\nwhich $D'(G)\\leq \\lceil \\sqrt{\\Delta }\\rceil$.\n",
"title": "An upper bound on the distinguishing index of graphs with minimum degree at least two"
}
| null | null |
[
"Mathematics"
] | null | true | null |
1201
| null |
Validated
| null | null |
null |
{
"abstract": " Pseudo healthy synthesis, i.e. the creation of a subject-specific `healthy'\nimage from a pathological one, could be helpful in tasks such as anomaly\ndetection, understanding changes induced by pathology and disease or even as\ndata augmentation. We treat this task as a factor decomposition problem: we aim\nto separate what appears to be healthy and where disease is (as a map). The two\nfactors are then recombined (by a network) to reconstruct the input disease\nimage. We train our models in an adversarial way using either paired or\nunpaired settings, where we pair disease images and maps (as segmentation\nmasks) when available. We quantitatively evaluate the quality of pseudo healthy\nimages. We show in a series of experiments, performed in ISLES and BraTS\ndatasets, that our method is better than conditional GAN and CycleGAN,\nhighlighting challenges in using adversarial methods in the image translation\ntask of pseudo healthy image generation.\n",
"title": "Adversarial Pseudo Healthy Synthesis Needs Pathology Factorization"
}
| null | null |
[
"Computer Science",
"Statistics"
] | null | true | null |
1202
| null |
Validated
| null | null |
null |
{
"abstract": " Given a poset $P$ and a standard closure operator $\\Gamma:\\wp(P)\\to\\wp(P)$ we\ngive a necessary and sufficient condition for the lattice of $\\Gamma$-closed\nsets of $\\wp(P)$ to be a frame in terms of the recursive construction of the\n$\\Gamma$-closure of sets. We use this condition to show that given a set\n$\\mathcal{U}$ of distinguished joins from $P$, the lattice of\n$\\mathcal{U}$-ideals of $P$ fails to be a frame if and only if it fails to be\n$\\sigma$-distributive, with $\\sigma$ depending on the cardinalities of sets in\n$\\mathcal{U}$. From this we deduce that if a poset has the property that\nwhenever $a\\wedge(b\\vee c)$ is defined for $a,b,c\\in P$ it is necessarily equal\nto $(a\\wedge b)\\vee (a\\wedge c)$, then it has an $(\\omega,3)$-representation.\nThis answers a question from the literature.\n",
"title": "Closure operators, frames, and neatest representations"
}
| null | null | null | null | true | null |
1203
| null |
Default
| null | null |
null |
{
"abstract": " The work is devoted to constructing a wide class of differential-functional\ndynamical systems, whose rich algebraic structure makes their integrability\nanalytically effective. In particular, there is analyzed in detail the operator\nLax type equations for factorized seed elements, there is proved an important\ntheorem about their operator factorization and the related analytical solution\nscheme to the corresponding nonlinear differential-functional dynamical\nsystems.\n",
"title": "The structure of rationally factorized Lax type flows and their analytical integrability"
}
| null | null | null | null | true | null |
1204
| null |
Default
| null | null |
null |
{
"abstract": " Modeling agent behavior is central to understanding the emergence of complex\nphenomena in multiagent systems. Prior work in agent modeling has largely been\ntask-specific and driven by hand-engineering domain-specific prior knowledge.\nWe propose a general learning framework for modeling agent behavior in any\nmultiagent system using only a handful of interaction data. Our framework casts\nagent modeling as a representation learning problem. Consequently, we construct\na novel objective inspired by imitation learning and agent identification and\ndesign an algorithm for unsupervised learning of representations of agent\npolicies. We demonstrate empirically the utility of the proposed framework in\n(i) a challenging high-dimensional competitive environment for continuous\ncontrol and (ii) a cooperative environment for communication, on supervised\npredictive tasks, unsupervised clustering, and policy optimization using deep\nreinforcement learning.\n",
"title": "Learning Policy Representations in Multiagent Systems"
}
| null | null |
[
"Statistics"
] | null | true | null |
1205
| null |
Validated
| null | null |
null |
{
"abstract": " We design a jamming-resistant receiver scheme to enhance the robustness of a\nmassive MIMO uplink system against jamming. We assume that a jammer attacks the\nsystem both in the pilot and data transmission phases. The key feature of the\nproposed scheme is that, in the pilot phase, we estimate not only the\nlegitimate channel, but also the jamming channel by exploiting a purposely\nunused pilot sequence. The jamming channel estimate is used to constructed\nlinear receive filters that reject the impact of the jamming signal. The\nperformance of the proposed scheme is analytically evaluated using asymptotic\nproperties of massive MIMO. The optimal regularized zero-forcing receiver and\nthe optimal power allocation are also studied. Numerical results are provided\nto verify our analysis and show that the proposed scheme greatly improves the\nachievable rates, as compared to conventional receivers. Interestingly, the\nproposed scheme works particularly well under strong jamming attacks, since the\nimproved estimate of the jamming channel outweighs the extra jamming power.\n",
"title": "Jamming-Resistant Receivers for the Massive MIMO Uplink"
}
| null | null | null | null | true | null |
1206
| null |
Default
| null | null |
null |
{
"abstract": " The behavior of many complex systems is determined by a core of densely\ninterconnected units. While many methods are available to identify the core of\na network when connections between nodes are all of the same type, a principled\napproach to define the core when multiple types of connectivity are allowed is\nstill lacking. Here we introduce a general framework to define and extract the\ncore-periphery structure of multi-layer networks by explicitly taking into\naccount the connectivity of the nodes at each layer. We show how our method\nworks on synthetic networks with different size, density, and overlap between\nthe cores at the different layers. We then apply the method to multiplex brain\nnetworks whose layers encode information both on the anatomical and the\nfunctional connectivity among regions of the human cortex. Results confirm the\npresence of the main known hubs, but also suggest the existence of novel brain\ncore regions that have been discarded by previous analysis which focused\nexclusively on the structural layer. Our work is a step forward in the\nidentification of the core of the human connectome, and contributes to shed\nlight to a fundamental question in modern neuroscience.\n",
"title": "Multiplex core-periphery organization of the human connectome"
}
| null | null | null | null | true | null |
1207
| null |
Default
| null | null |
null |
{
"abstract": " Clause Learning is one of the most important components of a conflict driven\nclause learning (CDCL) SAT solver that is effective on industrial instances.\nSince the number of learned clauses is proved to be exponential in the worse\ncase, it is necessary to identify the most relevant clauses to maintain and\ndelete the irrelevant ones. As reported in the literature, several learned\nclauses deletion strategies have been proposed. However the diversity in both\nthe number of clauses to be removed at each step of reduction and the results\nobtained with each strategy creates confusion to determine which criterion is\nbetter. Thus, the problem to select which learned clauses are to be removed\nduring the search step remains very challenging. In this paper, we propose a\nnovel approach to identify the most relevant learned clauses without favoring\nor excluding any of the proposed measures, but by adopting the notion of\ndominance relationship among those measures. Our approach bypasses the problem\nof the diversity of results and reaches a compromise between the assessments of\nthese measures. Furthermore, the proposed approach also avoids another\nnon-trivial problem which is the amount of clauses to be deleted at each\nreduction of the learned clause database.\n",
"title": "Towards Learned Clauses Database Reduction Strategies Based on Dominance Relationship"
}
| null | null | null | null | true | null |
1208
| null |
Default
| null | null |
null |
{
"abstract": " We consider the squared singular values of the product of $M$ standard\ncomplex Gaussian matrices. Since the squared singular values form a\ndeterminantal point process with a particular Meijer G-function kernel, the gap\nprobabilities are given by a Fredholm determinant based on this kernel. It was\nshown by Strahov \\cite{St14} that a hard edge scaling limit of the gap\nprobabilities is described by Hamiltonian differential equations which can be\nformulated as an isomonodromic deformation system similar to the theory of the\nKyoto school. We generalize this result to the case of finite matrices by first\nfinding a representation of the finite kernel in integrable form. As a result\nwe obtain the Hamiltonian structure for a finite size matrices and formulate it\nin terms of a $(M+1) \\times (M+1)$ matrix Schlesinger system. The case $M=1$\nreproduces the Tracy and Widom theory which results in the Painlevé V\nequation for the $(0,s)$ gap probability. Some integrals of motion for $M = 2$\nare identified, and a coupled system of differential equations in two unknowns\nis presented which uniquely determines the corresponding $(0,s)$ gap\nprobability.\n",
"title": "Integrable structure of products of finite complex Ginibre random matrices"
}
| null | null | null | null | true | null |
1209
| null |
Default
| null | null |
null |
{
"abstract": " In this paper, we consider the 3D primitive equations of oceanic and\natmospheric dynamics with only horizontal eddy viscosities in the horizontal\nmomentum equations and only vertical diffusivity in the temperature equation.\nGlobal well-posedness of strong solutions is established for any initial data\nsuch that the initial horizontal velocity $v_0\\in H^2(\\Omega)$ and the initial\ntemperature $T_0\\in H^1(\\Omega)\\cap L^\\infty(\\Omega)$ with $\\nabla_HT_0\\in\nL^q(\\Omega)$, for some $q\\in(2,\\infty)$. Moreover, the strong solutions enjoy\ncorrespondingly more regularities if the initial temperature belongs to\n$H^2(\\Omega)$. The main difficulties are the absence of the vertical viscosity\nand the lack of the horizontal diffusivity, which, interact with each other,\nthus causing the \"\\,mismatching\\,\" of regularities between the horizontal\nmomentum and temperature equations. To handle this \"mismatching\" of\nregularities, we introduce several auxiliary functions, i.e., $\\eta, \\theta,\n\\varphi,$ and $\\psi$ in the paper, which are the horizontal curls or some\nappropriate combinations of the temperature with the horizontal divergences of\nthe horizontal velocity $v$ or its vertical derivative $\\partial_zv$. To\novercome the difficulties caused by the absence of the horizontal diffusivity,\nwhich leads to the requirement of some $L^1_t(W^{1,\\infty}_\\textbf{x})$-type a\npriori estimates on $v$, we decompose the velocity into the\n\"temperature-independent\" and temperature-dependent parts and deal with them in\ndifferent ways, by using the logarithmic Sobolev inequalities of the\nBrézis-Gallouet-Wainger and Beale-Kato-Majda types, respectively.\nSpecifically, a logarithmic Sobolev inequality of the limiting type, introduced\nin our previous work [12], is used, and a new logarithmic type Gronwall\ninequality is exploited.\n",
"title": "Global well-posedness of the 3D primitive equations with horizontal viscosity and vertical diffusivity"
}
| null | null | null | null | true | null |
1210
| null |
Default
| null | null |
null |
{
"abstract": " The Eisenhart geometric formalism, which transforms an Euclidean natural\nHamiltonian $H=T+V$ into a geodesic Hamiltonian ${\\cal T}$ with one additional\ndegree of freedom, is applied to the four families of quadratically\nsuperintegrable systems with multiple separability in the Euclidean plane.\nFirstly, the separability and superintegrability of such four geodesic\nHamiltonians ${\\cal T}_r$ ($r=a,b,c,d$) in a three-dimensional curved space are\nstudied and then these four systems are modified with the addition of a\npotential ${\\cal U}_r$ leading to ${\\cal H}_r={\\cal T}_r +{\\cal U}_r$.\nSecondly, we study the superintegrability of the four Hamiltonians\n$\\widetilde{\\cal H}_r= {\\cal H}_r/ \\mu_r$, where $\\mu_r$ is a certain\nposition-dependent mass, that enjoys the same separability as the original\nsystem ${\\cal H}_r$. All the Hamiltonians here studied describe superintegrable\nsystems on non-Euclidean three-dimensional manifolds with a broken spherically\nsymmetry.\n",
"title": "Superintegrable systems on 3-dimensional curved spaces: Eisenhart formalism and separability"
}
| null | null | null | null | true | null |
1211
| null |
Default
| null | null |
null |
{
"abstract": " This paper formulates a time-varying social-welfare maximization problem for\ndistribution grids with distributed energy resources (DERs) and develops online\ndistributed algorithms to identify (and track) its solutions. In the considered\nsetting, network operator and DER-owners pursue given operational and economic\nobjectives, while concurrently ensuring that voltages are within prescribed\nlimits. The proposed algorithm affords an online implementation to enable\ntracking of the solutions in the presence of time-varying operational\nconditions and changing optimization objectives. It involves a strategy where\nthe network operator collects voltage measurements throughout the feeder to\nbuild incentive signals for the DER-owners in real time; DERs then adjust the\ngenerated/consumed powers in order to avoid the violation of the voltage\nconstraints while maximizing given objectives. The stability of the proposed\nschemes is analytically established and numerically corroborated.\n",
"title": "An Incentive-Based Online Optimization Framework for Distribution Grids"
}
| null | null | null | null | true | null |
1212
| null |
Default
| null | null |
null |
{
"abstract": " The object of the present paper is to study some types of Ricci\npseudosymmetric $(LCS)_n$-manifolds whose metric is Ricci soliton. We found the\nconditions when Ricci soliton on concircular Ricci pseudosymmetric, projective\nRicci pseudosymmetric, $W_{3}$-Ricci pseudosymmetric, conharmonic Ricci\npseudosymmetric, conformal Ricci pseudosymmetric $(LCS)_n$-manifolds to be\nshrinking, steady and expanding. We also construct an example of concircular\nRicci pseudosymmetric $(LCS)_3$-manifold whose metric is Ricci soliton.\n",
"title": "Ricci solitons on Ricci pseudosymmetric $(LCS)_n$-manifolds"
}
| null | null | null | null | true | null |
1213
| null |
Default
| null | null |
null |
{
"abstract": " This paper gives drastically faster gossip algorithms to compute exact and\napproximate quantiles.\nGossip algorithms, which allow each node to contact a uniformly random other\nnode in each round, have been intensely studied and been adopted in many\napplications due to their fast convergence and their robustness to failures.\nKempe et al. [FOCS'03] gave gossip algorithms to compute important aggregate\nstatistics if every node is given a value. In particular, they gave a beautiful\n$O(\\log n + \\log \\frac{1}{\\epsilon})$ round algorithm to $\\epsilon$-approximate\nthe sum of all values and an $O(\\log^2 n)$ round algorithm to compute the exact\n$\\phi$-quantile, i.e., the the $\\lceil \\phi n \\rceil$ smallest value.\nWe give an quadratically faster and in fact optimal gossip algorithm for the\nexact $\\phi$-quantile problem which runs in $O(\\log n)$ rounds. We furthermore\nshow that one can achieve an exponential speedup if one allows for an\n$\\epsilon$-approximation. We give an $O(\\log \\log n + \\log \\frac{1}{\\epsilon})$\nround gossip algorithm which computes a value of rank between $\\phi n$ and\n$(\\phi+\\epsilon)n$ at every node.% for any $0 \\leq \\phi \\leq 1$ and $0 <\n\\epsilon < 1$. Our algorithms are extremely simple and very robust - they can\nbe operated with the same running times even if every transmission fails with\na, potentially different, constant probability. We also give a matching\n$\\Omega(\\log \\log n + \\log \\frac{1}{\\epsilon})$ lower bound which shows that\nour algorithm is optimal for all values of $\\epsilon$.\n",
"title": "Optimal Gossip Algorithms for Exact and Approximate Quantile Computations"
}
| null | null | null | null | true | null |
1214
| null |
Default
| null | null |
null |
{
"abstract": " Magnesium and its alloys are ideal for biodegradable implants due to their\nbiocompatibility and their low-stress shielding. However, they can corrode too\nrapidly in the biological environment. The objective of this research was to\ndevelop heat treatments to slow the corrosion of high purified magnesium and\nAZ31 alloy in simulated body fluid at 37°C. Heat treatments were performed\nat different temperatures and times. Hydrogen evolution, weight loss, PDP, and\nEIS methods were used to measure the corrosion rates. Results show that heat\ntreating can increase the corrosion resistance of HP-Mg by 2x and AZ31 by 10x.\n",
"title": "Influence of Heat Treatment on the Corrosion Behavior of Purified Magnesium and AZ31 Alloy"
}
| null | null |
[
"Physics"
] | null | true | null |
1215
| null |
Validated
| null | null |
null |
{
"abstract": " Publishing reproducible analyses is a long-standing and widespread challenge\nfor the scientific community, funding bodies and publishers. Although a\ndefinitive solution is still elusive, the problem is recognized to affect all\ndisciplines and lead to a critical system inefficiency. Here, we propose a\nblockchain-based approach to enhance scientific reproducibility, with a focus\non life science studies and precision medicine. While the interest of encoding\npermanently into an immutable ledger all the study key information-including\nendpoints, data and metadata, protocols, analytical methods and all\nfindings-has been already highlighted, here we apply the blockchain approach to\nsolve the issue of rewarding time and expertise of scientists that commit to\nverify reproducibility. Our mechanism builds a trustless ecosystem of\nresearchers, funding bodies and publishers cooperating to guarantee digital and\npermanent access to information and reproducible results. As a natural\nbyproduct, a procedure to quantify scientists' and institutions' reputation for\nranking purposes is obtained.\n",
"title": "Towards a scientific blockchain framework for reproducible data analysis"
}
| null | null | null | null | true | null |
1216
| null |
Default
| null | null |
null |
{
"abstract": " Google uses continuous streams of data from industry partners in order to\ndeliver accurate results to users. Unexpected drops in traffic can be an\nindication of an underlying issue and may be an early warning that remedial\naction may be necessary. Detecting such drops is non-trivial because streams\nare variable and noisy, with roughly regular spikes (in many different shapes)\nin traffic data. We investigated the question of whether or not we can predict\nanomalies in these data streams. Our goal is to utilize Machine Learning and\nstatistical approaches to classify anomalous drops in periodic, but noisy,\ntraffic patterns. Since we do not have a large body of labeled examples to\ndirectly apply supervised learning for anomaly classification, we approached\nthe problem in two parts. First we used TensorFlow to train our various models\nincluding DNNs, RNNs, and LSTMs to perform regression and predict the expected\nvalue in the time series. Secondly we created anomaly detection rules that\ncompared the actual values to predicted values. Since the problem requires\nfinding sustained anomalies, rather than just short delays or momentary\ninactivity in the data, our two detection methods focused on continuous\nsections of activity rather than just single points. We tried multiple\ncombinations of our models and rules and found that using the intersection of\nour two anomaly detection methods proved to be an effective method of detecting\nanomalies on almost all of our models. In the process we also found that not\nall data fell within our experimental assumptions, as one data stream had no\nperiodicity, and therefore no time based model could predict it.\n",
"title": "Time Series Anomaly Detection; Detection of anomalous drops with limited features and sparse examples in noisy highly periodic data"
}
| null | null | null | null | true | null |
1217
| null |
Default
| null | null |
null |
{
"abstract": " To obtain a better understanding of the trade-offs between various\nobjectives, Bi-Objective Integer Programming (BOIP) algorithms calculate the\nset of all non-dominated vectors and present these as the solution to a BOIP\nproblem. Historically, these algorithms have been compared in terms of the\nnumber of single-objective IPs solved and total CPU time taken to produce the\nsolution to a problem. This is equitable, as researchers can often have access\nto widely differing amounts of computing power. However, the real world has\nrecently seen a large uptake of multi-core processors in computers, laptops,\ntablets and even mobile phones. With this in mind, we look at how to best\nutilise parallel processing to improve the elapsed time of optimisation\nalgorithms. We present two methods of parallelising the recursive algorithm\npresented by Ozlen, Burton and MacRae. Both new methods utilise two threads and\nimprove running times. One of the new methods, the Meeting algorithm, halves\nrunning time to achieve near-perfect parallelisation. The results are compared\nwith the efficiency of parallelisation within the commercial IP solver IBM ILOG\nCPLEX, and the new methods are both shown to perform better.\n",
"title": "A parallel approach to bi-objective integer programming"
}
| null | null | null | null | true | null |
1218
| null |
Default
| null | null |
null |
{
"abstract": " The adaptive zero-error capacity of discrete memoryless channels (DMC) with\nnoiseless feedback has been shown to be positive whenever there exists at least\none channel output \"disprover\", i.e. a channel output that cannot be reached\nfrom at least one of the inputs. Furthermore, whenever there exists a\ndisprover, the adaptive zero-error capacity attains the Shannon (small-error)\ncapacity. Here, we study the zero-error capacity of a DMC when the channel\nfeedback is noisy rather than perfect. We show that the adaptive zero-error\ncapacity with noisy feedback is lower bounded by the forward channel's\nzero-undetected error capacity, and show that under certain conditions this is\ntight.\n",
"title": "The adaptive zero-error capacity for a class of channels with noisy feedback"
}
| null | null |
[
"Computer Science"
] | null | true | null |
1219
| null |
Validated
| null | null |
null |
{
"abstract": " With increasing complexity and heterogeneity of computing devices, it has\nbecome crucial for system to be autonomous, adaptive to dynamic environment,\nrobust, flexible, and having so called self-*properties. These autonomous\nsystems are called organic computing(OC) systems. OC system was proposed as a\nsolution to tackle complex systems. Design time decisions have been shifted to\nrun time in highly complex and interconnected systems as it is very hard to\nconsider all scenarios and their appropriate actions in advance. Consequently,\nSelf-awareness becomes crucial for these adaptive autonomous systems. To cope\nwith evolving environment and changing user needs, system need to have\nknowledge about itself and its surroundings. Literature review shows that for\nautonomous and intelligent systems, researchers are concerned about knowledge\nacquisition, representation and learning which is necessary for a system to\nadapt. This paper is written to compare self-awareness and organic computing by\ndiscussing their definitions, properties and architecture.\n",
"title": "Comparison of Self-Aware and Organic Computing Systems"
}
| null | null | null | null | true | null |
1220
| null |
Default
| null | null |
null |
{
"abstract": " We focus on nonconvex and nonsmooth minimization problems with a composite\nobjective, where the differentiable part of the objective is freed from the\nusual and restrictive global Lipschitz gradient continuity assumption. This\nlongstanding smoothness restriction is pervasive in first order methods (FOM),\nand was recently circumvent for convex composite optimization by Bauschke,\nBolte and Teboulle, through a simple and elegant framework which captures, all\nat once, the geometry of the function and of the feasible set. Building on this\nwork, we tackle genuine nonconvex problems. We first complement and extend\ntheir approach to derive a full extended descent lemma by introducing the\nnotion of smooth adaptable functions. We then consider a Bregman-based proximal\ngradient methods for the nonconvex composite model with smooth adaptable\nfunctions, which is proven to globally converge to a critical point under\nnatural assumptions on the problem's data. To illustrate the power and\npotential of our general framework and results, we consider a broad class of\nquadratic inverse problems with sparsity constraints which arises in many\nfundamental applications, and we apply our approach to derive new globally\nconvergent schemes for this class.\n",
"title": "First Order Methods beyond Convexity and Lipschitz Gradient Continuity with Applications to Quadratic Inverse Problems"
}
| null | null | null | null | true | null |
1221
| null |
Default
| null | null |
null |
{
"abstract": " In online discussion communities, users can interact and share information\nand opinions on a wide variety of topics. However, some users may create\nmultiple identities, or sockpuppets, and engage in undesired behavior by\ndeceiving others or manipulating discussions. In this work, we study\nsockpuppetry across nine discussion communities, and show that sockpuppets\ndiffer from ordinary users in terms of their posting behavior, linguistic\ntraits, as well as social network structure. Sockpuppets tend to start fewer\ndiscussions, write shorter posts, use more personal pronouns such as \"I\", and\nhave more clustered ego-networks. Further, pairs of sockpuppets controlled by\nthe same individual are more likely to interact on the same discussion at the\nsame time than pairs of ordinary users. Our analysis suggests a taxonomy of\ndeceptive behavior in discussion communities. Pairs of sockpuppets can vary in\ntheir deceptiveness, i.e., whether they pretend to be different users, or their\nsupportiveness, i.e., if they support arguments of other sockpuppets controlled\nby the same user. We apply these findings to a series of prediction tasks,\nnotably, to identify whether a pair of accounts belongs to the same underlying\nuser or not. Altogether, this work presents a data-driven view of deception in\nonline discussion communities and paves the way towards the automatic detection\nof sockpuppets.\n",
"title": "An Army of Me: Sockpuppets in Online Discussion Communities"
}
| null | null | null | null | true | null |
1222
| null |
Default
| null | null |
null |
{
"abstract": " Bayesian optimization has recently attracted the attention of the automatic\nmachine learning community for its excellent results in hyperparameter tuning.\nBO is characterized by the sample efficiency with which it can optimize\nexpensive black-box functions. The efficiency is achieved in a similar fashion\nto the learning to learn methods: surrogate models (typically in the form of\nGaussian processes) learn the target function and perform intelligent sampling.\nThis surrogate model can be applied even in the presence of noise; however, as\nwith most regression methods, it is very sensitive to outlier data. This can\nresult in erroneous predictions and, in the case of BO, biased and inefficient\nexploration. In this work, we present a GP model that is robust to outliers\nwhich uses a Student-t likelihood to segregate outliers and robustly conduct\nBayesian optimization. We present numerical results evaluating the proposed\nmethod in both artificial functions and real problems.\n",
"title": "Robust Bayesian Optimization with Student-t Likelihood"
}
| null | null | null | null | true | null |
1223
| null |
Default
| null | null |
null |
{
"abstract": " We propose a map-aided vehicle localization method for GPS-denied\nenvironments. This approach exploits prior knowledge of the road grade map and\nvehicle on-board sensor measurements to accurately estimate the longitudinal\nposition of the vehicle. Real-time localization is crucial to systems that\nutilize position-dependent information for planning and control. We validate\nthe effectiveness of the localization method on a hierarchical control system.\nThe higher level planner optimizes the vehicle velocity to minimize the energy\nconsumption for a given route by employing traffic condition and road grade\ndata. The lower level is a cruise control system that tracks the\nposition-dependent optimal reference velocity. Performance of the proposed\nlocalization algorithm is evaluated using both simulations and experiments.\n",
"title": "Vehicle Localization and Control on Roads with Prior Grade Map"
}
| null | null | null | null | true | null |
1224
| null |
Default
| null | null |
null |
{
"abstract": " We present an adaptive grasping method that finds stable grasps on novel\nobjects. The main contributions of this paper is in the computation of the\nprobability of success of grasps in the vicinity of an already applied grasp.\nOur method performs grasp adaptions by simulating tactile data for grasps in\nthe vicinity of the current grasp. The simulated data is used to evaluate\nhypothetical grasps and thereby guide us toward better grasps. We demonstrate\nthe applicability of our method by constructing a system that can plan, apply\nand adapt grasps on novel objects. Experiments are conducted on objects from\nthe YCB object set and the success rate of our method is 88%. Our experiments\nshow that the application of our grasp adaption method improves grasp stability\nsignificantly.\n",
"title": "Estimating Tactile Data for Adaptive Grasping of Novel Objects"
}
| null | null | null | null | true | null |
1225
| null |
Default
| null | null |
null |
{
"abstract": " Algorithm-dependent generalization error bounds are central to statistical\nlearning theory. A learning algorithm may use a large hypothesis space, but the\nlimited number of iterations controls its model capacity and generalization\nerror. The impacts of stochastic gradient methods on generalization error for\nnon-convex learning problems not only have important theoretical consequences,\nbut are also critical to generalization errors of deep learning.\nIn this paper, we study the generalization errors of Stochastic Gradient\nLangevin Dynamics (SGLD) with non-convex objectives. Two theories are proposed\nwith non-asymptotic discrete-time analysis, using Stability and PAC-Bayesian\nresults respectively. The stability-based theory obtains a bound of\n$O\\left(\\frac{1}{n}L\\sqrt{\\beta T_k}\\right)$, where $L$ is uniform Lipschitz\nparameter, $\\beta$ is inverse temperature, and $T_k$ is aggregated step sizes.\nFor PAC-Bayesian theory, though the bound has a slower $O(1/\\sqrt{n})$ rate,\nthe contribution of each step is shown with an exponentially decaying factor by\nimposing $\\ell^2$ regularization, and the uniform Lipschitz constant is also\nreplaced by actual norms of gradients along trajectory. Our bounds have no\nimplicit dependence on dimensions, norms or other capacity measures of\nparameter, which elegantly characterizes the phenomenon of \"Fast Training\nGuarantees Generalization\" in non-convex settings. This is the first\nalgorithm-dependent result with reasonable dependence on aggregated step sizes\nfor non-convex learning, and has important implications to statistical learning\naspects of stochastic gradient methods in complicated models such as deep\nlearning.\n",
"title": "Generalization Bounds of SGLD for Non-convex Learning: Two Theoretical Viewpoints"
}
| null | null | null | null | true | null |
1226
| null |
Default
| null | null |
null |
{
"abstract": " The Deep Impact spacecraft fly-by of comet 103P/Hartley 2 occurred on 2010\nNovember 4, one week after perihelion with a closest approach (CA) distance of\nabout 700 km. We used narrowband images obtained by the Medium Resolution\nImager (MRI) onboard the spacecraft to study the gas and dust in the innermost\ncoma. We derived an overall dust reddening of 15\\%/100 nm between 345 and 749\nnm and identified a blue enhancement in the dust coma in the sunward direction\nwithin 5 km from the nucleus, which we interpret as a localized enrichment in\nwater ice. OH column density maps show an anti-sunward enhancement throughout\nthe encounter except for the highest resolution images, acquired at CA, where a\nradial jet becomes visible in the innermost coma, extending up to 12 km from\nthe nucleus. The OH distribution in the inner coma is very different from that\nexpected for a fragment species. Instead, it correlates well with the water\nvapor map derived by the HRI-IR instrument onboard Deep Impact\n\\citep{AHearn2011}. Radial profiles of the OH column density and derived water\nproduction rates show an excess of OH emission during CA that cannot be\nexplained with pure fluorescence. We attribute this excess to a prompt emission\nprocess where photodissociation of H$_2$O directly produces excited\nOH*($A^2\\it{\\Sigma}^+$) radicals. Our observations provide the first direct\nimaging of Near-UV prompt emission of OH. We therefore suggest the use of a\ndedicated filter centered at 318.8 nm to directly trace the water in the coma\nof comets.\n",
"title": "Near-UV OH Prompt Emission in the Innermost Coma of 103P/Hartley 2"
}
| null | null |
[
"Physics"
] | null | true | null |
1227
| null |
Validated
| null | null |
null |
{
"abstract": " In this paper we suggest a macroscopic toy system in which a potential-like\nenergy is generated by a non-uniform pulsation of the medium (i.e. pulsation of\ntransverse standing oscillations that the elastic medium of the system tends to\nsupport at each point). This system is inspired by walking droplets experiments\nwith submerged barriers. We first show that a Poincaré-Lorentz covariant\nformalization of the system causes inconsistency and contradiction. The\ncontradiction is solved by using a general covariant formulation and by\nassuming a relation between the metric associated with the elastic medium and\nthe pulsation of the medium. (Calculations are performed in a Newtonian-like\nmetric, constant in time). We find ($i$) an effective Schrödinger equation\nwith external potential, ($ii$) an effective de Broglie-Bohm guidance formula\nand ($iii$) an energy of the `particle' which has a direct counterpart in\ngeneral relativity as well as in quantum mechanics. We analyze the wave and the\n`particle' in an effective free fall and with a harmonic potential. This\npotential-like energy is an effective gravitational potential, rooted in the\npulsation of the medium at each point. The latter, also conceivable as a\nnatural clock, makes easy to understand why proper time varies from place to\nplace.\n",
"title": "Effective gravity and effective quantum equations in a system inspired by walking droplets experiments"
}
| null | null | null | null | true | null |
1228
| null |
Default
| null | null |
null |
{
"abstract": " We consider the class of measurable functions defined in all of\n$\\mathbb{R}^n$ that give rise to a nonlocal minimal graph over a ball of\n$\\mathbb{R}^n$. We establish that the gradient of any such function is bounded\nin the interior of the ball by a power of its oscillation. This estimate,\ntogether with previously known results, leads to the $C^\\infty$ regularity of\nthe function in the ball. While the smoothness of nonlocal minimal graphs was\nknown for $n = 1, 2$ (but without a quantitative bound), in higher dimensions\nonly their continuity had been established.\nTo prove the gradient bound, we show that the normal to a nonlocal minimal\ngraph is a supersolution of a truncated fractional Jacobi operator, for which\nwe prove a weak Harnack inequality. To this end, we establish a new universal\nfractional Sobolev inequality on nonlocal minimal surfaces.\nOur estimate provides an extension to the fractional setting of the\ncelebrated gradient bounds of Finn and of Bombieri, De Giorgi & Miranda for\nsolutions of the classical mean curvature equation.\n",
"title": "A gradient estimate for nonlocal minimal graphs"
}
| null | null | null | null | true | null |
1229
| null |
Default
| null | null |
null |
{
"abstract": " We carried out a Bayesian homogeneous determination of the orbital parameters\nof 231 transiting giant planets (TGPs) that are alone or have distant\ncompanions; we employed DE-MCMC methods to analyse radial-velocity (RV) data\nfrom the literature and 782 new high-accuracy RVs obtained with the HARPS-N\nspectrograph for 45 systems over 3 years. Our work yields the largest sample of\nsystems with a transiting giant exoplanet and coherently determined orbital,\nplanetary, and stellar parameters. We found that the orbital parameters of TGPs\nin non-compact planetary systems are clearly shaped by tides raised by their\nhost stars. Indeed, the most eccentric planets have relatively large orbital\nseparations and/or high mass ratios, as expected from the equilibrium tide\ntheory. This feature would be the outcome of high-eccentricity migration (HEM).\nThe distribution of $\\alpha=a/a_R$, where $a$ and $a_R$ are the semi-major axis\nand the Roche limit, for well-determined circular orbits peaks at 2.5; this\nalso agrees with expectations from the HEM. The few planets of our sample with\ncircular orbits and $\\alpha >5$ values may have migrated through disc-planet\ninteractions instead of HEM. By comparing circularisation times with stellar\nages, we found that hot Jupiters with $a < 0.05$ au have modified tidal quality\nfactors $10^{5} < Q'_p < 10^{9}$, and that stellar $Q'_s > 10^{6}-10^{7}$ are\nrequired to explain the presence of eccentric planets at the same orbital\ndistance. As a by-product of our analysis, we detected a non-zero eccentricity\nfor HAT-P-29; we determined that five planets that were previously regarded to\nhave hints of non-zero eccentricity have circular orbits or undetermined\neccentricities; we unveiled curvatures caused by distant companions in the RV\ntime series of HAT-P-2, HAT-P-22, and HAT-P-29; and we revised the planetary\nparameters of CoRoT-1b.\n",
"title": "The GAPS Programme with HARPS-N@TNG XIV. Investigating giant planet migration history via improved eccentricity and mass determination for 231 transiting planets"
}
| null | null | null | null | true | null |
1230
| null |
Default
| null | null |
null |
{
"abstract": " This paper is the first one in a series of three dealing with the concept of\ninjective stabilization of the tensor product and its applications. Its primary\ngoal is to collect known facts and establish a basic operational calculus that\nwill be used in the subsequent parts. This is done in greater generality than\nis necessary for the stated goal. Several results of independent interest are\nalso established. They include, among other things, connections with\nsatellites, an explicit construction of the stabilization of a finitely\npresented functor, various exactness properties of the injectively stable\nfunctors, a construction, from a functor and a short exact sequence, of a\ndoubly-infinite exact sequence by splicing the injective stabilization of the\nfunctor and its derived functors. When specialized to the tensor product with a\nfinitely presented module, the injective stabilization with coefficients in the\nring is isomorphic to the 1-torsion functor. The Auslander-Reiten formula is\nextended to a more general formula, which holds for arbitrary (i.e., not\nnecessarily finite) modules over arbitrary associative rings with identity.\nWeakening of the assumptions in the theorems of Eilenberg and Watts leads to\ncharacterizations of the requisite zeroth derived functors.\nThe subsequent papers, provide applications of the developed techniques.\nPart~II deals with new notions of torsion module and cotorsion module of a\nmodule. This is done for arbitrary modules over arbitrary rings. Part~III\nintroduces a new concept, called the asymptotic stabilization of the tensor\nproduct. The result is closely related to different variants of stable homology\n(these are generalizations of Tate homology to arbitrary rings). A comparison\ntransformation from Vogel homology to the asymptotic stabilization of the\ntensor product is constructed and shown to be epic.\n",
"title": "Injective stabilization of additive functors. I. Preliminaries"
}
| null | null | null | null | true | null |
1231
| null |
Default
| null | null |
null |
{
"abstract": " Eigenvector centrality is a standard network analysis tool for determining\nthe importance of (or ranking of) entities in a connected system that is\nrepresented by a graph. However, many complex systems and datasets have natural\nmulti-way interactions that are more faithfully modeled by a hypergraph. Here\nwe extend the notion of graph eigenvector centrality to uniform hypergraphs.\nTraditional graph eigenvector centralities are given by a positive eigenvector\nof the adjacency matrix, which is guaranteed to exist by the Perron-Frobenius\ntheorem under some mild conditions. The natural representation of a hypergraph\nis a hypermatrix (colloquially, a tensor). Using recently established\nPerron-Frobenius theory for tensors, we develop three tensor eigenvectors\ncentralities for hypergraphs, each with different interpretations. We show that\nthese centralities can reveal different information on real-world data by\nanalyzing hypergraphs constructed from n-gram frequencies, co-tagging on stack\nexchange, and drug combinations observed in patient emergency room visits.\n",
"title": "Three hypergraph eigenvector centralities"
}
| null | null | null | null | true | null |
1232
| null |
Default
| null | null |
null |
{
"abstract": " Neural networks allow Q-learning reinforcement learning agents such as deep\nQ-networks (DQN) to approximate complex mappings from state spaces to value\nfunctions. However, this also brings drawbacks when compared to other function\napproximators such as tile coding or their generalisations, radial basis\nfunctions (RBF) because they introduce instability due to the side effect of\nglobalised updates present in neural networks. This instability does not even\nvanish in neural networks that do not have any hidden layers. In this paper, we\nshow that simple modifications to the structure of the neural network can\nimprove stability of DQN learning when a multi-layer perceptron is used for\nfunction approximation.\n",
"title": "Reinforcement Learning using Augmented Neural Networks"
}
| null | null | null | null | true | null |
1233
| null |
Default
| null | null |
null |
{
"abstract": " We perform a detailed analytical study of the Recent Fluid Deformation (RFD)\nmodel for the onset of Lagrangian intermittency, within the context of the\nMartin-Siggia-Rose-Janssen-de Dominicis (MSRJD) path integral formalism. The\nmodel is based, as a key point, upon local closures for the pressure Hessian\nand the viscous dissipation terms in the stochastic dynamical equations for the\nvelocity gradient tensor. We carry out a power counting hierarchical\nclassification of the several perturbative contributions associated to\nfluctuations around the instanton-evaluated MSRJD action, along the lines of\nthe cumulant expansion. The most relevant Feynman diagrams are then integrated\nout into the renormalized effective action, for the computation of velocity\ngradient probability distribution functions (vgPDFs). While the subleading\nperturbative corrections do not affect the global shape of the vgPDFs in an\nappreciable qualitative way, it turns out that they have a significant role in\nthe accurate description of their non-Gaussian cores.\n",
"title": "Instantons and Fluctuations in a Lagrangian Model of Turbulence"
}
| null | null | null | null | true | null |
1234
| null |
Default
| null | null |
null |
{
"abstract": " We study the heavy path decomposition of conditional Galton-Watson trees. In\na standard Galton-Watson tree conditional on its size $n$, we order all\nchildren by their subtree sizes, from large (heavy) to small. A node is marked\nif it is among the $k$ heaviest nodes among its siblings. Unmarked nodes and\ntheir subtrees are removed, leaving only a tree of marked nodes, which we call\nthe $k$-heavy tree. We study various properties of these trees, including their\nsize and the maximal distance from any original node to the $k$-heavy tree. In\nparticular, under some moment condition, the $2$-heavy tree is with high\nprobability larger than $cn$ for some constant $c > 0$, and the maximal\ndistance from the $k$-heavy tree is $O(n^{1/(k+1)})$ in probability. As a\nconsequence, for uniformly random Apollonian networks of size $n$, the expected\nsize of the longest simple path is $\\Omega(n)$.\n",
"title": "The heavy path approach to Galton-Watson trees with an application to Apollonian networks"
}
| null | null | null | null | true | null |
1235
| null |
Default
| null | null |
null |
{
"abstract": " We consider a firm that sells a large number of products to its customers in\nan online fashion. Each product is described by a high dimensional feature\nvector, and the market value of a product is assumed to be linear in the values\nof its features. Parameters of the valuation model are unknown and can change\nover time. The firm sequentially observes a product's features and can use the\nhistorical sales data (binary sale/no sale feedbacks) to set the price of\ncurrent product, with the objective of maximizing the collected revenue. We\nmeasure the performance of a dynamic pricing policy via regret, which is the\nexpected revenue loss compared to a clairvoyant that knows the sequence of\nmodel parameters in advance.\nWe propose a pricing policy based on projected stochastic gradient descent\n(PSGD) and characterize its regret in terms of time $T$, features dimension\n$d$, and the temporal variability in the model parameters, $\\delta_t$. We\nconsider two settings. In the first one, feature vectors are chosen\nantagonistically by nature and we prove that the regret of PSGD pricing policy\nis of order $O(\\sqrt{T} + \\sum_{t=1}^T \\sqrt{t}\\delta_t)$. In the second\nsetting (referred to as stochastic features model), the feature vectors are\ndrawn independently from an unknown distribution. We show that in this case,\nthe regret of PSGD pricing policy is of order $O(d^2 \\log T + \\sum_{t=1}^T\nt\\delta_t/d)$.\n",
"title": "Perishability of Data: Dynamic Pricing under Varying-Coefficient Models"
}
| null | null | null | null | true | null |
1236
| null |
Default
| null | null |
null |
{
"abstract": " We present a new algorithm which detects the maximal possible number of\nmatched disjoint pairs satisfying a given caliper when a bipartite matching is\ndone with respect to a scalar index (e.g., propensity score), and constructs a\ncorresponding matching. Variable width calipers are compatible with the\ntechnique, provided that the width of the caliper is a Lipschitz function of\nthe index. If the observations are ordered with respect to the index then the\nmatching needs $O(N)$ operations, where $N$ is the total number of subjects to\nbe matched. The case of 1-to-$n$ matching is also considered.\nWe offer also a new fast algorithm for optimal complete one-to-one matching\non a scalar index when the treatment and control groups are of the same size.\nThis allows us to improve greedy nearest neighbor matching on a scalar index.\nKeywords: propensity score matching, nearest neighbor matching, matching with\ncaliper, variable width caliper.\n",
"title": "A fast algorithm for maximal propensity score matching"
}
| null | null |
[
"Computer Science",
"Statistics"
] | null | true | null |
1237
| null |
Validated
| null | null |
null |
{
"abstract": " We perform a detailed comparison of the Dirac composite fermion and the\nrecently proposed bimetric theory for a quantum Hall Jain states near half\nfilling. By tuning the composite Fermi liquid to the vicinity of a nematic\nphase transition, we find that the two theories are equivalent to each other.\nWe verify that the single mode approximation for the response functions and the\nstatic structure factor becomes reliable near the phase transition. We show\nthat the dispersion relation of the nematic mode near the phase transition can\nbe obtained from the Dirac brackets between the components of the nematic order\nparameter. The dispersion is quadratic at low momenta and has a magnetoroton\nminimum at a finite momentum, which is not related to any nearby inhomogeneous\nphase.\n",
"title": "Fractional quantum Hall systems near nematicity: bimetric theory, composite fermions, and Dirac brackets"
}
| null | null | null | null | true | null |
1238
| null |
Default
| null | null |
null |
{
"abstract": " The ability to recognize objects is an essential skill for a robotic system\nacting in human-populated environments. Despite decades of effort from the\nrobotic and vision research communities, robots are still missing good visual\nperceptual systems, preventing the use of autonomous agents for real-world\napplications. The progress is slowed down by the lack of a testbed able to\naccurately represent the world perceived by the robot in-the-wild. In order to\nfill this gap, we introduce a large-scale, multi-view object dataset collected\nwith an RGB-D camera mounted on a mobile robot. The dataset embeds the\nchallenges faced by a robot in a real-life application and provides a useful\ntool for validating object recognition algorithms. Besides describing the\ncharacteristics of the dataset, the paper evaluates the performance of a\ncollection of well-established deep convolutional networks on the new dataset\nand analyzes the transferability of deep representations from Web images to\nrobotic data. Despite the promising results obtained with such representations,\nthe experiments demonstrate that object classification with real-life robotic\ndata is far from being solved. Finally, we provide a comparative study to\nanalyze and highlight the open challenges in robot vision, explaining the\ndiscrepancies in the performance.\n",
"title": "Recognizing Objects In-the-wild: Where Do We Stand?"
}
| null | null | null | null | true | null |
1239
| null |
Default
| null | null |
null |
{
"abstract": " To convert standard Brownian motion $Z$ into a positive process, Geometric\nBrownian motion (GBM) $e^{\\beta Z_t}, \\beta >0$ is widely used. We generalize\nthis positive process by introducing an asymmetry parameter $ \\alpha \\geq 0$\nwhich describes the instantaneous volatility whenever the process reaches a new\nlow. For our new process, $\\beta$ is the instantaneous volatility as prices\nbecome arbitrarily high. Our generalization preserves the positivity, constant\nproportional drift, and tractability of GBM, while expressing the instantaneous\nvolatility as a randomly weighted $L^2$ mean of $\\alpha$ and $\\beta$. The\nrunning minimum and relative drawup of this process are also analytically\ntractable. Letting $\\alpha = \\beta$, our positive process reduces to Geometric\nBrownian motion. By adding a jump to default to the new process, we introduce a\nnon-negative martingale with the same tractabilities. Assuming a security's\ndynamics are driven by these processes in risk neutral measure, we price\nseveral derivatives including vanilla, barrier and lookback options.\n",
"title": "Generalizing Geometric Brownian Motion"
}
| null | null | null | null | true | null |
1240
| null |
Default
| null | null |
null |
{
"abstract": " General relativity's no-hair theorem states that isolated astrophysical black\nholes are described by only two numbers: mass and spin. As a consequence, there\nare strict relationships between the frequency and damping time of the\ndifferent modes of a perturbed Kerr black hole. Testing the no-hair theorem has\nbeen a longstanding goal of gravitational-wave astronomy. The recent detection\nof gravitational waves from black hole mergers would seem to make such tests\nimminent. We investigate how constraints on black hole ringdown parameters\nscale with the loudness of the ringdown signal---subject to the constraint that\nthe post-merger remnant must be allowed to settle into a perturbative,\nKerr-like state. In particular, we require that---for a given detector---the\ngravitational waveform predicted by numerical relativity is indistinguishable\nfrom an exponentially damped sine after time $t^\\text{cut}$. By requiring the\npost-merger remnant to settle into such a perturbative state, we find that\nconfidence intervals for ringdown parameters do not necessarily shrink with\nlouder signals. In at least some cases, more sensitive measurements probe later\ntimes without necessarily providing tighter constraints on ringdown frequencies\nand damping times. Preliminary investigations are unable to explain this result\nin terms of a numerical relativity artifact.\n",
"title": "Challenges testing the no-hair theorem with gravitational waves"
}
| null | null | null | null | true | null |
1241
| null |
Default
| null | null |
null |
{
"abstract": " By drawing an analogy with superfluid 4He vortices we suggest that dark\nmatter may consist of irreducibly small remnants of cosmic strings.\n",
"title": "Speculation On a Source of Dark Matter"
}
| null | null | null | null | true | null |
1242
| null |
Default
| null | null |
null |
{
"abstract": " Clouds play a significant role in the fluctuation of solar radiation received\nby the earth's surface. It is important to study the various cloud properties,\nas it impacts the total solar irradiance falling on the earth's surface. One of\nsuch important optical properties of the cloud is the Cloud Optical Thickness\n(COT). It is defined with the amount of light that can pass through the clouds.\nThe COT values are generally obtained from satellite images. However, satellite\nimages have a low temporal- and spatial- resolutions; and are not suitable for\nstudy in applications as solar energy generation and forecasting. Therefore,\nground-based sky cameras are now getting popular in such fields. In this paper,\nwe analyze the cloud optical thickness value, from the ground-based sky\ncameras, and provide future research directions.\n",
"title": "Analyzing Cloud Optical Properties Using Sky Cameras"
}
| null | null | null | null | true | null |
1243
| null |
Default
| null | null |
null |
{
"abstract": " Predicting the response of a system to perturbations is a key challenge in\nmathematical and natural sciences. Under suitable conditions on the nature of\nthe system, of the perturbation, and of the observables of interest, response\ntheories allow to construct operators describing the smooth change of the\ninvariant measure of the system of interest as a function of the small\nparameter controlling the intensity of the perturbation. In particular,\nresponse theories can be developed both for stochastic and chaotic\ndeterministic dynamical systems, where in the latter case stricter conditions\nimposing some degree of structural stability are required. In this paper we\nextend previous findings and derive general response formulae describing how\nn-point correlations are affected by perturbations to the vector flow. We also\nshow how to compute the response of the spectral properties of the system to\nperturbations. We then apply our results to the seemingly unrelated problem of\ncoarse graining in multiscale systems: we find explicit formulae describing the\nchange in the terms describing parameterisation of the neglected degrees of\nfreedom resulting from applying perturbations to the full system. All the terms\nenvisioned by the Mori-Zwanzig theory - the deterministic, stochastic, and\nnon-Markovian terms - are affected at 1st order in the perturbation. The\nobtained results provide a more comprehesive understanding of the response of\nstatistical mechanical systems to perturbations and contribute to the goal of\nconstructing accurate and robust parameterisations and are of potential\nrelevance for fields like molecular dynamics, condensed matter, and geophysical\nfluid dynamics. We envision possible applications of our general results to the\nstudy of the response of climate variability to anthropogenic and natural\nforcing and to the study of the equivalence of thermostatted statistical\nmechanical systems.\n",
"title": "Response Formulae for $n$-point Correlations in Statistical Mechanical Systems and Application to a Problem of Coarse Graining"
}
| null | null | null | null | true | null |
1244
| null |
Default
| null | null |
null |
{
"abstract": " An Electronic Health Record (EHR) is designed to store diverse data\naccurately from a range of health care providers and to capture the status of a\npatient by a range of health care providers across time. Realising the numerous\nbenefits of the system, EHR adoption is growing globally and many countries\ninvest heavily in electronic health systems. In Australia, the Government\ninvested $467 million to build key components of the Personally Controlled\nElectronic Health Record (PCEHR) system in July 2012. However, in the last\nthree years, the uptake from individuals and health care providers has not been\nsatisfactory. Unauthorised access of the PCEHR was one of the major barriers.\nWe propose an improved access control model for the PCEHR system to resolve the\nunauthorised access issue. We discuss the unauthorised access issue with real\nexamples and present a potential solution to overcome the issue to make the\nPCEHR system a success in Australia.\n",
"title": "The Australian PCEHR system: Ensuring Privacy and Security through an Improved Access Control Mechanism"
}
| null | null | null | null | true | null |
1245
| null |
Default
| null | null |
null |
{
"abstract": " The least-squares support vector machine is a frequently used kernel method\nfor non-linear regression and classification tasks. Here we discuss several\napproximation algorithms for the least-squares support vector machine\nclassifier. The proposed methods are based on randomized block kernel matrices,\nand we show that they provide good accuracy and reliable scaling for\nmulti-class classification problems with relatively large data sets. Also, we\npresent several numerical experiments that illustrate the practical\napplicability of the proposed methods.\n",
"title": "Randomized Kernel Methods for Least-Squares Support Vector Machines"
}
| null | null | null | null | true | null |
1246
| null |
Default
| null | null |
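Editorial illustration, not taken from the paper: the record above concerns randomized block approximations to the least-squares support vector machine; the sketch below only shows one common exact LS-SVM formulation (a single kernel linear system) that such methods approximate. The RBF kernel, regularization value `C`, and toy data are assumptions for demonstration.

```python
import numpy as np

def rbf_kernel(X1, X2, gamma=0.5):
    """Gaussian RBF kernel matrix (illustrative kernel choice)."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def lssvm_fit(X, y, C=10.0):
    """Solve the LS-SVM linear system
    [[0, 1^T], [1, K + I/C]] [b; alpha] = [0; y], labels y in {-1, +1}."""
    n = len(y)
    K = rbf_kernel(X, X)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / C
    rhs = np.concatenate(([0.0], y.astype(float)))
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]          # bias b, dual coefficients alpha

def lssvm_predict(X_train, alpha, b, X_test):
    return np.sign(rbf_kernel(X_test, X_train) @ alpha + b)

# toy usage on two Gaussian blobs
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 0.5, (50, 2)), rng.normal(1, 0.5, (50, 2))])
y = np.array([-1] * 50 + [1] * 50)
b, alpha = lssvm_fit(X, y)
print("training accuracy:", (lssvm_predict(X, alpha, b, X) == y).mean())
```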
null |
{
"abstract": " We propose a simple subsampling scheme for fast randomized approximate\ncomputation of optimal transport distances. This scheme operates on a random\nsubset of the full data and can use any exact algorithm as a black-box\nback-end, including state-of-the-art solvers and entropically penalized\nversions. It is based on averaging the exact distances between empirical\nmeasures generated from independent samples from the original measures and can\neasily be tuned towards higher accuracy or shorter computation times. To this\nend, we give non-asymptotic deviation bounds for its accuracy in the case of\ndiscrete optimal transport problems. In particular, we show that in many\nimportant instances, including images (2D-histograms), the approximation error\nis independent of the size of the full problem. We present numerical\nexperiments that demonstrate that a very good approximation in typical\napplications can be obtained in a computation time that is several orders of\nmagnitude smaller than what is required for exact computation of the full\nproblem.\n",
"title": "Optimal Transport: Fast Probabilistic Approximation with Exact Solvers"
}
| null | null |
[
"Statistics"
] | null | true | null |
1247
| null |
Validated
| null | null |
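Editorial illustration of the subsampling idea in the record above (not the authors' implementation): average exact optimal transport costs computed on small random subsets, using the assignment formulation for equal-size empirical measures. Subset sizes, repeat counts, and the toy point clouds are arbitrary assumptions.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def exact_ot_cost(x, y):
    """Exact OT cost between two equal-size uniform empirical measures,
    solved as an assignment problem."""
    cost = cdist(x, y, metric="sqeuclidean")
    rows, cols = linear_sum_assignment(cost)
    return cost[rows, cols].mean()

def subsampled_ot(x, y, subset_size=100, repeats=10, rng=None):
    """Average exact OT costs over independent random subsets."""
    rng = np.random.default_rng(rng)
    estimates = []
    for _ in range(repeats):
        xs = x[rng.choice(len(x), subset_size, replace=False)]
        ys = y[rng.choice(len(y), subset_size, replace=False)]
        estimates.append(exact_ot_cost(xs, ys))
    return float(np.mean(estimates))

# toy usage: two 2-D point clouds
x = np.random.default_rng(0).normal(size=(2000, 2))
y = np.random.default_rng(1).normal(loc=0.5, size=(2000, 2))
print(subsampled_ot(x, y, subset_size=50, repeats=20))
```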
null |
{
"abstract": " A Bernoulli Mixture Model (BMM) is a finite mixture of random binary vectors\nwith independent Bernoulli dimensions. The problem of clustering BMM data\narises in a variety of real-world applications, ranging from population\ngenetics to activity analysis in social networks. In this paper, we have\nanalyzed the information-theoretic PAC-learnability of BMMs, when the number of\nclusters is unknown. In particular, we stipulate certain conditions on both\nsample complexity and the dimension of the model in order to guarantee the\nProbably Approximately Correct (PAC)-clusterability of a given dataset. To the\nbest of our knowledge, these findings are the first non-asymptotic (PAC) bounds\non the sample complexity of learning BMMs.\n",
"title": "Reliable Clustering of Bernoulli Mixture Models"
}
| null | null | null | null | true | null |
1248
| null |
Default
| null | null |
null |
{
"abstract": " In this paper, we prove pointwise convergence of heat kernels for\nmGH-convergent sequences of $RCD^*(K,N)$-spaces. We obtain as a corollary\nresults on the short-time behavior of the heat kernel in $RCD^*(K,N)$-spaces.\nWe use then these results to initiate the study of Weyl's law in the $RCD$\nsetting\n",
"title": "Short-time behavior of the heat kernel and Weyl's law on $RCD^*(K, N)$-spaces"
}
| null | null | null | null | true | null |
1249
| null |
Default
| null | null |
null |
{
"abstract": " In many societies alcohol is a legal and common recreational substance and\nsocially accepted. Alcohol consumption often comes along with social events as\nit helps people to increase their sociability and to overcome their\ninhibitions. On the other hand we know that increased alcohol consumption can\nlead to serious health issues, such as cancer, cardiovascular diseases and\ndiseases of the digestive system, to mention a few. This work examines alcohol\nconsumption during the FIFA Football World Cup 2018, particularly the usage of\nalcohol related information on Twitter. For this we analyse the tweeting\nbehaviour and show that the tournament strongly increases the interest in beer.\nFurthermore we show that countries who had to leave the tournament at early\nstage might have done something good to their fans as the interest in beer\ndecreased again.\n",
"title": "Football and Beer - a Social Media Analysis on Twitter in Context of the FIFA Football World Cup 2018"
}
| null | null | null | null | true | null |
1250
| null |
Default
| null | null |
null |
{
"abstract": " The motion of a viscous deformable droplet suspended in an unbounded\nPoiseuille flow in the presence of bulk-insoluble surfactants is studied\nanalytically. Assuming the convective transport of fluid and heat to be\nnegligible, we perform a small-deformation perturbation analysis to obtain the\ndroplet migration velocity. The droplet dynamics strongly depends on the\ndistribution of surfactants along the droplet interface, which is governed by\nthe relative strength of convective transport of surfactants as compared with\nthe diffusive transport of surfactants. The present study is focused on the\nfollowing two limits: (i) when the surfactant transport is dominated by surface\ndiffusion, and (ii) when the surfactant transport is dominated by surface\nconvection. In the first limiting case, it is seen that the axial velocity of\nthe droplet decreases with increase in the advection of the surfactants along\nthe surface. The variation of cross-stream migration velocity, on the other\nhand, is analyzed over three different regimes based on the ratio of the\nviscosity of the droplet phase to that of the carrier phase. In the first\nregime the migration velocity decreases with increase in surface advection of\nthe surfactants although there is no change in direction of droplet migration.\nFor the second regime, the direction of the cross-stream migration of the\ndroplet changes depending on different parameters. In the third regime, the\nmigration velocity is merely affected by any change in the surfactant\ndistribution. For the other limit of higher surface advection in comparison to\nsurface diffusion of the surfactants, the axial velocity of the droplet is\nfound to be independent of the surfactant distribution. However, the\ncross-stream velocity is found to decrease with increase in non-uniformity in\nsurfactant distribution.\n",
"title": "Cross-stream migration of a surfactant-laden deformable droplet in a Poiseuille flow"
}
| null | null | null | null | true | null |
1251
| null |
Default
| null | null |
null |
{
"abstract": " We study Principal Component Analysis (PCA) in a setting where a part of the\ncorrupting noise is data-dependent and, as a result, the noise and the true\ndata are correlated. Under a bounded-ness assumption on the true data and the\nnoise, and a simple assumption on data-noise correlation, we obtain a nearly\noptimal sample complexity bound for the most commonly used PCA solution,\nsingular value decomposition (SVD). This bound is a significant improvement\nover the bound obtained by Vaswani and Guo in recent work (NIPS 2016) where\nthis \"correlated-PCA\" problem was first studied; and it holds under a\nsignificantly weaker data-noise correlation assumption than the one used for\nthis earlier result.\n",
"title": "PCA in Data-Dependent Noise (Correlated-PCA): Nearly Optimal Finite Sample Guarantees"
}
| null | null | null | null | true | null |
1252
| null |
Default
| null | null |
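Editorial illustration of the setting in the record above (not the paper's analysis or bounds): a toy experiment where part of the noise is a function of the true low-rank data, with SVD-based PCA recovery measured via principal angles. All dimensions, noise maps, and magnitudes are made-up assumptions.

```python
import numpy as np
from scipy.linalg import subspace_angles

rng = np.random.default_rng(0)
n, r, T = 50, 3, 2000                               # ambient dim, rank, samples
U, _ = np.linalg.qr(rng.standard_normal((n, r)))    # true principal subspace
A = rng.standard_normal((r, T))                     # low-rank coefficients
L = U @ A                                           # true data

M = 0.05 * rng.standard_normal((n, n))              # toy data-dependent noise map
W = M @ L                                           # noise correlated with the data
Y = L + W + 0.01 * rng.standard_normal((n, T))      # observed data

# PCA via SVD of the observed matrix: top-r left singular vectors
U_hat = np.linalg.svd(Y, full_matrices=False)[0][:, :r]
print("largest principal angle (rad):", subspace_angles(U, U_hat).max())
```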
null |
{
"abstract": " The spot pricing scheme has been considered to be resource-efficient for\nproviders and cost-effective for consumers in the Cloud market. Nevertheless,\nunlike the static and straightforward strategies of trading on-demand and\nreserved Cloud services, the market-driven mechanism for trading spot service\nwould be complicated for both implementation and understanding. The largely\ninvisible market activities and their complex interactions could especially\nmake Cloud consumers hesitate to enter the spot market. To reduce the\ncomplexity in understanding the Cloud spot market, we decided to reveal the\nbackend information behind spot price variations. Inspired by the methodology\nof reverse engineering, we developed a Predator-Prey model that can simulate\nthe interactions between demand and resource based on the visible spot price\ntraces. The simulation results have shown some basic regular patterns of market\nactivities with respect to Amazon's spot instance type m3.large. Although the\nfindings of this study need further validation by using practical data, our\nwork essentially suggests a promising approach (i.e.~using a Predator-Prey\nmodel) to investigate spot market activities.\n",
"title": "Using a Predator-Prey Model to Explain Variations of Cloud Spot Price"
}
| null | null | null | null | true | null |
1253
| null |
Default
| null | null |
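Editorial illustration only: the record above builds on Predator-Prey dynamics, so the sketch below integrates the classic Lotka-Volterra system. The coefficients and the identification of "demand" and "resource" with predator/prey are illustrative assumptions, not the calibration used in the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

def lotka_volterra(t, z, a=1.0, b=0.4, c=0.4, d=0.1):
    """Classic predator-prey dynamics: prey grows, predator consumes prey."""
    prey, predator = z                      # e.g. spare resource, demand
    return [a * prey - b * prey * predator,
            d * prey * predator - c * predator]

sol = solve_ivp(lotka_volterra, t_span=(0, 100), y0=[10.0, 5.0],
                dense_output=True, max_step=0.1)
t = np.linspace(0, 100, 1000)
prey, predator = sol.sol(t)
print("mean levels:", prey.mean(), predator.mean())
```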
null |
{
"abstract": " We consider free rotation of a body whose parts move slowly with respect to\neach other under the action of internal forces. This problem can be considered\nas a perturbation of the Euler-Poinsot problem. The dynamics has an approximate\nconservation law - an adiabatic invariant. This allows to describe the\nevolution of rotation in the adiabatic approximation. The evolution leads to an\noverturn in the rotation of the body: the vector of angular velocity crosses\nthe separatrix of the Euler-Poinsot problem. This crossing leads to a\nquasi-random scattering in body's dynamics. We obtain formulas for\nprobabilities of capture into different domains in the phase space at\nseparatrix crossings.\n",
"title": "Separatrix crossing in rotation of a body with changing geometry of masses"
}
| null | null | null | null | true | null |
1254
| null |
Default
| null | null |
null |
{
"abstract": " Recent progress in deep learning for audio synthesis opens the way to models\nthat directly produce the waveform, shifting away from the traditional paradigm\nof relying on vocoders or MIDI synthesizers for speech or music generation.\nDespite their successes, current state-of-the-art neural audio synthesizers\nsuch as WaveNet and SampleRNN suffer from prohibitive training and inference\ntimes because they are based on autoregressive models that generate audio\nsamples one at a time at a rate of 16kHz. In this work, we study the more\ncomputationally efficient alternative of generating the waveform frame-by-frame\nwith large strides. We present SING, a lightweight neural audio synthesizer for\nthe original task of generating musical notes given desired instrument, pitch\nand velocity. Our model is trained end-to-end to generate notes from nearly\n1000 instruments with a single decoder, thanks to a new loss function that\nminimizes the distances between the log spectrograms of the generated and\ntarget waveforms. On the generalization task of synthesizing notes for pairs of\npitch and instrument not seen during training, SING produces audio with\nsignificantly improved perceptual quality compared to a state-of-the-art\nautoencoder based on WaveNet as measured by a Mean Opinion Score (MOS), and is\nabout 32 times faster for training and 2, 500 times faster for inference.\n",
"title": "SING: Symbol-to-Instrument Neural Generator"
}
| null | null | null | null | true | null |
1255
| null |
Default
| null | null |
null |
{
"abstract": " We prove a path-by-path regularization by noise result for scalar\nconservation laws. In particular, this proves regularizing properties for\nscalar conservation laws driven by fractional Brownian motion and generalizes\nthe respective results obtained in [Gess, Souganidis; Comm. Pure Appl. Math.\n(2017)]. In addition, we introduce a new path-by-path scaling property which is\nshown to be sufficient to imply regularizing effects.\n",
"title": "Path-by-path regularization by noise for scalar conservation laws"
}
| null | null | null | null | true | null |
1256
| null |
Default
| null | null |
null |
{
"abstract": " We present a novel end-to-end trainable neural network model for\ntask-oriented dialog systems. The model is able to track dialog state, issue\nAPI calls to knowledge base (KB), and incorporate structured KB query results\ninto system responses to successfully complete task-oriented dialogs. The\nproposed model produces well-structured system responses by jointly learning\nbelief tracking and KB result processing conditioning on the dialog history. We\nevaluate the model in a restaurant search domain using a dataset that is\nconverted from the second Dialog State Tracking Challenge (DSTC2) corpus.\nExperiment results show that the proposed model can robustly track dialog state\ngiven the dialog history. Moreover, our model demonstrates promising results in\nproducing appropriate system responses, outperforming prior end-to-end\ntrainable neural network models using per-response accuracy evaluation metrics.\n",
"title": "An End-to-End Trainable Neural Network Model with Belief Tracking for Task-Oriented Dialog"
}
| null | null | null | null | true | null |
1257
| null |
Default
| null | null |
null |
{
"abstract": " Recently, neural models for information retrieval are becoming increasingly\npopular. They provide effective approaches for product search due to their\ncompetitive advantages in semantic matching. However, it is challenging to use\ngraph-based features, though proved very useful in IR literature, in these\nneural approaches. In this paper, we leverage the recent advances in graph\nembedding techniques to enable neural retrieval models to exploit\ngraph-structured data for automatic feature extraction. The proposed approach\ncan not only help to overcome the long-tail problem of click-through data, but\nalso incorporate external heterogeneous information to improve search results.\nExtensive experiments on a real-world e-commerce dataset demonstrate\nsignificant improvement achieved by our proposed approach over multiple strong\nbaselines both as an individual retrieval model and as a feature used in\nlearning-to-rank frameworks.\n",
"title": "Neural IR Meets Graph Embedding: A Ranking Model for Product Search"
}
| null | null |
[
"Computer Science"
] | null | true | null |
1258
| null |
Validated
| null | null |
null |
{
"abstract": " Diamond Light Source is the UK's National Synchrotron Facility and as such\nprovides access to world class experimental services for UK and international\nresearchers. As a user facility, that is one that focuses on providing a good\nuser experience to our varied visitors, Diamond invests heavily in software\ninfrastructure and staff. Over 100 members of the 600 strong workforce consider\nsoftware development as a significant tool to help them achieve their primary\nrole. These staff work on a diverse number of different software packages,\nproviding support for installation and configuration, maintenance and bug\nfixing, as well as additional research and development of software when\nrequired.\nThis talk focuses on one of the software projects undertaken to unify and\nimprove the user experience of several experiments. The \"mapping project\" is a\nlarge 2 year, multi group project targeting the collection and processing\nexperiments which involve scanning an X-ray beam over a sample and building up\nan image of that sample, similar to the way that google maps bring together\nsmall pieces of information to produce a full map of the world. The project\nitself is divided into several work packages, ranging from teams of one to 5 or\n6 in size, with varying levels of time commitment to the project. This paper\naims to explore one of these work packages as a case study, highlighting the\nexperiences of the project team, the methodologies employed, their outcomes,\nand the lessons learnt from the experience.\n",
"title": "Scaling up the software development process, a case study highlighting the complexities of large team software development"
}
| null | null | null | null | true | null |
1259
| null |
Default
| null | null |
null |
{
"abstract": " Let ${\\bf M}=(M_1,\\ldots, M_k)$ be a tuple of real $d\\times d$ matrices.\nUnder certain irreducibility assumptions, we give checkable criteria for\ndeciding whether ${\\bf M}$ possesses the following property: there exist two\nconstants $\\lambda\\in {\\Bbb R}$ and $C>0$ such that for any $n\\in {\\Bbb N}$ and\nany $i_1, \\ldots, i_n \\in \\{1,\\ldots, k\\}$, either $M_{i_1} \\cdots M_{i_n}={\\bf\n0}$ or $C^{-1} e^{\\lambda n} \\leq \\| M_{i_1} \\cdots M_{i_n} \\| \\leq C\ne^{\\lambda n}$, where $\\|\\cdot\\|$ is a matrix norm. The proof is based on\nsymbolic dynamics and the thermodynamic formalism for matrix products. As\napplications, we are able to check the absolute continuity of a class of\noverlapping self-similar measures on ${\\Bbb R}$, the absolute continuity of\ncertain self-affine measures in ${\\Bbb R}^d$ and the dimensional regularity of\na class of sofic affine-invariant sets in the plane.\n",
"title": "Lyapunov exponents for products of matrices"
}
| null | null | null | null | true | null |
1260
| null |
Default
| null | null |
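Editorial illustration, not from the paper: a quick numerical probe of how the norm of random products of a given matrix tuple grows, i.e. an estimate of the exponent $\lambda$ appearing in the record above along one random realization. The matrices and sample length are arbitrary toy choices.

```python
import numpy as np

def product_growth_rate(mats, n=2000, rng=None):
    """Monte Carlo estimate of (1/n) log ||M_{i_1} ... M_{i_n}||
    for uniformly random index choices, with rescaling to avoid overflow."""
    rng = np.random.default_rng(rng)
    P = np.eye(mats[0].shape[0])
    log_norm = 0.0
    for _ in range(n):
        P = mats[rng.integers(len(mats))] @ P
        scale = np.linalg.norm(P)
        if scale == 0.0:                 # the product hit the zero matrix
            return -np.inf
        log_norm += np.log(scale)        # accumulate, then rescale
        P /= scale
    return log_norm / n

# toy tuple of 2x2 matrices (illustrative, not taken from the paper)
M1 = np.array([[0.9, 0.2], [0.0, 0.8]])
M2 = np.array([[1.1, 0.0], [0.3, 0.7]])
print("estimated exponent lambda:", product_growth_rate([M1, M2], rng=0))
```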
null |
{
"abstract": " The main goal of the paper is the full proof of a cardinal inequality for a\nspace with points $G_\\delta $, obtained with the help of a long version of the\nMenger game. This result, which improves a similar one of Scheepers and Tall,\nwas already established by the authors under the Continuum Hypothesis. The\npaper is completed by few remarks on a long version of the tightness game.\n",
"title": "A definitive improvement of a game-theoretic bound and the long tightness game"
}
| null | null |
[
"Mathematics"
] | null | true | null |
1261
| null |
Validated
| null | null |
null |
{
"abstract": " By analyzing energy-efficient management of data centers, this paper proposes\nand develops a class of interesting {\\it Group-Server Queues}, and establishes\ntwo representative group-server queues through loss networks and impatient\ncustomers, respectively. Furthermore, such two group-server queues are given\nmodel descriptions and necessary interpretation. Also, simple mathematical\ndiscussion is provided, and simulations are made to study the expected queue\nlengths, the expected sojourn times and the expected virtual service times. In\naddition, this paper also shows that this class of group-server queues are\noften encountered in many other practical areas including communication\nnetworks, manufacturing systems, transportation networks, financial networks\nand healthcare systems. Note that the group-server queues are always used to\ndesign effectively dynamic control mechanisms through regrouping and\nrecombining such many servers in a large-scale service system by means of, for\nexample, bilateral threshold control, and customers transfer to the buffer or\nserver groups. This leads to the large-scale service system that is divided\ninto several adaptive and self-organizing subsystems through scheduling of\nbatch customers and regrouping of service resources, which make the middle\nlayer of this service system more effectively managed and strengthened under a\ndynamic, real-time and even reward optimal framework. Based on this,\nperformance of such a large-scale service system may be improved greatly in\nterms of introducing and analyzing such group-server queues. Therefore, not\nonly analysis of group-server queues is regarded as a new interesting research\ndirection, but there also exists many theoretical challenges, basic\ndifficulties and open problems in the area of queueing networks.\n",
"title": "Group-Server Queues"
}
| null | null | null | null | true | null |
1262
| null |
Default
| null | null |
null |
{
"abstract": " We analyze a rich dataset including Subaru/SuprimeCam, HST/ACS and WFC3,\nKeck/DEIMOS, Chandra/ACIS-I, and JVLA/C and D array for the merging galaxy\ncluster ZwCl 0008.8+5215. With a joint Subaru/HST weak gravitational lensing\nanalysis, we identify two dominant subclusters and estimate the masses to be\nM$_{200}=\\text{5.7}^{+\\text{2.8}}_{-\\text{1.8}}\\times\\text{10}^{\\text{14}}\\,\\text{M}_{\\odot}$\nand 1.2$^{+\\text{1.4}}_{-\\text{0.6}}\\times10^{14}$ M$_{\\odot}$. We estimate the\nprojected separation between the two subclusters to be\n924$^{+\\text{243}}_{-\\text{206}}$ kpc. We perform a clustering analysis on\nconfirmed cluster member galaxies and estimate the line of sight velocity\ndifference between the two subclusters to be 92$\\pm$164 km s$^{-\\text{1}}$. We\nfurther motivate, discuss, and analyze the merger scenario through an analysis\nof the 42 ks of Chandra/ACIS-I and JVLA/C and D polarization data. The X-ray\nsurface brightness profile reveals a remnant core reminiscent of the Bullet\nCluster. The X-ray luminosity in the 0.5-7.0 keV band is\n1.7$\\pm$0.1$\\times$10$^{\\text{44}}$ erg s$^{-\\text{1}}$ and the X-ray\ntemperature is 4.90$\\pm$0.13 keV. The radio relics are polarized up to 40$\\%$.\nWe implement a Monte Carlo dynamical analysis and estimate the merger velocity\nat pericenter to be 1800$^{+\\text{400}}_{-\\text{300}}$ km s$^{-\\text{1}}$. ZwCl\n0008.8+5215 is a low-mass version of the Bullet Cluster and therefore may prove\nuseful in testing alternative models of dark matter. We do not find significant\noffsets between dark matter and galaxies, as the uncertainties are large with\nthe current lensing data. Furthermore, in the east, the BCG is offset from\nother luminous cluster galaxies, which poses a puzzle for defining dark matter\n-- galaxy offsets.\n",
"title": "MC$^2$: Multi-wavelength and dynamical analysis of the merging galaxy cluster ZwCl 0008.8+5215: An older and less massive Bullet Cluster"
}
| null | null | null | null | true | null |
1263
| null |
Default
| null | null |
null |
{
"abstract": " Adaptive designs for multi-armed clinical trials have become increasingly\npopular recently in many areas of medical research because of their potential\nto shorten development times and to increase patient response. However,\ndeveloping response-adaptive trial designs that offer patient benefit while\nensuring the resulting trial avoids bias and provides a statistically rigorous\ncomparison of the different treatments included is highly challenging. In this\npaper, the theory of Multi-Armed Bandit Problems is used to define a family of\nnear optimal adaptive designs in the context of a clinical trial with a\nnormally distributed endpoint with known variance. Through simulation studies\nbased on an ongoing trial as a motivation we report the operating\ncharacteristics (type I error, power, bias) and patient benefit of these\napproaches and compare them to traditional and existing alternative designs.\nThese results are then compared to those recently published in the context of\nBernoulli endpoints. Many limitations and advantages are similar in both cases\nbut there are also important differences, specially with respect to type I\nerror control. This paper proposes a simulation-based testing procedure to\ncorrect for the observed type I error inflation that bandit-based and adaptive\nrules can induce. Results presented extend recent work by considering a\nnormally distributed endpoint, a very common case in clinical practice yet\nmostly ignored in the response-adaptive theoretical literature, and illustrate\nthe potential advantages of using these methods in a rare disease context. We\nalso recommend a suitable modified implementation of the bandit-based adaptive\ndesigns for the case of common diseases.\n",
"title": "Bayesian adaptive bandit-based designs using the Gittins index for multi-armed trials with normally distributed endpoints"
}
| null | null | null | null | true | null |
1264
| null |
Default
| null | null |
null |
{
"abstract": " Mixed-Integer Second-Order Cone Programs (MISOCPs) form a nice class of\nmixed-inter convex programs, which can be solved very efficiently due to the\nrecent advances in optimization solvers. Our paper bridges the gap between\nmodeling a class of optimization problems and using MISOCP solvers. It is shown\nhow various performance metrics of M/G/1 queues can be molded by different\nMISOCPs. To motivate our method practically, it is first applied to a\nchallenging stochastic location problem with congestion, which is broadly used\nto design socially optimal service networks. Four different MISOCPs are\ndeveloped and compared on sets of benchmark test problems. The new formulations\nefficiently solve large-size test problems, which cannot be solved by the best\nexisting method. Then, the general applicability of our method is shown for\nsimilar optimization problems that use queue-theoretic performance measures to\naddress customer satisfaction and service quality.\n",
"title": "Convexification of Queueing Formulas by Mixed-Integer Second-Order Cone Programming: An Application to a Discrete Location Problem with Congestion"
}
| null | null | null | null | true | null |
1265
| null |
Default
| null | null |
null |
{
"abstract": " A Discriminative Deep Forest (DisDF) as a metric learning algorithm is\nproposed in the paper. It is based on the Deep Forest or gcForest proposed by\nZhou and Feng and can be viewed as a gcForest modification. The case of the\nfully supervised learning is studied when the class labels of individual\ntraining examples are known. The main idea underlying the algorithm is to\nassign weights to decision trees in random forest in order to reduce distances\nbetween objects from the same class and to increase them between objects from\ndifferent classes. The weights are training parameters. A specific objective\nfunction which combines Euclidean and Manhattan distances and simplifies the\noptimization problem for training the DisDF is proposed. The numerical\nexperiments illustrate the proposed distance metric algorithm.\n",
"title": "Discriminative Metric Learning with Deep Forest"
}
| null | null | null | null | true | null |
1266
| null |
Default
| null | null |
null |
{
"abstract": " A simple robust genuinely multidimensional convective pressure split (CPS) ,\ncontact preserving, shock stable Riemann solver (GM-K-CUSP-X) for Euler\nequations of gas dynamics is developed. The convective and pressure components\nof the Euler system are separated following the Toro-Vazquez type PDE flux\nsplitting [Toro et al, 2012]. Upwind discretization of these components are\nachieved using the framework of Mandal et al [Mandal et al, 2015]. The\nrobustness of the scheme is studied on a few two dimensional test problems. The\nresults demonstrate the efficacy of the scheme over the corresponding\nconventional two state version of the solver. Results from two classic strong\nshock test cases associated with the infamous Carbuncle phenomenon, indicate\nthat the present solver is completely free of any such numerical instabilities\nalbeit possessing contact resolution abilities.Such a finding emphasizes the\npre-existing notion about the positive effects that multidimensional flow\nmodelling may have towards curing of shock instabilities.\n",
"title": "An accurate and robust genuinely multidimensional Riemann solver for Euler equations based on TV flux splitting"
}
| null | null | null | null | true | null |
1267
| null |
Default
| null | null |
null |
{
"abstract": " We prove the optimal strong convergence rate of a fully discrete scheme,\nbased on a splitting approach, for a stochastic nonlinear Schrödinger (NLS)\nequation. The main novelty of our method lies on the uniform a priori estimate\nand exponential integrability of a sequence of splitting processes which are\nused to approximate the solution of the stochastic NLS equation. We show that\nthe splitting processes converge to the solution with strong order $1/2$. Then\nwe use the Crank--Nicolson scheme to temporally discretize the splitting\nprocess and get the temporal splitting scheme which also possesses strong order\n$1/2$. To obtain a full discretization, we apply this splitting Crank--Nicolson\nscheme to the spatially discrete equation which is achieved through the\nspectral Galerkin approximation. Furthermore, we establish the convergence of\nthis fully discrete scheme with optimal strong convergence rate\n$\\mathcal{O}(N^{-2}+\\tau^\\frac12)$, where $N$ denotes the dimension of the\napproximate space and $\\tau$ denotes the time step size. To the best of our\nknowledge, this is the first result about strong convergence rates of\ntemporally numerical approximations and fully discrete schemes for stochastic\nNLS equations, or even for stochastic partial differential equations (SPDEs)\nwith non-monotone coefficients. Numerical experiments verify our theoretical\nresult.\n",
"title": "Strong Convergence Rate of Splitting Schemes for Stochastic Nonlinear Schrödinger Equations"
}
| null | null | null | null | true | null |
1268
| null |
Default
| null | null |
null |
{
"abstract": " A Boolean network is a finite state discrete time dynamical system. At each\nstep, each variable takes a value from a binary set. The value update rule for\neach variable is a local function which depends only on a selected subset of\nvariables. Boolean networks have been used in modeling gene regulatory\nnetworks. We focus in this paper on a special class of Boolean networks, namely\nthe conjunctive Boolean networks (CBNs), whose value update rule is comprised\nof only logic AND operations. It is known that any trajectory of a Boolean\nnetwork will enter a periodic orbit. Periodic orbits of a CBN have been\ncompletely understood. In this paper, we investigate the orbit-controllability\nand state-controllability of a CBN: We ask the question of how one can steer a\nCBN to enter any periodic orbit or to reach any final state, from any initial\nstate. We establish necessary and sufficient conditions for a CBN to be\norbit-controllable and state-controllable. Furthermore, explicit control laws\nare presented along the analysis.\n",
"title": "Controllability of Conjunctive Boolean Networks with Application to Gene Regulation"
}
| null | null | null | null | true | null |
1269
| null |
Default
| null | null |
null |
{
"abstract": " Gamma-ray and fast-neutron imaging was performed with a novel liquid xenon\n(LXe) scintillation detector read out by a Gaseous Photomultiplier (GPM). The\n100 mm diameter detector prototype comprised a capillary-filled LXe\nconverter/scintillator, coupled to a triple-THGEM imaging-GPM, with its first\nelectrode coated by a CsI UV-photocathode, operated in Ne/5%CH4 cryogenic\ntemperatures. Radiation localization in 2D was derived from\nscintillation-induced photoelectron avalanches, measured on the GPM's segmented\nanode. The localization properties of Co-60 gamma-rays and a mixed\nfast-neutron/gamma-ray field from an AmBe neutron source were derived from\nirradiation of a Pb edge absorber. Spatial resolutions of 12+/-2 mm and 10+/-2\nmm (FWHM) were reached with Co-60 and AmBe sources, respectively. The\nexperimental results are in good agreement with GEANT4 simulations. The\ncalculated ultimate expected resolutions for our application-relevant 4.4 and\n15.1 MeV gamma-rays and 1-15 MeV neutrons are 2-4 mm and ~2 mm (FWHM),\nrespectively. These results indicate the potential applicability of the new\ndetector concept to Fast-Neutron Resonance Radiography (FNRR) and\nDual-Discrete-Energy Gamma Radiography (DDEGR) of large objects.\n",
"title": "Fast-neutron and gamma-ray imaging with a capillary liquid xenon converter coupled to a gaseous photomultiplier"
}
| null | null | null | null | true | null |
1270
| null |
Default
| null | null |
null |
{
"abstract": " The task board is an essential artifact in many agile development approaches.\nIt provides a good overview of the project status. Teams often customize their\ntask boards according to the team members' needs. They modify the structure of\nboards, define colored codings for different purposes, and introduce different\ncard sizes. Although the customizations are intended to improve the task\nboard's usability and effectiveness, they may also complicate its comprehension\nand use. The increased effort impedes the work of both the team and team\nexternals. Hence, task board customization is in conflict with the agile\npractice of fast and easy overview for everyone. In an eye tracking study with\n30 participants, we compared an original task board design with three\ncustomized ones to investigate which design shortened the required time to\nidentify a particular story card. Our findings yield that only the customized\ntask board design with modified structures reduces the required time. The\noriginal task board design is more beneficial than individual colored codings\nand changed card sizes. According to our findings, agile teams should rethink\ntheir current task board design. They may be better served by focusing on the\noriginal task board design and by applying only carefully selected adjustments.\nIn case of customization, a task board's structure should be adjusted since\nthis is the only beneficial kind of customization, that additionally complies\nmore precisely with the concept of fast and easy project overview.\n",
"title": "Is Task Board Customization Beneficial? - An Eye Tracking Study"
}
| null | null | null | null | true | null |
1271
| null |
Default
| null | null |
null |
{
"abstract": " Hydrogeologic models are commonly over-smoothed relative to reality, owing to\nthe difficulty of obtaining accurate high-resolution information about the\nsubsurface. When used in an inversion context, such models may introduce\nsystematic biases which cannot be encapsulated by an unbiased \"observation\nnoise\" term of the type assumed by standard regularization theory and typical\nBayesian formulations. Despite its importance, model error is difficult to\nencapsulate systematically and is often neglected. Here, model error is\nconsidered for a hydrogeologically important class of inverse problems that\nincludes interpretation of hydraulic transients and contaminant source history\ninference: reconstruction of a time series that has been convolved against a\ntransfer function (i.e., impulse response) that is only approximately known.\nUsing established harmonic theory along with two results established here\nregarding triangular Toeplitz matrices, upper and lower error bounds are\nderived for the effect of systematic model error on time series recovery for\nboth well-determined and over-determined inverse problems. A Monte Carlo study\nof a realistic hydraulic reconstruction problem is presented, and the lower\nerror bound is seen informative about expected behavior. A possible diagnostic\ncriterion for blind transfer function characterization is also uncovered.\n",
"title": "Characterizing the impact of model error in hydrogeologic time series recovery inverse problems"
}
| null | null |
[
"Mathematics"
] | null | true | null |
1272
| null |
Validated
| null | null |
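Editorial illustration of the forward and inverse problem structure in the record above (not the paper's hydrogeologic kernels or error bounds): causal convolution written as a lower-triangular Toeplitz matrix, with recovery attempted using a slightly mis-specified transfer function. Both impulse-response shapes and the source history are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import toeplitz

t = np.arange(200)

# True impulse response (transfer function) and a mis-specified model of it.
h_true = np.exp(-t / 15.0) / 15.0
h_model = np.exp(-t / 18.0) / 18.0

def conv_matrix(h):
    """Lower-triangular Toeplitz matrix implementing causal convolution with h."""
    return toeplitz(h, np.zeros_like(h))

source = np.sin(2 * np.pi * t / 60.0) ** 2          # "true" source history
data = conv_matrix(h_true) @ source                 # noise-free observations

# Naive least-squares recovery with the approximate transfer function:
recovered, *_ = np.linalg.lstsq(conv_matrix(h_model), data, rcond=None)
rel_err = np.linalg.norm(recovered - source) / np.linalg.norm(source)
print("relative recovery error due to model error:", rel_err)
```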
null |
{
"abstract": " Suszko's problem is the problem of finding the minimal number of truth values\nneeded to semantically characterize a syntactic consequence relation. Suszko\nproved that every Tarskian consequence relation can be characterized using only\ntwo truth values. Malinowski showed that this number can equal three if some of\nTarski's structural constraints are relaxed. By so doing, Malinowski introduced\na case of so-called mixed consequence, allowing the notion of a designated\nvalue to vary between the premises and the conclusions of an argument. In this\npaper we give a more systematic perspective on Suszko's problem and on mixed\nconsequence. First, we prove general representation theorems relating\nstructural properties of a consequence relation to their semantic\ninterpretation, uncovering the semantic counterpart of substitution-invariance,\nand establishing that (intersective) mixed consequence is fundamentally the\nsemantic counterpart of the structural property of monotonicity. We use those\nto derive maximum-rank results proved recently in a different setting by French\nand Ripley, as well as by Blasio, Marcos and Wansing, for logics with various\nstructural properties (reflexivity, transitivity, none, or both). We strengthen\nthese results into exact rank results for non-permeable logics (roughly, those\nwhich distinguish the role of premises and conclusions). We discuss the\nunderlying notion of rank, and the associated reduction proposed independently\nby Scott and Suszko. As emphasized by Suszko, that reduction fails to preserve\ncompositionality in general, meaning that the resulting semantics is no longer\ntruth-functional. We propose a modification of that notion of reduction,\nallowing us to prove that over compact logics with what we call regular\nconnectives, rank results are maintained even if we request the preservation of\ntruth-functionality and additional semantic properties.\n",
"title": "Suszko's Problem: Mixed Consequence and Compositionality"
}
| null | null | null | null | true | null |
1273
| null |
Default
| null | null |
null |
{
"abstract": " In this paper we introduce a new classification algorithm called Optimization\nof Distributions Differences (ODD). The algorithm aims to find a transformation\nfrom the feature space to a new space where the instances in the same class are\nas close as possible to one another while the gravity centers of these classes\nare as far as possible from one another. This aim is formulated as a\nmultiobjective optimization problem that is solved by a hybrid of an\nevolutionary strategy and the Quasi-Newton method. The choice of the\ntransformation function is flexible and could be any continuous space function.\nWe experiment with a linear and a non-linear transformation in this paper. We\nshow that the algorithm can outperform 6 other state-of-the-art classification\nmethods, namely naive Bayes, support vector machines, linear discriminant\nanalysis, multi-layer perceptrons, decision trees, and k-nearest neighbors, in\n12 standard classification datasets. Our results show that the method is less\nsensitive to the imbalanced number of instances comparing to these methods. We\nalso show that ODD maintains its performance better than other classification\nmethods in these datasets, hence, offers a better generalization ability.\n",
"title": "Optimization of distributions differences for classification"
}
| null | null | null | null | true | null |
1274
| null |
Default
| null | null |
null |
{
"abstract": " We study the Galois descent of semi-affinoid non-archimedean analytic spaces.\nThese are the non-archimedean analytic spaces which admit an affine special\nformal scheme as model over a complete discrete valuation ring, such as for\nexample open or closed polydiscs or polyannuli. Using Weil restrictions and\nGalois fixed loci for semi-affinoid spaces and their formal models, we describe\na formal model of a $K$-analytic space $X$, provided that $X\\otimes_KL$ is\nsemi-affinoid for some finite tamely ramified extension $L$ of $K$. As an\napplication, we study the forms of analytic annuli that are trivialized by a\nwide class of Galois extensions that includes totally tamely ramified\nextensions. In order to do so, we first establish a Weierstrass preparation\nresult for analytic functions on annuli, and use it to linearize finite order\nautomorphisms of annuli. Finally, we explain how from these results one can\ndeduce a non-archimedean analytic proof of the existence of resolutions of\nsingularities of surfaces in characteristic zero.\n",
"title": "Galois descent of semi-affinoid spaces"
}
| null | null | null | null | true | null |
1275
| null |
Default
| null | null |
null |
{
"abstract": " We have synthesized a new layered oxychalcogenide La2O2Bi3AgS6. From\nsynchrotron X-ray diffraction and Rietveld refinement, the crystal structure of\nLa2O2Bi3AgS6 was refined using a model of the P4/nmm space group with a =\n4.0644(1) {\\AA} and c = 19.412(1) {\\AA}, which is similar to the related\ncompound LaOBiPbS3, while the interlayer bonds (M2-S1 bonds) are apparently\nshorter in La2O2Bi3AgS6. The tunneling electron microscopy (TEM) image\nconfirmed the lattice constant derived from Rietveld refinement (c ~ 20 {\\AA}).\nThe electrical resistivity and Seebeck coefficient suggested that the\nelectronic states of La2O2Bi3AgS6 are more metallic than those of LaOBiS2 and\nLaOBiPbS3. The insertion of a rock-salt-type chalcogenide into the van der\nWaals gap of BiS2-based layered compounds, such as LaOBiS2, will be a useful\nstrategy for designing new layered functional materials in the layered\nchalcogenide family.\n",
"title": "Synthesis, Crystal Structure, and Physical Properties of New Layered Oxychalcogenide La2O2Bi3AgS6"
}
| null | null | null | null | true | null |
1276
| null |
Default
| null | null |
null |
{
"abstract": " Is perfect matching in NC? That is, is there a deterministic fast parallel\nalgorithm for it? This has been an outstanding open question in theoretical\ncomputer science for over three decades, ever since the discovery of RNC\nmatching algorithms. Within this question, the case of planar graphs has\nremained an enigma: On the one hand, counting the number of perfect matchings\nis far harder than finding one (the former is #P-complete and the latter is in\nP), and on the other, for planar graphs, counting has long been known to be in\nNC whereas finding one has resisted a solution.\nIn this paper, we give an NC algorithm for finding a perfect matching in a\nplanar graph. Our algorithm uses the above-stated fact about counting matchings\nin a crucial way. Our main new idea is an NC algorithm for finding a face of\nthe perfect matching polytope at which $\\Omega(n)$ new conditions, involving\nconstraints of the polytope, are simultaneously satisfied. Several other ideas\nare also needed, such as finding a point in the interior of the minimum weight\nface of this polytope and finding a balanced tight odd set in NC.\n",
"title": "Planar Graph Perfect Matching is in NC"
}
| null | null |
[
"Computer Science"
] | null | true | null |
1277
| null |
Validated
| null | null |
null |
{
"abstract": " In this paper, we investigate the common scenario where every candidate item\nfor recommendation is characterized by a maximum capacity, i.e., number of\nseats in a Point-of-Interest (POI) or size of an item's inventory. Despite the\nprevalence of the task of recommending items under capacity constraints in a\nvariety of settings, to the best of our knowledge, none of the known\nrecommender methods is designed to respect capacity constraints. To close this\ngap, we extend three state-of-the art latent factor recommendation approaches:\nprobabilistic matrix factorization (PMF), geographical matrix factorization\n(GeoMF), and bayesian personalized ranking (BPR), to optimize for both\nrecommendation accuracy and expected item usage that respects the capacity\nconstraints. We introduce the useful concepts of user propensity to listen and\nitem capacity. Our experimental results in real-world datasets, both for the\ndomain of item recommendation and POI recommendation, highlight the benefit of\nour method for the setting of recommendation under capacity constraints.\n",
"title": "Recommendation under Capacity Constraints"
}
| null | null | null | null | true | null |
1278
| null |
Default
| null | null |
null |
{
"abstract": " Using deep reinforcement learning, we train control policies for autonomous\nvehicles leading a platoon of vehicles onto a roundabout. Using Flow, a library\nfor deep reinforcement learning in micro-simulators, we train two policies, one\npolicy with noise injected into the state and action space and one without any\ninjected noise. In simulation, the autonomous vehicle learns an emergent\nmetering behavior for both policies in which it slows to allow for smoother\nmerging. We then directly transfer this policy without any tuning to the\nUniversity of Delaware Scaled Smart City (UDSSC), a 1:25 scale testbed for\nconnected and automated vehicles. We characterize the performance of both\npolicies on the scaled city. We show that the noise-free policy winds up\ncrashing and only occasionally metering. However, the noise-injected policy\nconsistently performs the metering behavior and remains collision-free,\nsuggesting that the noise helps with the zero-shot policy transfer.\nAdditionally, the transferred, noise-injected policy leads to a 5% reduction of\naverage travel time and a reduction of 22% in maximum travel time in the UDSSC.\nVideos of the controllers can be found at\nthis https URL.\n",
"title": "Simulation to scaled city: zero-shot policy transfer for traffic control via autonomous vehicles"
}
| null | null | null | null | true | null |
1279
| null |
Default
| null | null |
null |
{
"abstract": " We study the following generalization of singularity categories. Let X be a\nquasi-projective Gorenstein scheme with isolated singularities and A a\nnon-commutative resolution of singularities of X in the sense of Van den Bergh.\nWe introduce the relative singularity category as the Verdier quotient of the\nbounded derived category of coherent sheaves on A modulo the category of\nperfect complexes on X. We view it as a measure for the difference between X\nand A. The main results of this thesis are the following.\n(i) We prove an analogue of Orlov's localization result in our setup. If X\nhas isolated singularities, then this reduces the study of the relative\nsingularity categories to the affine case.\n(ii) We prove Hom-finiteness and idempotent completeness of the relative\nsingularity categories in the complete local situation and determine its\nGrothendieck group.\n(iii) We give a complete and explicit description of the relative singularity\ncategories when X has only nodal singularities and the resolution is given by a\nsheaf of Auslander algebras.\n(iv) We study relations between relative singularity categories and classical\nsingularity categories. For a simple hypersurface singularity and its Auslander\nresolution, we show that these categories determine each other.\n(v) The developed technique leads to the following `purely commutative'\napplication: a description of Iyama & Wemyss triangulated category for rational\nsurface singularities in terms of the singularity category of the rational\ndouble point resolution.\n(vi) We give a description of singularity categories of gentle algebras.\n",
"title": "Relative Singularity Categories"
}
| null | null | null | null | true | null |
1280
| null |
Default
| null | null |
null |
{
"abstract": " It is undeniable that the worldwide computer industry's center is the US,\nspecifically in Silicon Valley. Much of the reason for the success of Silicon\nValley had to do with Moore's Law: the observation by Intel co-founder Gordon\nMoore that the number of transistors on a microchip doubled at a rate of\napproximately every two years. According to the International Technology\nRoadmap for Semiconductors, Moore's Law will end in 2021. How can we rethink\ncomputing technology to restart the historic explosive performance growth?\nSince 2012, the IEEE Rebooting Computing Initiative (IEEE RCI) has been working\nwith industry and the US government to find new computing approaches to answer\nthis question. In parallel, the CCC has held a number of workshops addressing\nsimilar questions. This whitepaper summarizes some of the IEEE RCI and CCC\nfindings. The challenge for the US is to lead this new era of computing. Our\ninternational competitors are not sitting still: China has invested\nsignificantly in a variety of approaches such as neuromorphic computing, chip\nfabrication facilities, computer architecture, and high-performance simulation\nand data analytics computing, for example. We must act now, otherwise, the\ncenter of the computer industry will move from Silicon Valley and likely move\noff shore entirely.\n",
"title": "Challenges to Keeping the Computer Industry Centered in the US"
}
| null | null | null | null | true | null |
1281
| null |
Default
| null | null |
null |
{
"abstract": " Using contiguous relations we construct an infinite number of continued\nfraction expansions for ratios of generalized hypergeometric series 3F2(1). We\nestablish exact error term estimates for their approximants and prove their\nrapid convergences. To do so we develop a discrete version of Laplace's method\nfor hypergeometric series in addition to the use of ordinary (continuous)\nLaplace's method for Euler's hypergeometric integrals.\n",
"title": "Contiguous Relations, Laplace's Methods and Continued Fractions for 3F2(1)"
}
| null | null |
[
"Mathematics"
] | null | true | null |
1282
| null |
Validated
| null | null |
null |
{
"abstract": " This paper gives upper and lower bounds on the minimum error probability of\nBayesian $M$-ary hypothesis testing in terms of the Arimoto-Rényi conditional\nentropy of an arbitrary order $\\alpha$. The improved tightness of these bounds\nover their specialized versions with the Shannon conditional entropy\n($\\alpha=1$) is demonstrated. In particular, in the case where $M$ is finite,\nwe show how to generalize Fano's inequality under both the conventional and\nlist-decision settings. As a counterpart to the generalized Fano's inequality,\nallowing $M$ to be infinite, a lower bound on the Arimoto-Rényi conditional\nentropy is derived as a function of the minimum error probability. Explicit\nupper and lower bounds on the minimum error probability are obtained as a\nfunction of the Arimoto-Rényi conditional entropy for both positive and\nnegative $\\alpha$. Furthermore, we give upper bounds on the minimum error\nprobability as functions of the Rényi divergence. In the setup of discrete\nmemoryless channels, we analyze the exponentially vanishing decay of the\nArimoto-Rényi conditional entropy of the transmitted codeword given the\nchannel output when averaged over a random coding ensemble.\n",
"title": "Arimoto-Rényi Conditional Entropy and Bayesian $M$-ary Hypothesis Testing"
}
| null | null | null | null | true | null |
1283
| null |
Default
| null | null |
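Editorial illustration of the quantities discussed in the record above (not the paper's bounds): computing the Arimoto-Rényi conditional entropy and the minimum MAP error probability for a toy joint distribution. The standard Arimoto form is assumed, base-2 logarithms are an arbitrary choice, and the joint pmf is made up.

```python
import numpy as np

def arimoto_conditional_entropy(p_xy, alpha, base=2.0):
    """Arimoto-Renyi conditional entropy H_alpha(X|Y) for a joint pmf p_xy
    (rows: x, columns: y), with alpha > 0 and alpha != 1."""
    p_y = p_xy.sum(axis=0)
    cond = p_xy / p_y                                   # columns are P(x | y)
    norms = (cond ** alpha).sum(axis=0) ** (1.0 / alpha)
    return (alpha / (1.0 - alpha)) * np.log((p_y * norms).sum()) / np.log(base)

def min_error_probability(p_xy):
    """Minimum Bayesian error probability of guessing X from Y (MAP rule)."""
    return 1.0 - p_xy.max(axis=0).sum()

p_xy = np.array([[0.30, 0.10, 0.05],
                 [0.05, 0.25, 0.05],
                 [0.05, 0.05, 0.10]])
for alpha in (0.5, 2.0, 10.0):
    print(alpha, arimoto_conditional_entropy(p_xy, alpha),
          min_error_probability(p_xy))
```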
null |
{
"abstract": " With Bell's inequalities one has a formal expression to show how essentially\nall local theories of natural phenomena that are formulated within the\nframework of realism may be tested using a simple experimental arrangement. For\nthe case of entangled pairs of spin-1/2 particles we propose an alternative\nmeasurement setup which is consistent to the necessary assumptions\ncorresponding to the derivation of the Bell inequalities. We find that the Bell\ninequalities are never violated with respect to our suggested measurement\nprocess.\n",
"title": "A note on the violation of Bell's inequality"
}
| null | null | null | null | true | null |
1284
| null |
Default
| null | null |
null |
{
"abstract": " Improving endurance is crucial for extending the spatial and temporal\noperation range of autonomous underwater vehicles (AUVs). Considering the\nhardware constraints and the performance requirements, an intelligent energy\nmanagement system is required to extend the operation range of AUVs. This paper\npresents a novel model predictive control (MPC) framework for energy-optimal\npoint-to-point motion control of an AUV. In this scheme, the energy management\nproblem of an AUV is reformulated as a surge motion optimization problem in two\nstages. First, a system-level energy minimization problem is solved by managing\nthe trade-off between the energies required for overcoming the positive\nbuoyancy and surge drag force in static optimization. Next, an MPC with a\nspecial cost function formulation is proposed to deal with transients and\nsystem dynamics. A switching logic for handling the transition between the\nstatic and dynamic stages is incorporated to reduce the computational efforts.\nSimulation results show that the proposed method is able to achieve\nnear-optimal energy consumption with considerable lower computational\ncomplexity.\n",
"title": "Real-Time Model Predictive Control for Energy Management in Autonomous Underwater Vehicle"
}
| null | null | null | null | true | null |
1285
| null |
Default
| null | null |
null |
{
"abstract": " The Surjective H-Colouring problem is to test if a given graph allows a\nvertex-surjective homomorphism to a fixed graph H. The complexity of this\nproblem has been well studied for undirected (partially) reflexive graphs. We\nintroduce endo-triviality, the property of a structure that all of its\nendomorphisms that do not have range of size 1 are automorphisms, as a means to\nobtain complexity-theoretic classifications of Surjective H-Colouring in the\ncase of reflexive digraphs.\nChen [2014] proved, in the setting of constraint satisfaction problems, that\nSurjective H-Colouring is NP-complete if H has the property that all of its\npolymorphisms are essentially unary. We give the first concrete application of\nhis result by showing that every endo-trivial reflexive digraph H has this\nproperty. We then use the concept of endo-triviality to prove, as our main\nresult, a dichotomy for Surjective H-Colouring when H is a reflexive\ntournament: if H is transitive, then Surjective H-Colouring is in NL, otherwise\nit is NP-complete.\nBy combining this result with some known and new results we obtain a\ncomplexity classification for Surjective H-Colouring when H is a partially\nreflexive digraph of size at most 3.\n",
"title": "Surjective H-Colouring over Reflexive Digraphs"
}
| null | null | null | null | true | null |
1286
| null |
Default
| null | null |
null |
{
"abstract": " This paper proposes a modal typing system that enables us to handle\nself-referential formulae, including ones with negative self-references, which\non one hand, would introduce a logical contradiction, namely Russell's paradox,\nin the conventional setting, while on the other hand, are necessary to capture\na certain class of programs such as fixed-point combinators and objects with\nso-called binary methods in object-oriented programming. The proposed system\nprovides a basis for axiomatic semantics of such a wider range of programs and\na new framework for natural construction of recursive programs in the\nproofs-as-programs paradigm.\n",
"title": "A modal typing system for self-referential programs and specifications"
}
| null | null | null | null | true | null |
1287
| null |
Default
| null | null |
null |
{
"abstract": " Recommender System research suffers currently from a disconnect between the\nsize of academic data sets and the scale of industrial production systems. In\norder to bridge that gap we propose to generate more massive user/item\ninteraction data sets by expanding pre-existing public data sets. User/item\nincidence matrices record interactions between users and items on a given\nplatform as a large sparse matrix whose rows correspond to users and whose\ncolumns correspond to items. Our technique expands such matrices to larger\nnumbers of rows (users), columns (items) and non zero values (interactions)\nwhile preserving key higher order statistical properties. We adapt the\nKronecker Graph Theory to user/item incidence matrices and show that the\ncorresponding fractal expansions preserve the fat-tailed distributions of user\nengagements, item popularity and singular value spectra of user/item\ninteraction matrices. Preserving such properties is key to building large\nrealistic synthetic data sets which in turn can be employed reliably to\nbenchmark Recommender Systems and the systems employed to train them. We\nprovide algorithms to produce such expansions and apply them to the MovieLens\n20 million data set comprising 20 million ratings of 27K movies by 138K users.\nThe resulting expanded data set has 10 billion ratings, 2 million items and\n864K users in its smaller version and can be scaled up or down. A larger\nversion features 655 billion ratings, 7 million items and 17 million users.\n",
"title": "Scalable Realistic Recommendation Datasets through Fractal Expansions"
}
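A minimal sketch of the Kronecker-style expansion idea described above, assuming a small dense "generator" matrix of probabilities and Bernoulli sampling to keep the expanded matrix binary and sparse. The function name and generator values are hypothetical; this is not the authors' exact algorithm.

```python
import numpy as np
from scipy import sparse

def fractal_expand(incidence, generator):
    """Kronecker-expand a sparse user/item incidence matrix.

    `incidence`: (num_users x num_items) sparse binary interaction matrix.
    `generator`: small dense matrix with entries in [0, 1]; its Kronecker product
    with the incidence matrix gives interaction *probabilities* for the expanded
    matrix, which are then sampled to keep the result binary and sparse."""
    probs = sparse.kron(incidence, sparse.csr_matrix(generator), format="csr")
    rng = np.random.default_rng(0)
    sampled = probs.copy()
    sampled.data = (rng.random(len(probs.data)) < probs.data).astype(np.int8)
    sampled.eliminate_zeros()
    return sampled

# 3 users x 4 items, expanded by a 2x2 generator -> 6 users x 8 items.
R = sparse.random(3, 4, density=0.5, format="csr", random_state=0)
R.data[:] = 1.0
G = np.array([[0.9, 0.5],
              [0.5, 0.1]])
R_big = fractal_expand(R, G)
print(R_big.shape, R_big.nnz)
```

Each original interaction is replaced by a small block whose density follows the generator, which is what lets the expansion roughly preserve row/column degree distributions.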
| null | null |
[
"Computer Science",
"Statistics"
] | null | true | null |
1288
| null |
Validated
| null | null |
null |
{
"abstract": " This paper considers the problem of implementing a previously proposed\ndistributed direct coupling quantum observer for a closed linear quantum\nsystem. By modifying the form of the previously proposed observer, the paper\nproposes a possible experimental implementation of the observer plant system\nusing a non-degenerate parametric amplifier and a chain of optical cavities\nwhich are coupled together via optical interconnections. It is shown that the\ndistributed observer converges to a consensus in a time averaged sense in which\nan output of each element of the observer estimates the specified output of the\nquantum plant.\n",
"title": "Implementation of a Distributed Coherent Quantum Observer"
}
| null | null | null | null | true | null |
1289
| null |
Default
| null | null |
null |
{
"abstract": " Stacking is a general approach for combining multiple models toward greater\npredictive accuracy. It has found various application across different domains,\nensuing from its meta-learning nature. Our understanding, nevertheless, on how\nand why stacking works remains intuitive and lacking in theoretical insight. In\nthis paper, we use the stability of learning algorithms as an elemental\nanalysis framework suitable for addressing the issue. To this end, we analyze\nthe hypothesis stability of stacking, bag-stacking, and dag-stacking and\nestablish a connection between bag-stacking and weighted bagging. We show that\nthe hypothesis stability of stacking is a product of the hypothesis stability\nof each of the base models and the combiner. Moreover, in bag-stacking and\ndag-stacking, the hypothesis stability depends on the sampling strategy used to\ngenerate the training set replicates. Our findings suggest that 1) subsampling\nand bootstrap sampling improve the stability of stacking, and 2) stacking\nimproves the stability of both subbagging and bagging.\n",
"title": "Stacking and stability"
}
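For readers who want to experiment with the setups named in the abstract, here is a small scikit-learn sketch contrasting plain stacking with a "bag-stacking"-style variant in which a base model is bagged before stacking. The particular estimators, data set, and hyperparameters are arbitrary illustration choices, not the authors' experimental protocol.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier, BaggingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, random_state=0)

# Plain stacking: base models plus a combiner (meta-learner).
stack = StackingClassifier(
    estimators=[("tree", DecisionTreeClassifier(random_state=0)),
                ("logit", LogisticRegression(max_iter=1000))],
    final_estimator=LogisticRegression(max_iter=1000),
)

# Bag-stacking-style variant: bag the unstable base model before stacking,
# which (per the abstract) should improve hypothesis stability.
bag_stack = StackingClassifier(
    estimators=[("bag_tree", BaggingClassifier(DecisionTreeClassifier(random_state=0),
                                               n_estimators=20, random_state=0)),
                ("logit", LogisticRegression(max_iter=1000))],
    final_estimator=LogisticRegression(max_iter=1000),
)

for name, model in [("stacking", stack), ("bag-stacking", bag_stack)]:
    print(name, cross_val_score(model, X, y, cv=5).mean())
```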
| null | null |
[
"Computer Science",
"Statistics"
] | null | true | null |
1290
| null |
Validated
| null | null |
null |
{
"abstract": " The two-dimensional discrete wavelet transform has a huge number of\napplications in image-processing techniques. Until now, several papers compared\nthe performance of such transform on graphics processing units (GPUs). However,\nall of them only dealt with lifting and convolution computation schemes. In\nthis paper, we show that corresponding horizontal and vertical lifting parts of\nthe lifting scheme can be merged into non-separable lifting units, which halves\nthe number of steps. We also discuss an optimization strategy leading to a\nreduction in the number of arithmetic operations. The schemes were assessed\nusing the OpenCL and pixel shaders. The proposed non-separable lifting scheme\noutperforms the existing schemes in many cases, irrespective of its higher\ncomplexity.\n",
"title": "Accelerating Discrete Wavelet Transforms on GPUs"
}
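The baseline the abstract improves on is the separable lifting scheme, which applies 1-D lifting horizontally and then vertically. The sketch below implements that baseline for the CDF 5/3 wavelet in NumPy, with a simplifying periodic boundary assumption; the paper's contribution, merging the horizontal and vertical lifting parts into non-separable 2-D units, is not reproduced here.

```python
import numpy as np

def cdf53_lifting_1d(x):
    """One level of CDF 5/3 (LeGall) lifting along the last axis.
    Assumes even length and periodic boundary handling (a simplification).
    Returns (low-pass, high-pass) subbands."""
    even, odd = x[..., 0::2].astype(float), x[..., 1::2].astype(float)
    d = odd - 0.5 * (even + np.roll(even, -1, axis=-1))   # predict step
    s = even + 0.25 * (d + np.roll(d, 1, axis=-1))        # update step
    return s, d

def dwt2_separable(img):
    """Separable 2D DWT baseline: lift rows, then lift columns (four passes).
    The non-separable scheme in the paper merges the horizontal and vertical
    lifting parts into single 2D units, roughly halving the number of passes."""
    lo, hi = cdf53_lifting_1d(img)                    # horizontal
    ll, lh = cdf53_lifting_1d(lo.swapaxes(-1, -2))    # vertical on the low band
    hl, hh = cdf53_lifting_1d(hi.swapaxes(-1, -2))    # vertical on the high band
    return (ll.swapaxes(-1, -2), lh.swapaxes(-1, -2),
            hl.swapaxes(-1, -2), hh.swapaxes(-1, -2))

img = np.arange(64, dtype=float).reshape(8, 8)
ll, lh, hl, hh = dwt2_separable(img)
print(ll.shape, lh.shape, hl.shape, hh.shape)  # four 4x4 subbands
```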
| null | null | null | null | true | null |
1291
| null |
Default
| null | null |
null |
{
"abstract": " We apply the Min-Sum message-passing protocol to solve the consensus problem\nin distributed optimization. We show that while the ordinary Min-Sum algorithm\ndoes not converge, a modified version of it known as Splitting yields\nconvergence to the problem solution. We prove that a proper choice of the\ntuning parameters allows Min-Sum Splitting to yield subdiffusive accelerated\nconvergence rates, matching the rates obtained by shift-register methods. The\nacceleration scheme embodied by Min-Sum Splitting for the consensus problem\nbears similarities with lifted Markov chains techniques and with multi-step\nfirst order methods in convex optimization.\n",
"title": "Accelerated Consensus via Min-Sum Splitting"
}
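For context, the consensus problem asks networked agents to agree on the average of their initial values. The sketch below shows plain diffusive consensus and a two-tap "shift-register" accelerated variant, whose rates the abstract says Min-Sum Splitting matches; it does not implement the Min-Sum Splitting protocol itself, and the ring topology, weights, and gamma value are illustrative assumptions.

```python
import numpy as np

def diffusive_consensus(W, x0, iters):
    """Plain consensus: x_{t+1} = W x_t with a doubly stochastic weight matrix W."""
    x = x0.copy()
    for _ in range(iters):
        x = W @ x
    return x

def shift_register_consensus(W, x0, iters, gamma):
    """Two-tap acceleration: x_{t+1} = gamma*W x_t + (1-gamma)*x_{t-1}.
    This is the shift-register baseline family, not Min-Sum Splitting."""
    x_prev, x = x0.copy(), W @ x0
    for _ in range(iters - 1):
        x, x_prev = gamma * (W @ x) + (1 - gamma) * x_prev, x
    return x

# Ring of n agents with simple symmetric weights.
n = 20
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

x0 = np.random.default_rng(0).normal(size=n)
avg = x0.mean()
print(abs(diffusive_consensus(W, x0, 200) - avg).max())
print(abs(shift_register_consensus(W, x0, 200, gamma=1.6) - avg).max())
```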
| null | null | null | null | true | null |
1292
| null |
Default
| null | null |
null |
{
"abstract": " In the last few years, we have seen the transformative impact of deep\nlearning in many applications, particularly in speech recognition and computer\nvision. Inspired by Google's Inception-ResNet deep convolutional neural network\n(CNN) for image classification, we have developed \"Chemception\", a deep CNN for\nthe prediction of chemical properties, using just the images of 2D drawings of\nmolecules. We develop Chemception without providing any additional explicit\nchemistry knowledge, such as basic concepts like periodicity, or advanced\nfeatures like molecular descriptors and fingerprints. We then show how\nChemception can serve as a general-purpose neural network architecture for\npredicting toxicity, activity, and solvation properties when trained on a\nmodest database of 600 to 40,000 compounds. When compared to multi-layer\nperceptron (MLP) deep neural networks trained with ECFP fingerprints,\nChemception slightly outperforms in activity and solvation prediction and\nslightly underperforms in toxicity prediction. Having matched the performance\nof expert-developed QSAR/QSPR deep learning models, our work demonstrates the\nplausibility of using deep neural networks to assist in computational chemistry\nresearch, where the feature engineering process is performed primarily by a\ndeep learning algorithm.\n",
"title": "Chemception: A Deep Neural Network with Minimal Chemistry Knowledge Matches the Performance of Expert-developed QSAR/QSPR Models"
}
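As a toy illustration of the pipeline described above (predicting properties directly from rendered 2D molecule images), here is a minimal Keras CNN. The real Chemception model is built from Inception-ResNet blocks; the image size, channel count, layer sizes, and random stand-in data below are assumptions for demonstration only.

```python
import numpy as np
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(80, 80, 4)),          # assumed image size and channel count
    layers.Conv2D(16, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.GlobalAveragePooling2D(),
    layers.Dense(1),                          # regression target, e.g. a solvation energy
])
model.compile(optimizer="adam", loss="mse")

# Random arrays standing in for rendered molecule images and measured properties.
X = np.random.rand(32, 80, 80, 4).astype("float32")
y = np.random.rand(32).astype("float32")
model.fit(X, y, epochs=1, verbose=0)
print(model.predict(X[:2], verbose=0).shape)  # (2, 1)
```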
| null | null | null | null | true | null |
1293
| null |
Default
| null | null |
null |
{
"abstract": " The aim of Galactic Archaeology is to recover the evolutionary history of the\nMilky Way from its present day kinematical and chemical state. Because stars\nmove away from their birth sites, the current dynamical information alone is\nnot sufficient for this task. The chemical composition of stellar atmospheres,\non the other hand, is largely preserved over the stellar lifetime and, together\nwith accurate ages, can be used to recover the birthplaces of stars currently\nfound at the same Galactic radius. In addition to the availability of large\nstellar samples with accurate 6D kinematics and chemical abundance\nmeasurements, this requires detailed modeling with both dynamical and chemical\nevolution taken into account. An important first step is to understand the\nvariety of dynamical processes that can take place in the Milky Way, including\nthe perturbative effects of both internal (bar and spiral structure) and\nexternal (infalling satellites) agents. We discuss here (1) how to constrain\nthe Galactic bar, spiral structure, and merging satellites by their effect on\nthe local and global disc phase-space, (2) the effect of multiple patterns on\nthe disc dynamics, and (3) the importance of radial migration and merger\nperturbations for the formation of the Galactic thick disc. Finally, we discuss\nthe construction of Milky Way chemo-dynamical models and relate to\nobservations.\n",
"title": "Constraining the Milky Way assembly history with Galactic Archaeology. Ludwig Biermann Award Lecture 2015"
}
| null | null | null | null | true | null |
1294
| null |
Default
| null | null |
null |
{
"abstract": " We propose a general framework for entropy-regularized average-reward\nreinforcement learning in Markov decision processes (MDPs). Our approach is\nbased on extending the linear-programming formulation of policy optimization in\nMDPs to accommodate convex regularization functions. Our key result is showing\nthat using the conditional entropy of the joint state-action distributions as\nregularization yields a dual optimization problem closely resembling the\nBellman optimality equations. This result enables us to formalize a number of\nstate-of-the-art entropy-regularized reinforcement learning algorithms as\napproximate variants of Mirror Descent or Dual Averaging, and thus to argue\nabout the convergence properties of these methods. In particular, we show that\nthe exact version of the TRPO algorithm of Schulman et al. (2015) actually\nconverges to the optimal policy, while the entropy-regularized policy gradient\nmethods of Mnih et al. (2016) may fail to converge to a fixed point. Finally,\nwe illustrate empirically the effects of using various regularization\ntechniques on learning performance in a simple reinforcement learning setup.\n",
"title": "A unified view of entropy-regularized Markov decision processes"
}
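A small numerical companion to the abstract: entropy regularization replaces the hard max in the Bellman backup with a log-sum-exp ("soft maximum") and yields a Boltzmann policy. The sketch uses the familiar discounted soft value iteration on a toy MDP, which is a close relative of, but not identical to, the average-reward LP formulation analysed in the paper; the toy transitions and rewards are made up.

```python
import numpy as np

def soft_value_iteration(P, R, gamma, tau, iters=500):
    """Entropy-regularized ('soft') value iteration on a finite MDP.

    P: (A, S, S) transition tensor, R: (S, A) rewards, tau: entropy temperature.
    Illustrates how entropy regularization smooths the max into a log-sum-exp."""
    S, A = R.shape
    V = np.zeros(S)
    for _ in range(iters):
        Q = R + gamma * np.einsum("ast,t->sa", P, V)
        V = tau * np.log(np.exp(Q / tau).sum(axis=1))   # soft maximum over actions
    policy = np.exp((Q - V[:, None]) / tau)             # Boltzmann policy
    return V, policy / policy.sum(axis=1, keepdims=True)

# Two-state, two-action toy MDP.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],    # action 0
              [[0.1, 0.9], [0.7, 0.3]]])   # action 1
R = np.array([[1.0, 0.0],
              [0.5, 2.0]])
V, pi = soft_value_iteration(P, R, gamma=0.9, tau=0.5)
print(V, pi, sep="\n")
```

As tau shrinks, the log-sum-exp approaches the ordinary max and the policy approaches the greedy one, which is the smoothing effect the regularized formulation exploits.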
| null | null | null | null | true | null |
1295
| null |
Default
| null | null |
null |
{
"abstract": " We present some basic integer arithmetic quantum circuits, such as adders and\nmultipliers-accumulators of various forms, as well as diagonal operators, which\noperate on multilevel qudits. The integers to be processed are represented in\nan alternative basis after they have been Fourier transformed. Several\narithmetic circuits operating on Fourier transformed integers have appeared in\nthe literature for two level qubits. Here we extend these techniques on\nmultilevel qudits, as they may offer some advantages relative to qubits\nimplementations. The arithmetic circuits presented can be used as basic\nbuilding blocks for higher level algorithms such as quantum phase estimation,\nquantum simulation, quantum optimization etc., but they can also be used in the\nimplementation of a quantum fractional Fourier transform as it is shown in a\ncompanion work presented separately.\n",
"title": "Arithmetic Circuits for Multilevel Qudits Based on Quantum Fourier Transform"
}
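The core trick behind Fourier-basis arithmetic can be simulated classically in a few lines: transform the register, apply diagonal phase rotations that encode the addend, and transform back. The NumPy sketch below does this for a 9-dimensional register (two qutrits), using only the total dimension; it illustrates the principle rather than the paper's gate-level qudit circuits.

```python
import numpy as np

def qft_matrix(N):
    """Discrete Fourier transform on an N-dimensional register (e.g. n qudits of
    dimension d with N = d**n; only the total dimension matters for this demo)."""
    j, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    return np.exp(2j * np.pi * j * k / N) / np.sqrt(N)

def fourier_add_constant(state, b):
    """Draper-style adder: QFT, diagonal phase rotations encoding the constant b,
    inverse QFT.  Acting on a basis state |a> this yields |a + b mod N>."""
    N = state.shape[0]
    F = qft_matrix(N)
    phases = np.exp(2j * np.pi * b * np.arange(N) / N)   # diagonal operator
    return F.conj().T @ (phases * (F @ state))

# Register of 2 qutrits (d = 3, N = 9) holding |4>; add 7 -> |(4 + 7) mod 9> = |2>.
N = 9
state = np.zeros(N, dtype=complex)
state[4] = 1.0
out = fourier_add_constant(state, 7)
print(np.argmax(np.abs(out)))  # 2
```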
| null | null | null | null | true | null |
1296
| null |
Default
| null | null |
null |
{
"abstract": " We analyzed the longitudinal activity of nearly 7,000 editors at the\nmega-journal PLOS ONE over the 10-year period 2006-2015. Using the\narticle-editor associations, we develop editor-specific measures of power,\nactivity, article acceptance time, citation impact, and editorial renumeration\n(an analogue to self-citation). We observe remarkably high levels of power\ninequality among the PLOS ONE editors, with the top-10 editors responsible for\n3,366 articles -- corresponding to 2.4% of the 141,986 articles we analyzed.\nSuch high inequality levels suggest the presence of unintended incentives,\nwhich may reinforce unethical behavior in the form of decision-level biases at\nthe editorial level. Our results indicate that editors may become apathetic in\njudging the quality of articles and susceptible to modes of power-driven\nmisconduct. We used the longitudinal dimension of editor activity to develop\ntwo panel regression models which test and verify the presence of editor-level\nbias. In the first model we analyzed the citation impact of articles, and in\nthe second model we modeled the decision time between an article being\nsubmitted and ultimately accepted by the editor. We focused on two variables\nthat represent social factors that capture potential conflicts-of-interest: (i)\nwe accounted for the social ties between editors and authors by developing a\nmeasure of repeat authorship among an editor's article set, and (ii) we\naccounted for the rate of citations directed towards the editor's own\npublications in the reference list of each article he/she oversaw. Our results\nindicate that these two factors play a significant role in the editorial\ndecision process. Moreover, these two effects appear to increase with editor\nage, which is consistent with behavioral studies concerning the evolution of\nmisbehavior and response to temptation in power-driven environments.\n",
"title": "Quantifying the distribution of editorial power and manuscript decision bias at the mega-journal PLOS ONE"
}
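One way to quantify the "power inequality" the abstract describes is a Gini coefficient over per-editor article counts. The sketch below computes it on synthetic, heavy-tailed workloads; the Pareto parameters are invented and the resulting number bears no relation to the actual PLOS ONE data.

```python
import numpy as np

def gini(x):
    """Gini coefficient of a non-negative 1-D array (0 = perfect equality)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    cum = np.cumsum(x)
    return (n + 1 - 2 * (cum / cum[-1]).sum()) / n

# Hypothetical editor workloads with a heavy tail, loosely mimicking the skew
# described in the abstract (a few editors handling thousands of articles).
rng = np.random.default_rng(0)
articles_per_editor = rng.pareto(a=1.5, size=7000) * 10 + 1
print(round(gini(articles_per_editor), 3))
```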
| null | null |
[
"Computer Science",
"Physics"
] | null | true | null |
1297
| null |
Validated
| null | null |
null |
{
"abstract": " The beyond worst-case synthesis problem was introduced recently by Bruyère\net al. [BFRR14]: it aims at building system controllers that provide strict\nworst-case performance guarantees against an antagonistic environment while\nensuring higher expected performance against a stochastic model of the\nenvironment. Our work extends the framework of [BFRR14] and follow-up papers,\nwhich focused on quantitative objectives, by addressing the case of\n$\\omega$-regular conditions encoded as parity objectives, a natural way to\nrepresent functional requirements of systems.\nWe build strategies that satisfy a main parity objective on all plays, while\nensuring a secondary one with sufficient probability. This setting raises new\nchallenges in comparison to quantitative objectives, as one cannot easily mix\ndifferent strategies without endangering the functional properties of the\nsystem. We establish that, for all variants of this problem, deciding the\nexistence of a strategy lies in ${\\sf NP} \\cap {\\sf coNP}$, the same complexity\nclass as classical parity games. Hence, our framework provides additional\nmodeling power while staying in the same complexity class.\n[BFRR14] Véronique Bruyère, Emmanuel Filiot, Mickael Randour, and\nJean-François Raskin. Meet your expectations with guarantees: Beyond\nworst-case synthesis in quantitative games. In Ernst W. Mayr and Natacha\nPortier, editors, 31st International Symposium on Theoretical Aspects of\nComputer Science, STACS 2014, March 5-8, 2014, Lyon, France, volume 25 of\nLIPIcs, pages 199-213. Schloss Dagstuhl - Leibniz - Zentrum fuer Informatik,\n2014.\n",
"title": "Threshold Constraints with Guarantees for Parity Objectives in Markov Decision Processes"
}
| null | null | null | null | true | null |
1298
| null |
Default
| null | null |
null |
{
"abstract": " VAEs (Variational AutoEncoders) have proved to be powerful in the context of\ndensity modeling and have been used in a variety of contexts for creative\npurposes. In many settings, the data we model possesses continuous attributes\nthat we would like to take into account at generation time. We propose in this\npaper GLSR-VAE, a Geodesic Latent Space Regularization for the Variational\nAutoEncoder architecture and its generalizations which allows a fine control on\nthe embedding of the data into the latent space. When augmenting the VAE loss\nwith this regularization, changes in the learned latent space reflects changes\nof the attributes of the data. This deeper understanding of the VAE latent\nspace structure offers the possibility to modulate the attributes of the\ngenerated data in a continuous way. We demonstrate its efficiency on a\nmonophonic music generation task where we manage to generate variations of\ndiscrete sequences in an intended and playful way.\n",
"title": "GLSR-VAE: Geodesic Latent Space Regularization for Variational AutoEncoder Architectures"
}
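The abstract does not spell out the exact form of the geodesic regularization, so the sketch below only illustrates the general recipe: take the usual VAE loss (reconstruction plus KL divergence) and add a latent-space penalty that ties a continuous attribute to a latent direction. The specific penalty, tensor shapes, and attribute used here are assumptions, not the GLSR-VAE formulation.

```python
import torch
import torch.nn.functional as F

def vae_loss_with_latent_regularization(x, x_recon, mu, logvar, attr, reg_weight=1.0):
    """Standard VAE loss (reconstruction + KL) plus a latent-space regularizer.

    As a stand-in for the geodesic regularization, we penalize the mismatch
    between a continuous attribute `attr` of each example and the first latent
    coordinate, so that moving along that latent dimension changes the attribute."""
    recon = F.mse_loss(x_recon, x, reduction="mean")
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    attr_reg = F.mse_loss(mu[:, 0], attr, reduction="mean")
    return recon + kl + reg_weight * attr_reg

# Shapes only; encoder/decoder networks are omitted.
x = torch.rand(8, 32)
x_recon = torch.rand(8, 32)
mu, logvar = torch.zeros(8, 4), torch.zeros(8, 4)
attr = torch.rand(8)          # e.g. a per-example musical attribute such as note density
print(vae_loss_with_latent_regularization(x, x_recon, mu, logvar, attr))
```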
| null | null | null | null | true | null |
1299
| null |
Default
| null | null |
null |
{
"abstract": " Rotating radio transients (RRATs), loosely defined as objects that are\ndiscovered through only their single pulses, are sporadic pulsars that have a\nwide range of emission properties. For many of them, we must measure their\nperiods and determine timing solutions relying on the timing of their\nindividual pulses, while some of the less sporadic RRATs can be timed by using\nfolding techniques as we do for other pulsars. Here, based on Parkes and Green\nBank Telescope (GBT) observations, we introduce our results on eight RRATs\nincluding their timing-derived rotation parameters, positions, and dispersion\nmeasures (DMs), along with a comparison of the spin-down properties of RRATs\nand normal pulsars. Using data for 24 RRATs, we find that their period\nderivatives are generally larger than those of normal pulsars, independent of\nany intrinsic correlation with period, indicating that RRATs' highly sporadic\nemission may be associated with intrinsically larger magnetic fields. We carry\nout Lomb$-$Scargle tests to search for periodicities in RRATs' pulse detection\ntimes with long timescales. Periodicities are detected for all targets, with\nsignificant candidates of roughly 3.4 hr for PSR J1623$-$0841 and 0.7 hr for\nPSR J1839$-$0141. We also analyze their single-pulse amplitude distributions,\nfinding that log-normal distributions provide the best fits, as is the case for\nmost pulsars. However, several RRATs exhibit power-law tails, as seen for\npulsars emitting giant pulses. This, along with consideration of the selection\neffects against the detection of weak pulses, imply that RRAT pulses generally\nrepresent the tail of a normal intensity distribution.\n",
"title": "Timing Solution and Single-pulse Properties for Eight Rotating Radio Transients"
}
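Two of the analyses mentioned in the abstract are easy to prototype on synthetic data: a Lomb-Scargle periodicity search over pulse-detection times and a log-normal fit to single-pulse amplitudes. The sketch below uses astropy and scipy on made-up data (the 3.4 h modulation and the amplitude parameters are planted, not measured), so it only illustrates the methodology.

```python
import numpy as np
from astropy.timeseries import LombScargle
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic pulse-detection times with a planted ~3.4 h modulation, standing in
# for the sparse single-pulse arrival times discussed in the abstract.
t = np.sort(rng.uniform(0, 30 * 86400, size=400))            # seconds over ~30 days
rate_mod = 1 + 0.5 * np.sin(2 * np.pi * t / (3.4 * 3600))    # detection-rate proxy
frequency, power = LombScargle(t, rate_mod).autopower(maximum_frequency=1 / 600)
best_period_hr = 1 / frequency[np.argmax(power)] / 3600
print(f"best periodicity: {best_period_hr:.2f} h")

# Log-normal fit to synthetic single-pulse amplitudes (the distribution the
# abstract finds to fit most RRATs best).
amps = rng.lognormal(mean=0.0, sigma=0.6, size=1000)
shape, loc, scale = stats.lognorm.fit(amps, floc=0)
print(f"fitted log-normal sigma: {shape:.2f}")
```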
| null | null | null | null | true | null |
1300
| null |
Default
| null | null |