finite element method is one of the popular numerical methods for solving the partial differential equations numerically on digital computers .the standard finite element method is well - suited to solve elliptic partial differential equations efficiently . but this method produces oscillations for hyperbolic partial differential equations ; for example , governing equations of convection dominated flows require additional stabilization for the standard finite element based discretization . for such flows , many stabilized finite element methods are available in the literature like streamline - upwind petrov galerkin ( supg ) , discontinuous - galerkin method , taylor galerkin method , galerkin least - squares method etc .the detailed discussion about these methods are given in . among them , supg method is one of the popular stabilized finite element methods used to solve high speed compressible flows governed by euler equations .this method introduces diffusion along the streamline direction of the flow which makes it stable .apart from diffusion requirement along the streamline , supg method needs additional diffusion across the high gradient regions especially for multidimensional case .this diffusion can be controlled by using a shock capturing parameter which senses the shock region and adds the diffusion appropriately .many nonlinear discontinuity capturing terms are available in the literature ) . in the finite volume methods , kinetic schemes ( also known as boltzmann scheme )are interesting alternatives for the popular riemann solvers .development of these schemes are based on the fact that one can recover the euler equations by applying a suitable moment method strategy to the boltzmann equation .the boltzmann equation is given by where and are velocity distribution function and molecular velocity respectively .the right hand side is the collision term and left hand side consists of an unsteady term and a convection term .the well - known bgk model simplifies the collision term and converts the otherwise integro - differential equation to a partial differential equation with a relaxation source term . using an operator splitting strategy by which the solution of the boltzmann equation is split into a convection step and a collision step and further employing an instantaneous relaxation to equilibrium in the collision step leads to a simplification which is often used in boltzmann schemes .the moments of the resulting boltzmann equation then leads to euler equations of gas dynamics , with the equilibrium distribution being a maxwellian .the euler equations can then be written in the following moment form . 
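The displayed equations in this passage did not survive extraction. As a hedged reconstruction from standard kinetic theory (the symbols f, f^M, v, I, t_R and Psi follow the usual conventions and are assumed here rather than taken from the original), the Boltzmann-BGK equation and the moment form being referred to read:

```latex
% Boltzmann equation with the BGK relaxation model for the collision term:
\frac{\partial f}{\partial t} + v\,\frac{\partial f}{\partial x} \;=\; \frac{f^{M} - f}{t_{R}}

% Taking \Psi-moments of the convection step, with instantaneous relaxation to the
% Maxwellian f^{M} in the collision step, recovers the Euler equations in moment form:
\frac{\partial}{\partial t}\big\langle \Psi , f^{M} \big\rangle
  \;+\; \frac{\partial}{\partial x}\big\langle \Psi , v\, f^{M} \big\rangle \;=\; 0 ,
\qquad
\big\langle \Psi , f \big\rangle \;\equiv\; \int_{0}^{\infty}\!\!\int_{-\infty}^{\infty} \Psi\, f \; dv \, dI .
```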
here is an appropriate moment and is the moment function vector .one can also obtain the burgers equation using same procedure stated above by defining an appropriate equilibrium distribution function .the advantage of this procedure is , instead of dealing with a nonlinear hyperbolic system of equations ( euler equations ) we are dealing with a linear scalar equation ( boltzmann equation without collision term ) .there are many kinetic schemes available in the literature like beam scheme of sanders & prendergast , the method of rietz , the equilibrium flux method of pullin , kinetic flux vector splitting ( kfvs ) method of deshpande , the compactly supported distribution based methods of kaniel and perthame , the peculiar velocity based upwind ( pvu ) method of raghurama rao & deshpande and the bgk scheme of prendergast & xu .these methods were developed in the framework of finite difference or finite volume methods .the application of finite element methods in the framework of kinetic schemes is of currently ongoing interest , with some of the works in this category being due to deshpande & pironneau and deshpande , yu & dai , khobalatte & leyland , tang & warnecke , liu ans xu , ren _ , gassner . in this paperan attempt has been made to take advantage of the strategy of kinetic schemes for developing an efficient supg scheme in the framework of boltzmann schemes . along with this novel scheme ( ksupg ) , we have also developed a simple shock capturing parameter which senses the jump inside the element for 2d euler equations . however , unlike the traditional supg method , the shock capturing parameter is needed only for 2d euler equations ( not in one dimension ) and not even for 2d burgers equation .constructing the stabilization parameter ( which is the intrinsic time scale ) in mutidimensional supg framework is not a trivial task .many methods are available in the literature .but , in the proposed ksupg scheme is defined for both scalar and vector equations simply from the linear scalar formulation .the efficiency of the new scheme is demonstrated by solving various test cases .this paper is arranged as follows . in section 2governing equations for high speed flows ( euler equations ) and scalar burgers equation are given .section 3 and 4 give the 1d and 2d explicit ksupg weak formulation for both burgers equation and euler equations .in section 5 , a simple shock capturing parameter is introduced .section 6 explains the spectral stability analysis for explicit ksupg scheme .an implicit ksupg formulation for 1d and 2d euler equations is given in section 7 followed by section 8 where various numerical test cases for both explicit and implicit formulation are solved . before ending section 8 , comparison for explicit and implicit ksupg schemesis made based on the number of iterations required to bring down the residue below the specified tolerance limit , the computational cost and the sparsity pattern of the coefficient matrix of global system of equations .the governing equations are for the inviscid compressible flows , given by euler equations as \ ] ] where ^t ] is the inviscid flux vector . are density , velocity components in and directions , total energy and pressure respectively and is a kronecker delta .total energy is given by note that is an inviscid flux jacobian matrix for the domain ( where is the spatial dimension ) with boundary and final time is given by .as the eigenvalues of are real and eigenvectors are linearly independent , the system of equations is hyperbolic . 
beyond being hyperbolic ,these equations are nonlinear and produce shock waves , expansion waves and contact discontinuities which need to be resolved in numerical simulations .we also consider a scalar hyperbolic conservation law as \ ] ] where is the conserved variable .the fluxes can be linear or nonlinear .one example is the inviscid burgers equation in the the fluxes are nonlinear and produce shock waves and expansion waves .the standard galerkin finite element approximation for molecular velocity distribution function is where the domain is divided into elements . defining the appropriate test and trial functions spaces as and where is the dirichlet boundary , the weak formulation is written as , find such that where .the global system of equations are obtained as where basis functions .it is important to note that , the test function is enriched with additional term which is multiplied only with the convection term .that gives a required diffusion term . in matrix form , where mass matrix , convection matrix , diffuion matrix and neumann boundary condition are given by all the integralsare evaluated with full gauss - quadrature integration .taking moments with the suitable moment function vector equation is the semi - discrete weak formulation .the 1d burgers equation is given by \ ] ] for the sake of convenience , we write the flux as with . in case of one dimensional burgers equation and maxwellian distribution function to recover the burgers equation as a moment from the boltzmann equation is given by where is constant to recover the linear convection equation and is a function of , _i.e. _ , to recover the inviscid burgers equation .for one dimensional problem .let us now evaluate the terms for the case of one dimensional burgers equation . where and .note that , since no energy equation is involved ( no pressure and temperature terms ) so , is just a constant value calculated from the standard maxwellian distribution function .moments of last expression lead to the neumann boundary condition in macroscopic variable . substituting these values in equation ,we get where is the neumann boundary condition in macroscopic variable . in this work finite difference approachis adopted for temporal discretization using method as \right ) \nonumber \\ & + \theta \left ( cc^nu^n + \frac{h}{2 } d \left [ c^{n+1}u^{n+1}\,\text{erf}(s ) + \frac{u^{n+1}}{\sqrt{\pi \beta } } e^{-s^2 } \right ] \right ) + u_n = 0\end{aligned}\ ] ] thus , gives an explicit method and gives an implicit method .semi - implicit methods can be obtained with .for example , gives crank - nicolson method .the 1d euler equations are given by where are the solution vector and the flux vector respectively .for recovering the 1d euler equations as moments of the boltzmann equation , the maxwellian distribution function is given by where is the molecular velocity , is the internal energy variable corresponding to the non - translational degrees of freedom and is the average internal energy corresponding to the non - translational degrees of freedom which is given by and is the ratio of specific heats . for 1d euler equations , moment function vector defined as with the maxwellian the other terms in weak formulation given by equation can be obtained as taking the first integral term similarly , second integral term can be evaluated as where here , , . 
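Stepping back briefly to the scalar case before the substitutions below: the half-range moments that generate the KSUPG convection and diffusion terms can be evaluated in a few lines. The sketch assumes the Burgers Maxwellian f^M(v) = u sqrt(beta/pi) exp(-beta (v - c)^2) with c = u/2 (so that the first moment reproduces the flux u^2/2) and s = c sqrt(beta); the function names and this normalisation are illustrative assumptions, not taken from the original.

```python
import numpy as np
from scipy.special import erf

def burgers_kinetic_moments(u, beta=1.0):
    """Moments of f^M(v) = u*sqrt(beta/pi)*exp(-beta*(v - c)^2) with c = u/2.

    Returns <v f^M> (the Burgers flux u^2/2) and <|v| f^M>, the combination
    u*(c*erf(s) + exp(-s^2)/sqrt(pi*beta)) that supplies the KSUPG diffusion."""
    u = np.asarray(u, dtype=float)
    c = 0.5 * u                      # nonlinear case: flux u^2/2 = c*u
    s = c * np.sqrt(beta)
    flux = u * c
    diff = u * (c * erf(s) + np.exp(-s**2) / np.sqrt(np.pi * beta))
    return flux, diff

print(burgers_kinetic_moments([0.0, 0.5, 1.0]))
```

The second moment is exactly the erf(s) and e^{-s^2}/sqrt(pi beta) combination appearing in the theta-method discretization above; the vector (Euler) case replaces these scalar moments by the corresponding moments of the Maxwellian introduced in the preceding paragraphs.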
substituting these values in equation and then simplifying we get substituting these values in we get temporal discretizationis done using method with as in the above discretized form the flux vector is written as where is the flux jacobian matrix given by \ ] ] the global nonlinear fully discretized equation of the form can be linearized by using picard iteration technique as then , the linearized system of equations is solved using bi - conjugate gradient stabilized method .the standard galerkin finite element approximation for molecular velocity distribution function is where the domain is divided into elements . the test and trial functions spaces as and where is the dirichlet boundary , the weak formulation is written as , find such that \right ) \ , d\omega_i = 0\end{aligned}\ ] ] where and .the global system of equations are obtained as \right ) \ , d\omega = 0\ ] ] or where basis functions .again , the enriched terms present in the test function are multiplied only with convective terms which gives diffusion terms in , directions and cross - diffusion terms in directions .+ in matrix form , where all integrals are evaluated using full gauss - quadrature integration .taking moments now lets evaluate these moments for 2d burgers equation and 2d euler equations .the 2d burgers equation is given by is written as where and can be functions of for obtaining nonlinearity or can be constants for keeping them as linear .maxwellian distribution function for recovering the 2d burgers equation as a moment of the boltzmann equation is given by here , and value of is fixed as unity .for 2d burgers equation one can evaluate the integrals as \\< \psi , \text{sign}(v_2)\ , v_2 f^m > & = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty } \text{sign}(v_2)\,v_2 f^m \ , dv_1 dv_2 \nonumber \\ & = \sqrt{\frac{\beta}{\pi } } u \left [ \frac{e^{-s_2 ^ 2}}{\pi } + c_2\text{erf}(s_2 ) \right ] \\< \psi , \text{sign}(v_2)\,v_1 f^m > & = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty } \text{sign}(v_2)\,v_1 f^m \ , dv_1 dv_2 \nonumber \\& = \sqrt{\frac{\beta}{\pi } } uc_1 \text{erf}(s_2 ) \\ < \psi ,\text{sign}(v_1)\,v_2 f^m > & = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty } \text{sign}(v_1)\,v_2 f^m \ , dv_1 dv_2 \nonumber \\ & = \sqrt{\frac{\beta}{\pi } } uc_2 \text{erf}(s_1 ) \end{aligned}\ ] ] the 2d euler equations are given by where are the solution vector and the flux vectors in and directions resptively . 
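The displayed system appears to have dropped out during extraction; in standard conservation form (conventional symbols assumed here, with E the specific total energy) the 2D Euler equations read:

```latex
\frac{\partial U}{\partial t} + \frac{\partial G_{1}}{\partial x} + \frac{\partial G_{2}}{\partial y} = 0 ,
\qquad
U = \begin{bmatrix} \rho \\ \rho u_{1} \\ \rho u_{2} \\ \rho E \end{bmatrix} ,
\quad
G_{1} = \begin{bmatrix} \rho u_{1} \\ p + \rho u_{1}^{2} \\ \rho u_{1} u_{2} \\ (\rho E + p)\, u_{1} \end{bmatrix} ,
\quad
G_{2} = \begin{bmatrix} \rho u_{2} \\ \rho u_{1} u_{2} \\ p + \rho u_{2}^{2} \\ (\rho E + p)\, u_{2} \end{bmatrix} .
```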
in case of 2d euler equations , the maxwellian distribution function is given as where and are molecular velocities in and directions and is defined as the vector is defined as and .integrals are evaluated as flux vector is further decompose by using homogenity property where \nonumber\ ] ] similarly , flux vector is further decompose by usign homogenity property where \nonumber\ ] ] +\frac{2u_1}{\beta } e^{-s_1 ^ 2 } + \frac{u_1 ^ 2}{\sqrt{\beta } } \text{erf}(s_1 ) \sqrt{\pi } \right ) \\ \end{array } \right \}\end{aligned}\ ] ] similarly , +\frac{2u_2}{\beta } e^{-s_2 ^ 2 } + \frac{u_2 ^ 2}{\sqrt{\beta } } \text{erf}(s_2 ) \sqrt{\pi } \right ) \end{array } \right \}\end{aligned}\ ] ] as usual temporal discretization is done by using method with for explicit ksupg scheme .the nonlinear system of equations is linearized using picard iteration technique and is solved by using bi - conjugate gradient stabilized method .in case of multidimensional ksupg method , diffusion along streamline direction is not sufficient to suppress the oscillations near high gradient region .hence additional diffusion terms with a shock capturing parameter is required which can sense these high gradient regions and adds additional diffusion .there are many shock capturing parameters available in the literature . in this work we present a simple gradient based shock capturing parameter as follows .we define a simple element - wise gradient based shock capturing parameter which introduces diffusion along high gradient directions .figure [ fig:4nq ] ( a ) shows a typical four node quadrilateral element . as shown in figure , the maximum change in ( where could be density , temperature or even pressure ; in present work , density is used for all numerical test cases because density is the only primitive variable which jumps across all the three waves : shocks , expansion waves and contact disctonintuies ) occurs across node 1 and 3 .the element based shock capturing parameter is then defined for node 1 and 3 as [ h ! ] where subscripts 1 and 3 represent node numbers . for nodes 2 and 4 ,it is defined as here . for most of the test cases works fine . at element level matrix form, the shock capturing parameter is given by \ ] ] the upper and lower bound on the value is given by it is important to note that , the addition of extra shock capturing term in the weak formulation makes the formulation inconsistent with the original equation .thus , we define such that as , should disappear .this condition is achieved by including in the numerator , which vanishes as we refine the mesh .similarly , one can define such a delta parameter for triangular elements shown in figure [ fig:4nq ] ( b ) .the additional diffusion term along with the shock capturing parameter is then given by where is the global matrix obtained by assembly and are the diffusion matrices in direction respectively .these diffusion matrices are defined in 2d euler ksupg formulation .stability analysis of a numerical scheme gives the acceptable value of time step within which the scheme is stable . in other words , error does not grow with time .unlike von neumann stability analysis , spectral stability analysis includes the boundary points too . 
in the following analysis, we consider the 2d weak formulation of a linear equation .the global system of equation can be written as where is amplification matrix and , are the numerical solution at time level and respectively .let be the exact solution , then the error is given by substituting this in equation one can obtain rearranging , we get for stable solution which gives .we use the following relation is the spectral radius of amplification matrix . thus , error remains bounded when the maximum eigenvalue of amplification matrix is less than or equal to unity . to find matrix , we use explicit weak formulation for 2d scalar linear problem which is given by u^n \right .\nonumber \\ & \left .+ d_{xy } ( c_2 \text{erf}(s_1)+ c_1 \text{erf}(s_2 ) ) u^n + d_y \left [ \frac{e^{-s_2 ^ 2}}{\pi } + c_2\text{erf}(s_2 ) \right ] u^n \right\ } = 0 \nonumber\end{aligned}\ ] ] here and are constants . rearranging above equation, we get \right .\right.\nonumber \\ & \left .\left . + d_{xy } ( c_2 \text{erf}(s_1)+ c_1 \text{erf}(s_2 ) ) + d_y \left [ \frac{e^{-s_2 ^ 2}}{\pi } + c_2\text{erf}(s_2 ) \right ] \right\ } \right ) u^n \nonumber\end{aligned}\ ] ] which gives matrix as \right .\nonumber \\ & \left . \left .\left.+ d_{xy } ( c_2 \text{erf}(s_1)+ c_1 \text{erf}(s_2 ) ) + d_y \left [ \frac{e^{-s_2 ^ 2}}{\pi } + c_2\text{erf}(s_2 ) \right ] \right\ } \right ) \right\ } / m \nonumber\end{aligned}\ ] ] using the stability condition given by equation , we get where + d_{xy } ( c_2 \text{erf}(s_1)+ c_1 \text{erf}(s_2 ) ) + d_y \left [ \frac{e^{-s_2 ^ 2}}{\pi } + c_2\text{erf}(s_2 ) \right]\ ] ] the maximum eigenvalue ( which is the absolute maximum of all eigenvalues ) is computed numerically using rayleigh quotient over a grid for 2d linear convection equation with unity wavespeeds in both directions .the initial condition is a cosine pulse convecting diagonally in a square domain \times [ 0,1] ] , the left boundary is the inlet with mach 2 at an angle of to the bottom boundary .bottom boundary is the wall from where oblique shock wave is generated which makes an angle of with the wall .the dirichlet boundary conditions on left and top boundaries are . at the wall ,no - slip condition is applied , _i.e. _ , where is a velocity vector in two dimensions and at right boundary where the flow is supersonic all primitive variables , , and are extrapolated with first order approximation .[ h ] q4 elements ., title="fig : " ] figure [ fig : osi312 ] shows the pressure contours using q4 mesh .in this test case the domain is rectangular \times [ 0,\,\,1]$ ] .the boundary conditions are 1 . inflow ( left boundary ) : 2 . post shock condition ( top boundary ) : 3 .bottom boundary is a solid wall where slip boundary condition is applied , _i.e. _ , .4 . at right boundary where the flow is supersonic all primitive variables , , and are extrapolated with first order approximation .pressure plots for , and quadrilateral mesh are given in figure [ fig : srq4 ] .the comparison of residue plots are given in figure [ fig : rs123q ] .[ h ! ] , and , title="fig : " ] for triangular unstructured mesh ( number of nodes : 2437 and number of triangles : 4680 ) the pressure contours are given in figure [ fig : unsq34 ] and residue plot is shown in figure [ fig : unsqres ] . [ h ! ] [ h ! 
] the incident and reflected shocks are captured quite accurately at correct positions .two supersonic test cases with inflow mach numbers 2 and 3 are tested on a half cylinder .the domain is half circular , the left outer circle is inflow boundary .small circle inside the domain is a cylinder wall and the straight edges on right sides are supersonic outflow boundaries .[ h ! ] pressue plots ( see figure [ fig : halfcyl ] ) show that the bow shock in front of the half - cylinder is captured accurately at the right position in each case which are compared with existing results .all previously solved 1d test cases are again solved for implicit ksupg method .the number of node points are 100 and cfl number is 0.6 .final time is t = 0.01 .figure [ fig : kge1ii ] shows the density , velocity , pressure and mach number plots .[ h ! ] the number of node points are 100 and cfl number is 0.6 .final time is t=0.13 .figure [ fig : kslaii ] shows the density , velocity , pressure and internal energy plots .[ h ! ] the number of node points are 200 and cfl number is 0.6 .final time is t=0.15 .figure [ fig : ksrrii ] shows the density , pressure and velocity plots .[ h ! ] in case of implicit ksupg scheme , 2d euler test cases are solved using cfl = 1 . for the comparison of explicit and implicit ksupg schemes shock reflection test case is solved using different grids .figure [ fig : srimk ] shows the pressure contour plot on q4 grid and the residue vs number of iterations . [ h ! ] mesh and the residue plot.,title="fig : " ] iteration speed - up ratio is defined as the ratio of number of iterations required for the residue to drop below a predefined tolerance value for an explicit method to that of an implicit method . similarly , one can define the computational speed - up ratio which is the ratio of total computational time required for explicit method to that of implicit method .following tables shows comparison of computational cost and number of iterations taken for explicit and implicit ksupg schemes for oblique shock reflection test case . [ cols="^,^,^,^ " , ]in this paper we presented a novel explicit as well as implicit kinetic theory based streamline upwind petrov galerkin scheme ( ksupg scheme ) in finite element framework for both scalar case ( inviscid burgers equation in 1d and 2d ) and vector case ( 1d and 2d euler equations of gas dynamics ) .the proposed numerical scheme is simple and easy to implement .the important advantage in using a kinetic scheme in finite element framework is that , instead of dealing with nonlinear hyperbolic conservation laws , one needs to deal with a simple linear convection equation . 
In comparison with the standard SUPG scheme, the advantage of the proposed scheme is that it does not contain any complicated expression for the stabilization parameter (the intrinsic time scale), especially for vector equations. Also, for the multidimensional Burgers equation, the standard SUPG scheme requires an additional diffusion term (shock capturing parameter), which is not required in the proposed scheme. The accuracy and robustness of the scheme are demonstrated by solving various test cases for the Burgers equation and the Euler equations. A spectral stability analysis is carried out for the 2D linear equation, which gives an implicit expression for the stable time step. Finally, a comparison between the explicit and implicit versions of the KSUPG scheme is made with respect to the number of iterations, computational cost, sparsity pattern and the condition number of the global system of equations.

References: Tezduyar TE, Hughes TJR. Finite element formulation of convection dominated flows with particular emphasis on the compressible Euler equations. In: Proceedings of the AIAA 21st Aerospace Sciences Meeting, AIAA Paper 83-0125, Reno, Nevada (1982). Le Beau GJ, Tezduyar TE. Finite element computation of compressible flows with the SUPG formulation. In: Advances in Finite Element Analysis in Fluid Dynamics, FED-Vol. 123, ASME, New York, pp. 21-27 (1991). X. Yu and Q. Dai. RKDG finite element schemes combined with a gas-kinetic method in one-dimensional compressible Euler equations. In: Current Trends in Scientific Computing, Z. Chen, R. Glowinski and K. Li (eds.), American Mathematical Society, pp. 355-364, 2002. Yee, R.F. Warming and A. Harten. A high-resolution numerical technique for inviscid gas-dynamics problems with weak solutions. In: Proceedings of the Eighth International Conference on Numerical Methods in Fluid Dynamics, Lecture Notes in Physics, Vol. 170, Springer, New York/Berlin, pp. 546-552, 1982.
Abstract: A novel explicit and implicit kinetic streamlined-upwind Petrov Galerkin (KSUPG) scheme is presented for hyperbolic equations such as the Burgers equation and the compressible Euler equations. The proposed scheme performs better than the original SUPG stabilized method in multiple dimensions. To demonstrate the numerical accuracy of the scheme, various numerical experiments are carried out for the 1D and 2D Burgers equations as well as for the 1D and 2D Euler equations using Q4 and T3 elements. Furthermore, a spectral stability analysis is carried out for the explicit 2D formulation. Finally, a comparison is made between the explicit and implicit versions of the KSUPG scheme. Keywords: kinetic streamlined-upwind Petrov Galerkin scheme, hyperbolic partial differential equations, Burgers equation, Euler equations.
many fundamental issues of a first year laboratory course are at stake when we use the simple pendulum experiment to measure _ g_. undergraduate texts explains the _ theory _ of an idealized scheme with a point mass attached to an ideal thread oscillating with very small amplitudes to justify the mathematical approximation used in the deduction : see fig .[ pendoli1 ] scheme ( a ) . in _ practice _ , a point mass does not exist , neither does an ideal thread , nor do infinitely small oscillations .hence the problem of the passage from an idealized physical system to a real one must be discussed with students and the limits of a mathematical approximation must be assessed .therefore in the laboratory we start the trial experiment by carefully engineering a systematic error in the first rough measures of _ g _ to emphasize the important difference between theory , which considers a point mass , and the laboratory experiment , which uses an extended body .after that we compare the parameters of two distinct normal distributions by asking students to measure fifty times , say , five and subsequently ten complete oscillations .the data form two gaussian distributions with compatible means but different dispersions .the analysis of these data require : ( i ) identification of a gaussian distribution , ( ii ) comparison of gaussian mean using the central limit theorem , and ( iii ) comparison of standard deviations . in the final part of the experiment students observe damping oscillations and check their mathematical model . herethey have to use their judgment to measure the time constant with two different methods and check the compatibility of the results obtained . in the wider outline of a first laboratory course based on oscillations and waves , this experimentis followed by a second set of two four - hour laboratory sessions with a mass - spring experiment . in this new context students practice : ( a ) the passage from idealized theory to real laboratory conditions , in this case from a theoretical zero - mass spring to a spring with a relevant mass ; ( b ) a straight - line fit ; ( c ) observing damping and determining the more demanding resonance curve and its fwhm .introductory lectures present an explanation of theory and an outline of the experiment . through discussionswe define the variables to be measured , present suitable techniques and procedures , indicate some common errors , and give general advice .we ask students to write a rough plan of the experiment in advance . in the laboratory studentsassemble the apparatus and follow their own plan .assistance is given in terms of discussions and help to slower groups so that most of them proceed together through the various phases of the experiment .we interrupt the work at critical points to ask students to record on the board a summary of data from their lab notebooks for general statistical treatment and plenary discussion .in the introductory lecture we start by discussing how to measure _g _ in our laboratory .we first ask which is the most convenient physical law for that purpose , from the point of view of ease and precision .we are at the beginning of a physics course , so we compare the advantages and disadvantages of the law of free fall and the pendulum . in the end of the discussionit is agreed that the law describing the period of a simple pendulum is the most convenient . 
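Anticipating the analysis of these two timing runs, the following is a minimal sketch of the statistics involved: the period and its standard error from each run, the compatibility of the two means, the ratio of the two dispersions, and g = 4*pi^2*L/T^2 with a simple propagated uncertainty. The thread length and the synthetic timing data are placeholders to be replaced by the students' own values.

```python
import numpy as np

def analyse_periods(t5, t10, L=0.500, dL=0.001):
    """t5: fifty timings of 5 full oscillations [s]; t10: fifty timings of 10 [s].
    Compares the two period distributions and returns g with a rough uncertainty."""
    T5, T10 = np.asarray(t5) / 5.0, np.asarray(t10) / 10.0
    for name, T in (("5 swings ", T5), ("10 swings", T10)):
        sem = T.std(ddof=1) / np.sqrt(T.size)          # standard error of the mean
        print(f"{name}: T = {T.mean():.4f} +/- {sem:.4f} s, sigma = {T.std(ddof=1):.4f} s")
    # compatibility of the two means (central limit theorem): distance in combined SEMs
    sem5  = T5.std(ddof=1) / np.sqrt(T5.size)
    sem10 = T10.std(ddof=1) / np.sqrt(T10.size)
    print(f"|mean difference| = {abs(T5.mean() - T10.mean()) / np.hypot(sem5, sem10):.2f} combined SEM")
    # the stopwatch error is fixed per timing, so sigma(T) should roughly halve from 5 to 10 swings
    print(f"sigma ratio (5 vs 10 swings) = {T5.std(ddof=1) / T10.std(ddof=1):.2f} (expected ~2)")
    T, dT = T10.mean(), sem10
    g = 4.0 * np.pi**2 * L / T**2
    return g, g * (dL / L + 2.0 * dT / T)

rng = np.random.default_rng(0)                          # synthetic data, 0.15 s timing scatter
t5  = 5 * 1.419 + rng.normal(0.0, 0.15, 50)
t10 = 10 * 1.419 + rng.normal(0.0, 0.15, 50)
print("g = %.2f +/- %.2f m/s^2" % analyse_periods(t5, t10))
```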
+ we guide the next decision to prepare the ground for the comparison between two different normal distributions .we ask students to make two sets of measurements of the period .they measure fifty times the time required for five complete oscillations and then repeat the procedure for ten periods . herethey practice comparing distributions with compatible means but different dispersions . since the thread length and the period _t _ can be measured with a precision of a few parts per thousand , it will be possible to detect the effects of very small systematic errors caused by thread rigidity + and the choice of initial conditions .the first practical decision regards the length of thread to be used . after considering many possible values ,we decide to choose the same length for all groups for statistical reasons , say , 500 _ mm _ , on the grounds of not only minimizing relative errors but also having periods of oscillation that are not too long .students are then asked to make a trial experiment to test the apparatus by measuring period _ t _ three times using each time five complete oscillations . the corresponding calculated value for _ g _ has to fall within a few percentage points of its expected value .in this initial phase they check that the apparatus is working properly , practice observing and timing techniques , and on completion copy a summary of data and results on the board , such as shown in the following table , for a brief plenary discussion . [cols="^,^,^,^,^,^,^,^,^",options="header " , ] the curve is transformed into the straight line we discuss with students that it is advisable to organize an experiment with two different methods of measuring a quantity ( in order to study the compatibility of the results , and to check for possible systematic errors ) , and this is especially advisable in our case , in which a value is strongly dependent on the observer s judgment .it was also noticed that the five points show a slowly decreasing local slope indicating that the damping time increases as the oscillation amplitude decreases .the slope of this line ( the inverse of the damping time ) is compared with the value obtained by direct measurement .this experimental observation indicates that the friction ( responsible for the damping ) increases with velocity .the unexpected success of this session ( good compatibility between the data obtained with different experimental paths ) rewards students after their hard work .data gathered over the years clearly shows higher losses with the relatively rigid fishing lines ( confirming the shift of the pole ) , with bobs oscillating outside the original oscillation plane or with larger oscillation amplitudes .given their importance in physics , we have chosen oscillations and waves as the main theme of our laboratory course .after the pendulum , students tackle the mass - spring experiment to revise previous data treatment techniques and to practice the new ones presented in the lectures .the layout scheme is shown in fig .[ schema - massamolla ] we start with the ideal ( undamped ) motion equation this equation assumes a zero - mass spring , which of course can not exist in a laboratory .students measure the spring constant _ k _ using both hooke s law and angular speed measurements according to comparison of both graphs ( [ hook+omega ] ) clearly shows a discrepancy between the two values of _ k_. 
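The discrepancy just mentioned can be reproduced with a short calculation. The sketch below obtains k once from a straight-line Hooke's-law fit and once from the measured period via the ideal zero-mass-spring relation omega^2 = k/m; all numbers are made-up placeholders, and the final line only illustrates the common textbook correction (adding roughly one third of the spring's mass), which is where the discussion in the next paragraphs is headed.

```python
import numpy as np

g = 9.81             # m/s^2
m_bob = 0.200        # hanging mass [kg]        (placeholder values throughout)
m_spring = 0.060     # mass of the spring [kg]

# (a) static estimate: Hooke's law F = k x, straight-line fit of load against extension
masses     = np.array([0.05, 0.10, 0.15, 0.20, 0.25])        # kg
extensions = np.array([0.024, 0.049, 0.073, 0.098, 0.122])   # m
k_static, _ = np.polyfit(extensions, masses * g, 1)          # slope of the fit = k [N/m]

# (b) dynamic estimate from the ideal model omega^2 = k/m, i.e. k = m (2*pi/T)^2
T_measured = 0.66                                            # mean oscillation period [s]
k_dynamic = m_bob * (2.0 * np.pi / T_measured) ** 2

print(f"k from Hooke fit    : {k_static:5.1f} N/m")
print(f"k from the period   : {k_dynamic:5.1f} N/m   (systematically low)")

# common textbook correction: the spring adds roughly one third of its own mass
k_corrected = (m_bob + m_spring / 3.0) * (2.0 * np.pi / T_measured) ** 2
print(f"k with m + m_s/3    : {k_corrected:5.1f} N/m")
```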
to get round of this discrepancy students have to analyze the motion equations in detail .they conclude that theory does not take into consideration the mass of the spring .+ then , as in the previous experiment , the damping time is measured .this result is cross - checked with the full - width - half - maximum ( fwhm ) of the resonance curve .the equation describing the complete motion is treated theoretically in conjunction with the motion of forced oscillations .+ the resonance curve measurements require a substantial upgrade of the experimental apparatus to cause and to detect the oscillations .students practice on line data acquisition and gather values for the resonance curve .the last part of this rather complex and demanding experiment is the measurement of the phase curve as a function of the frequency of the applied force .all measurements and their statistical treatment ( gaussian distribution , compatibility of mean values and of standard deviations , curve fitting and confidence level ) come out as expected from theory . the technique involved in the experiments is very simple but not trivial .the content in terms of physics is dense because we have to treat the relation between theory and experiment , the weight of approximations , the kind of thread used , and finally the losses dependent on thread rigidity , bob speed and oscillation path .the content in terms of data processing is equally dense because we have to deal with arithmetic and weighted means , comparison of standard deviations , gaussian distributions of errors , internal compatibility between data and expected results , cross - checking of measurements subject to judgment , test of an exponential decay by means of a straight line fit .+ measuring the gravitational acceleration with the pendulum can be rewarding when the expected result of an important physical quantity is obtained after hours of hard work . mere solitary data collection can discourage first year students , therefore plenary discussions and subsequent achievement of compatible results in difficult experimental assets can help to reward students , some of which are facing practicals for the first time . following the pendulum experiment with a free and then forced oscillation of a mass - spring experiment gives a practical grounding of shm theory with a single degree of freedom . by choosing oscillations and waves as a theme for our first year lab course , we provide students with a coherent context in which to practice basic techniques , use of new instruments , and experimental data treatment .the general motion of a simple pendulum is a composed of the center of mass oscillation and of an oscillatory rotation of the disk around its center of mass .the system energy has two kinetic components thus the total kinetic energy is in the kinetic energy formula the length of the pendulum with extended mass is the distance from the pole to the center of mass .since the two angular velocities are equal 50 many topics of interest about the pendulum can be found in : michael e. matthews , colin f. gauld , arthur stinner ( eds . ) , _ the pendulum .scientific , historical , philosophical and educational perspectives _ ( springer , dordrecht , 2005 ) .cesar medina , sandra velazco , julia salinas , experimental control of simple pendulum model , " science & education , * 13 * ( 7 ) , 631 - 640 ( 2004 ) .robert a. nelson , m.g .olsson , the pendulum .rich physics from a simple system , " am .* 54 * ( 2 ) , 112 - 121 ( 1986 ) .james l. 
anderson , approximations in physics and the simple pendulum , " am. j. phys .* 27 * ( 3 ) , 188 - 189 ( 1959 ) . for a thorough analysis of this approximated formula : l.h .cadwell , e.r .boyko , linearization of the simple pendulum " am . j. phys .* 59 * ( 11 ) , 979 - 981 ( 1991 ) .a thorough analysis of the simple pendulum with an extended spherical bob is in : j.v .hughes , possible motions of a sphere suspended on a string ( the simple pendulum ) , " am . j. phys .* 21 * ( 1 ) , 47 - 50 ( 1953 ) .armstrong , effect of the mass of the cord on the period of a simple pendulum , " * 44 * ( 6 ) , 564 - 566 ( 1976 ) . s.t .epstein , & m.g .olsson , comment on `` effect of the mass of the cord on the period of a simple pendulum '' , " am . j. phys .* 45 * ( 7 ) , 671 - 672 ( 1975 ) .we considered linear damping . for non - lineardamping , see : randall d. peters & tim pritchett , the not - so - simple harmonic oscillator , " am . j. phys .* 65 * ( 11 ) , 1067 - 1073 ( 1997 ) .p. squire , pendulum damping , " am . j. phys .* 54 * ( 11 ) , 984 - 991 ( 1986 ) .
Abstract: The main task of an introductory laboratory course is to foster students' manual, conceptual and statistical ability to investigate physical phenomena. Needing very simple apparatus, pendulum experiments are an ideal starting point in our first-year laboratory course because they are rich in both physical content and data processing. These experiments allow many variations: pendulum bobs can have different shapes, threads can be tied to a hook at their edge or pass through their centre of mass, and they can be hung as simple or bifilar pendulums. In all these variations they emphasize the difference between theory and practice in the passage from an idealized scheme to a real experimental setup, which becomes evident, for example, when the pendulum bob cannot be considered an idealized point mass. Moreover, they require careful observation of details such as the type of thread used and its rigidity, or the initial slant of the bob, which lead to different behaviours. Their mathematical models draw on a wide range of fundamental topics in experimental data analysis: arithmetic and weighted means, standard deviation, application of the central limit theorem, data distributions, and the significant difference between theory and practice. Setting the mass-spring experiment immediately after the pendulum highlights the question of resonance, revisits the gap between theory and practice in another context, and provides another occasion to practice further techniques in data analysis.
the semantic web and its linked data movement have brought us many great , interlinked and freely available machine readable rdf datasets , often summarized in the linking open data cloud .being extracted from wikipedia and spanning many different domains , dbpedia forms one of the most central and best interlinked of these datasets . nevertheless , even with all this easily available data , using it is still very challenging : for a new question , one needs to know about the available datasets , which ones are best suited to answer the question , know about the way knowledge is modelled inside them and which vocabularies are used , before even attempting to formulate a suitable sparql query to return the desired information .the noise of real world datasets adds even more complexity to this . in this paperwe present a graph pattern learning algorithm that can help to identify sparql queries for a relation between node pairs in a given knowledge graph is a set of rdf triples , typically accessible via a given sparql endpoint . ] , where is a source node and a target node . can for example be a simple relation such as `` given a capital return its country '' or a complex one such as `` given a stimulus return a response that a human would associate '' . to learn queries for from , without any prior knowledge about the modelling of in , we allow users to compile a ground truth set of example source - target - pairs as input for our algorithm .for example , for relation between capital cities and their countries , the user could generate a ground truth list ( , ) , ( , ) , ( , ) .given and the dbpedia sparql endpoint , our graph pattern learner then learns a set of graph patterns such as : : \ { } : \ { } in this paper , a graph pattern is an instance of the infinite set of sparql basic graph patterns .each has a corresponding sparql ask and select query .we denote their execution against as and .the graph patterns can contain sparql variables , out of which we reserve and as special ones .a mapping can be used to bind variables in before execution .the resulting learned patterns can either be inspected or be used to predict targets by selecting all bindings for given a source node : for example , given the source node the pattern can be used to predict . the remainder of this paper is structured as follows : we present related work in section [ sec : relwork ] , before describing our graph pattern learner in detail in section [ sec : gp_learner ] . 
in sections[ sec : visualisation ] and [ sec : prediction ] we will then briefly describe visualisation and prediction techniques before evaluating our approach in section [ sec : eval ] .to the best of our knowledge , our algorithm is the first of its kind .it is unique in that it can learn a set of sparql graph patterns for a given input list of source - target - pairs directly from a given sparql endpoint .additionally , it can cope with scenarios in which there is not a single pattern that covers all source - target - pairs .many other algorithms exist , which learn vector space representations from knowledge graphs .an excellent overview of such algorithms can be found in .we are however not aware that any of these algorithms have the ability of returning a list of sparql graph patterns that cover an input list of source - target - pairs .there are other approaches that help formulating sparql queries , mostly in an interactive fashion such as relfinder or autosparql .their focus however lies on finding relationships between a short list of entities ( not source - target - pairs ) or interactively formulating sparql queries for a list of entities of a single kind. they can not deal with entities of different kinds .sparql pattern learning , there is an approach for pattern based feature construction that focuses on learning sparql patterns to use them as features for binary classification of entities .it can answer questions such as : does an entity belong to a predefined class ?in contrast to that , our approach focuses on learning patterns between a list of source - target - pairs for entity prediction : given a source entity predict target entities . to simulate target entity prediction for a single given source with binary classification, one would need to train classifiers , one for potential target entities . in the context of mining patterns for human associations and linked data , we previously focused on collecting datasets of semantic associations directly from humans , ranking existing facts according to association strengths and mapping the edinburgh associative thesaurus to dbpedia .none of these previous works directly focused on identifying existing patterns for human associations in existing datasets .the outline of our graph pattern learner is similar to the generic outline of evolutionary algorithms : it consists of individuals ( in our case sparql graph patterns ) , which are evaluated to calculate their fitness. the fitter an individual is , the higher its chance to survive and reach the next generation .the individuals of a generation are also referred to as population . in each generationthere is a chance to mate and mutate for each of the individuals .a population can contain the same individual ( graph pattern ) several times , causing fitter individuals to have a higher chance to mate and mutate over several generations . as mentioned in the introduction ,the training input of our algorithm is a list of ground truth source - target - pairs .due to size limitations , we will focus on the most important aspects of our algorithm in the following . for further detailplease see our website where you can find the source - code , visualisation and other complementary material . 
before describing the realisation of the components of our evolutionary learner , we want to introduce our concept of coverage .we say that a graph pattern covers , models or fulfils a source - target - pair if the evaluation of its sparql ask query returns true : our algorithm is not limited to learning a single best pattern for a list of ground truth pairs , but it can learn multiple patterns which together cover the list .we realise this by invoking our evolutionary algorithm in several _ runs_. in each run a full evolutionary algorithm is executed ( with all its generations ) . after each runthe resulting patterns are added to a global list of results . in the following runs ,all ground truth pairs which are already covered by the patterns from previous runs become less rewarding for a newly learnt pattern to cover . overits runs our algorithm will thereby re - focus on the left - overs , which allows us to maximise the coverage of all ground truth pairs with good graph patterns . in order to evaluate the fitness of a pattern , we define the following dimensions to capture what makes a pattern `` good '' . *high _ recall _ : + a good pattern fulfils as many of the given ground truth pairs as possible : * high _ precision _ : + a good pattern should also be precise . for each individual ground truthpair we can define the precision as : the target should be in the returned result list and if possible nothing else . in other words , we are not searching for patterns that return thousands of potentially wrong target for a given source . over all ground truth pairs, we can define the average precision for via the inverse of the average result lengths : * high _ gain _ : + a pattern discovered in run is better if it covers those ground truth pairs that are nt covered with high precisions in previous runs ( already : similarly , the potentially remaining gain can be computed as : * no _ over - fitting _ : + while precision is to be maximised , a good pattern should not _ over - fit _ to a single source or target from the training input .* short _ pattern length _ and low _ variable count _ : + if all other considerations are similar , then a shorter pattern or one with less variables is preferable . note , that this is a low priority dimension . a good pattern is not restricted to a shortest path between and .good patterns can be longer and can have edges off the connecting path ( e.g. , see in section [ sec : intro ] ) . *low execution _ time _ & _ timeout _ : + last but not least , to have any practical relevance , good patterns should be executable in a short _ time_. especially during the training phase , in which many queries are performed that take too long , we need to make sure to early terminate such queries on both , the graph pattern learner and the endpoint ( cf . section [ sec : real_world_considerations ] ) . in casethe query was aborted due to a _ timeout _ and only a partial result obtained , it should not be trusted .based on these considerations , we define the _ fitness _ of an individual graph pattern as a tuple of real numbers with the following optimization directions . when comparing the fitness of two patterns , the fitness tuples for now are compared lexicographically . 
1 .* remains * ( max ) : remaining precision sum in the current run ( see section [ sec : coverage ] ) .patterns found in earlier runs are considered better .* score * ( max ) : a derived attribute combining gain with a configurable multiplicative punishment for over - fitting patterns .* gain * ( max ) : the summed gained precision over the remains of the current run : . in case of timeouts or incomplete patternsthe gain is set to 0 .* -measure * ( max ) : -measure for precision and recall of this pattern .* average result lengths * ( min ) : .* recall ( ground truth matches ) * ( max ) : . 7 . * pattern length * ( min ) : the number of triples this pattern contains . 8 .* pattern variables * ( min ) : the number of variables this pattern contains . 9 . * timeout * ( min ) : punishment term for timeouts ( 0.5 for a soft and 1.0 for a hard timeout ) ( see section [ sec : real_world_considerations ] and gain ) . 10 . *query time * ( min ) : the evaluation time in seconds .this is particularly relevant since it hints at the real complexity of the pattern .i.e. , a pattern may objectively have a small number of triples and variables , but its evaluation could involve a large portion of the dataset . in order to start any evolutionary algorithm an initial population needs to be generated .the main objective of the first population is to form a starting point from which the whole search space is reachable via mutations and mating over the generations . while the initial population is not meant to immediately solve the whole problem , a poorly chosen initial population results in a lot of wasted computation time .the starting point of our algorithm are single triple sparql bgp queries , consisting only of variables with at least a and variable , e.g. : \ { } while having a small chance of survival ( direct evaluation would typically yield bad fitness ) , such patterns can re - combine ( see mating in section [ sec : mating ] ) with other patterns to form good and complete patterns in later generations . for prediction capabilities , we are searching graph patterns which connect and, our algorithm mostly fills the initial population with path patterns of varying lengths between and .initially such a path pattern purely consists of variables and is directed from source to target : \ { } for example a pattern of desired length of looks like this : \ { } as longer patterns are less desirable , they are generated with a lower probability .furthermore , we randomly flip each edge of the generated patterns , in order to explore edges in any direction . in order to reduce the high complexity and noise introduced by patterns only consisting of variables, we built in a high chance to immediately subject them to the fix variable mutation ( see section [ sec : mutation ] ) . in each generationthere is a configurable chance for two patterns to mate in order to exchange information . in our algorithmthis is implemented in a way that mating always creates two children , having the benefit of keeping the amount of individuals the same .each child has a dominant and a recessive parent .the child will contain all triples that occur in both parents .additionally , there is a high chance to select each of the remaining triples from the dominant parent and a low chance to select each of the remaining triples from the recessive parent . 
by thisthe children have the same expected length as their parents .furthermore , as variables from the recessive parent could accidentally match variables already being in the child , and this can be beneficial or not , we add a 50 % chance to rename such variables before adding the triples . besides mating , which exchanges information between two individuals ,information can also be gained by mutation .each individual in a population has a configurable chance to mutate by the following ( non exclusive ) mutation strategies .currently , all but one of the mutation operations can be performed on the pattern itself ( local ) without issuing any sparql queries .the mutation operations also have different effects on the pattern itself ( grow , shrink ) and on its result size ( harden , loosen ) . ** introduce var * select a component ( node or edge ) and convert it into a variable ( loosen ) ( local ) * * split var * select a variable and randomly split it into 2 vars ( grow , loosen ) ( local ) * * merge var * select 2 variables and merge them ( shrink , harden ) ( local ) * * del triple * delete a triple statement ( shrink , loosen ) ( local ) * * expand node * select a node , and add a triple from its expansion ( grow , harden ) ( local for now ) * * add edge * select 2 nodes , add an edge in between if available ( grow , harden ) ( local for now ) * * increase dist * increase distance between source and target by moving one a hop away ( grow ) ( local ) * * simplify pattern * simplify the pattern , deleting unnecessary triples ( shrink ) ( local ) ( cf .section [ sec : pattern_simplification ] ) * * fix var * select a variable and instantiate it with an iri , bnode or literal that can take its place ( harden ) ( sparql ) ( see below ) in a single generation sequential mutation ( by different strategies in the order as above ) is possible .we can generally say that introducing a variable loosens a pattern and fixing a variable hardens it .patterns which are too loose will generate a lot of candidates and take a long time to evaluate .patterns which are too hard will generate too few solutions , if any at all .very big patterns , even though very specific can also exceed reasonable query and evaluation times .unlike the other mutations , the fix var mutation is the only one which makes use of the underlying dataset via the sparql endpoint , in order to instantiate variables with an iri , bnode or literal .as it is one of the most important mutations and also because performing sparql queries is expensive , it can immediately return several mutated children . fora given pattern we randomly select one of its variables ( excluding and ) .additionally , we sample up to a defined number of source - target - pairs from the ground truth which are not well covered yet ( high potential gain ) . 
for each of these sampled pairs issue a sparql select query of the form : \ { } we collect the possible instantiations for , count them over all queries and randomly select ( with probabilities according to their frequencies ) up to a defined number of them .each of the selected instantiations forms a separate child by replacing in the current pattern .after each generation the next generation is formed by the surviving ( fittest ) individuals from tournaments of randomly sampled individuals from the previous generation .we also employ two techniques , to counter population degeneration in local maxima and make our algorithm robust ( even against non - optimal parameters ) : * in each generation we re - introduce a small number of newly generated initial population patterns ( see section [ sec : init_population ] ) . *each generation updates a hall of fame , which will preserve the best patterns ever encountered over the generations . in each generationa small number of the best of these all - time best patterns is re - introduced . in the following, we will briefly discuss practical problems that we encountered and necessary optimizations we used to overcome them .we implemented our graph pattern learner with the help of the deap ( distributed evolutionary algorithms in python ) framework .the single most important optimization of our algorithm lies in the reduction of the amount of issued queries by using batch queries .this mostly applies to the queries for fitness evaluation ( section [ sec : fitness ] ) .it is a lot more efficient to run several sub - queries in one big query and to only transport the ground truth pairs to the endpoint once ( via ) , than to ask for each result separately .another mandatory optimization involves the use of timeouts and limits for all queries , even if they usually only return very few results in a short time .we found that a few run - away queries can quickly lead to congestion of the whole endpoint and block much simpler queries .timeouts are also especially useful as a reliable proxy to exclude too complicated graph patterns .even seemingly simple patterns can take a very long time to evaluate based on the underlying dataset and its distribution .apart from timeouts we use a filter which checks if mutants and children are actually desirable ( e.g. , length and variable count in boundaries , pattern is complete and connected ) , meaning fit to live , before evaluating them . if not , the respective parent takes their place in the new population , allowing for a much larger part of the population to be viable .two other crucial optimizations to reduce the overall run - time of the algorithm are parallelization and client side caching .evolutionary algorithms are easy to parallelize via parallel evaluation of all individuals , but in our case the sparql endpoint quickly becomes the bottleneck .ignoring the limits of the queried endpoint will resemble a denial of service attack . for most of our experiments we hence use an internal lod cache with exclusive access for our learning algorithm . in case the algorithm is run against public endpoints we suggest to only use a single thread in order not to disturb their service ( fair use ) .client side caching further helps to reduce the time spent on evaluating graph patterns , by only evaluating them once , should the same pattern be generated by different sequences of mutation and mating operations . 
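The batch-query idea described above can be made concrete as follows: all ground-truth pairs are shipped to the endpoint once (a VALUES block is one way to do this; the exact mechanism used in the original was lost in extraction) and coverage plus per-pair result lengths are read off a single SELECT. The names, the string-based variable renaming and the grouping query are an illustrative sketch only, not the project's actual implementation.

```python
from SPARQLWrapper import SPARQLWrapper, JSON

def batch_coverage(endpoint, pattern, gt_pairs):
    """One query for all ground-truth pairs: per (source, target) pair, report whether
    the pattern covers it and the per-pair precision 1/|result list| used by the fitness."""
    values = "\n".join(f"( <{s}> <{t}> )" for s, t in gt_pairs)
    body = pattern.replace("?target", "?candidate")       # crude rename, good enough for a sketch
    query = f"""
        SELECT ?source ?target
               (COUNT(DISTINCT ?candidate) AS ?n) (MAX(IF(?candidate = ?target, 1, 0)) AS ?hit)
        WHERE {{
            VALUES (?source ?target) {{ {values} }}
            {body}
        }}
        GROUP BY ?source ?target
    """
    client = SPARQLWrapper(endpoint)
    client.setTimeout(30)                                  # client-side timeout in seconds
    client.setQuery(query)
    client.setReturnFormat(JSON)
    stats = {}
    for r in client.query().convert()["results"]["bindings"]:
        n, hit = int(r["n"]["value"]), r["hit"]["value"] == "1"
        stats[(r["source"]["value"], r["target"]["value"])] = {
            "covered": hit, "precision": 1.0 / n if (hit and n) else 0.0}
    for pair in gt_pairs:                                  # pairs the pattern does not match at all
        stats.setdefault(tuple(pair), {"covered": False, "precision": 0.0})
    return stats
```

Recall, average precision and gain for the fitness tuple can then be aggregated from the returned dictionary without further endpoint round trips.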
to identify equivalent patterns despite different syntactic surface forms, we had to solve sparql bgp canonicalization ( -complete ) .we were able to to reduce the problem to rdf graph canonicalization and achieve good practical run - times with rgda1 . in the context of caching ,one other important finding is that many sparql endpoints ( especially the widely used openlink virtuoso ) often return incomplete and thereby non - deterministic results by default .unlike many other search algorithms , an evolutionary algorithm has the benefit that it can cope well with such non - determinism .hence , when caching is used , it is helpful to reduce , but not completely remove redundant queries .last but not least , as our algorithm can create patterns that are unnecessarily complex , it is useful to simplify them .we developed a pattern simplification algorithm , which given a complicated graph pattern finds a minimal equivalent pattern with the same result set wrt .the and variables .the simplification algorithm removes unnecessary edges , such as redundant parallel variable edges , edges between and behind fixed nodes and unrestricting leaf branches .after presenting the main components of our evolutionary algorithm in the previous section , we will now briefly present an interactive visualisation .as the learning of our evolutionary algorithm can produce many graph patterns , the visualisation allows to quickly get an overview of the resulting patterns in different stages of the algorithm .pair from the ground truth training set .the darker its colour the higher the precision for the ground truth pair ., title="fig : " ] pair from the ground truth training set .the darker its colour the higher the precision for the ground truth pair ., title="fig : " ] [ fig : visualisation_coverage ] figure [ fig : visualisation_gp_results ] ( left ) shows a screen shot of the visualisation of a single learned graph pattern . in the sidebar the user can select between individual generations , the results of a whole run or the overall results ( default ) to inspect the outcomes at various stages of the algorithm .afterwards , the individual result graph patterns can be selected .below these selection options the user can inspect statistics about the selected graph pattern including its fitness , a list of matching training ground truth pairs and the corresponding sparql select query for the pattern .links are provided to perform live queries on the sparql endpoint . at each of the stages ,the user can also get an overview of the precision coverage of a single pattern ( as can be seen in figure [ fig : visualisation_coverage ] ( right ) ) or the accumulated coverage over all patterns .as already mentioned in the introduction , the learned patterns can be used to predict targets for a given source .the basic idea is to insert a given source in place of the variable in each of the learned patterns and execute a sparql select query over the variable ( c.f ., in section [ sec : intro ] ) . 
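A minimal sketch of this prediction step is shown below, run against the public DBpedia endpoint with SPARQLWrapper. The concrete triple { ?target dbo:capital ?source } merely stands in for a learned pattern (the patterns quoted in the introduction did not survive extraction), and the function and variable names are illustrative.

```python
from SPARQLWrapper import SPARQLWrapper, JSON

ENDPOINT = "https://dbpedia.org/sparql"
# stand-in for a learned pattern gp_i: "the target has the source as its capital"
PATTERN = "?target <http://dbpedia.org/ontology/capital> ?source ."

def predict_targets(source_uri, pattern=PATTERN, limit=32):
    """Bind ?source in the pattern and return the candidate ?target bindings."""
    client = SPARQLWrapper(ENDPOINT)
    client.setQuery(f"""
        SELECT DISTINCT ?target WHERE {{
            VALUES ?source {{ <{source_uri}> }}
            {pattern}
        }} LIMIT {limit}""")
    client.setReturnFormat(JSON)
    rows = client.query().convert()["results"]["bindings"]
    return [row["target"]["value"] for row in rows]

print(predict_targets("http://dbpedia.org/resource/Berlin"))
# expected to include http://dbpedia.org/resource/Germany
```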
while interesting for manual exploration , for practical prediction purposes the amount of learned graph patternscan easily become too large by discovering many very similar patterns that are only differing in minor aspects .one realisation from visualising the resulting patterns , is that we can use their precision vectors wrt .the ground truth pairs to cluster graph patterns .the -th component of is defined by the precision value corresponding to the -th ground truth source - target - pair : we employ several standard clustering algorithms on and select the best patterns in each cluster as representatives to reduce the amount of queries . by defaultour algorithm applies all of these clustering techniques and then selects the one which minimises the precision loss at the desired number of queries to be performed during prediction . in our testswe could observe , that clustering ( e.g. , with hierarchical scaled euclidean ward clustering ) allows us to reduce the number of performed sparql queries to 100 for all practical purposes with a precision loss of less than .when used for prediction , each graph pattern creates an unordered list of possible target nodes for an inserted source node .we evaluated the following fusion strategies to combine and rank the returned target candidates ( higher fusion value means lower rank ) : * * target occurrences * : a simple occurrence count of each of the targets over all graph patterns . * * scores * : sum of all graph pattern scores ( from the graph pattern s fitness ) for each returned target . * * f - measures * : sum of all graph pattern -measures ( from the graph pattern s fitness ) for each returned target . * * gp precisions * : sum of all graph pattern precisions ( from the graph pattern s fitness ) for each returned target . ** precisions * : sum of the actual precisions per graph pattern in this prediction . by defaultour algorithm will calculate them all , allowing the user to pick the best performing fusion strategy for their use - case .in order to evaluate our graph pattern learner , we performed several experiments which we will describe in the following .we ran our experiments against a local virtuoso 7.2 sparql endpoint containing over g triples , from many central datasets of the lod cloud , denoted as in the following .one of our claims is that our algorithm can learn good sparql queries for a relation represented by a set of ground truth source - target - pairs . in order to evaluate this, we started with simple relations such as `` given a capital return its country '' ( see in section [ sec : intro ] ) .for each , we used a generating sparql query ( such as from section [ sec : intro ] ) to generate , then executed our graph pattern learner and checked if was in the resulting patterns : the result of these experiments is that our algorithm is able to re - identify such simple , readily modelled relations in of our test cases ( typically within the first run , so the first 3 minutes ) . while this might sound astonishing , it is merely a test that our algorithm can perform the simplest of its tasks : if there is a single sparql bgp pattern that models the whole training list in , then our algorithm is quickly able to find it via the fix var mutation in section [ sec : mutate_fix_var ] . 
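A minimal sketch of the late-fusion step for the first four strategies listed above (the fifth, per-prediction precisions, needs the ground-truth bookkeeping and is omitted here). The attributes score, f_measure and precision on the pattern objects are illustrative stand-ins for the corresponding components of the graph pattern's fitness.

from collections import defaultdict

def fuse(per_pattern_targets, patterns, variant="target_occurrences"):
    weight = {
        "target_occurrences": lambda p: 1.0,
        "scores":             lambda p: p.score,
        "f_measures":         lambda p: p.f_measure,
        "gp_precisions":      lambda p: p.precision,
    }[variant]
    fused = defaultdict(float)
    for pattern, targets in zip(patterns, per_pattern_targets):
        for t in targets:
            fused[t] += weight(pattern)
    # higher fused value means lower (better) rank
    return sorted(fused, key=fused.get, reverse=True)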
due to the page limit , we omit further details and instead turn to a more complex relation in the next section .two additional claims are that our algorithm can learn a set of patterns , which cover a complex relation that is not readily modelled in , and that we can use the resulting patterns for prediction .hence , in the following we focus on one such complex relation : human associations . we will present some of the identified patterns and then evaluate the prediction quality .human associations are an important part of our thinking process .an _ association _ is the mental connection between two ideas : a _ stimulus _( e.g. , `` pupil '' ) and a _ response _( e.g. , `` eye '' ) .we call such associations _ strong associations _ if more than 20 % of people agree on the response . in the following , we focus on a dataset of 727 strong human associations ( corresponding to k raw associations ) from the edinburgh associative thesaurus that we previously already mapped to dbpedia entities .the dataset contains stimulus - response - pairs such as ( , ) , ( , ) and ( , ) .we randomly split our 727 ground truth pairs into a training set of 655 and a test set of 72 pairs ( 10 % random split ) .all training , visualising and development has been performed on the training set in order to reduce the chance of over - fitting our algorithm to our ground truth .we ran the algorithm ( ) on with a population size of 200 , a maximum of 20 generations each in a maximum of 64 runs .the first 5 runs of our algorithm are typically completed within 3 , 6 , 9 , 13 and 15 minutes . in the first couple of minutesall of the very simple patterns that model a considerable fraction of the training set s pairs are found . with the mentioned settingsthe algorithm will terminate after around 3 hours .it finds roughly 530 graph patterns with a score > 2 ( cf .section [ sec : fitness ] ) .due to the page limit , we will briefly mention only 3 notable patterns from the resulting learned patterns in this paper .we invite the reader to explore the full results online with the interactive visualisation presented in section [ sec : visualisation ] .the three notable patterns we want to present here are : \ { } \ { } \ { } the first two are intuitively understandable patterns which typically are amongst the top patterns .the first one shows that human associations often seem to be represented via http://purl.org/linguistics/gold/hypernym [ ] in dbpedia ( the response is often a hypernym ( broader term ) for the stimulus ) .the second one shows that associations often correspond to bidirectionally linked wikipedia articles .the third pattern represents a whole class of intra - dataset learning by making use of a connection of the to babelnet s http://www.w3.org/2004/02/skos/core#exactmatch [ ] .as human associations are not readily modelled in dbpedia , it is difficult to assess the quality of the learned patterns directly .hence , we evaluate the quality indirectly via their prediction quality on the test - set . for each of the we generate a ranked target list $ ] of target predictions .the list is the result of one of the fusion variants ( cf .section [ sec : fusion ] ) after clustering ( cf .section [ sec : query_reduction ] ) .for evaluation , we can then check the rank of in ( lower ranks are better ) . if , we set .an example of a ranked target prediction list ( for the fusion method _ precisions _ ) for source is the ranked list : [ , , , , , , , , , ] . 
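A sketch of the rank-based evaluation used below: since exactly one target is considered relevant per source, the average precision of a single prediction reduces to the reciprocal rank of the ground-truth target, and recall@k is the fraction of test pairs whose target appears within the first k fused candidates. This is a schematic helper, not the authors' evaluation code.

def evaluate(test_pairs, ranked_predictions, k=10):
    ranks = []
    for (source, target), ranked in zip(test_pairs, ranked_predictions):
        rank = ranked.index(target) + 1 if target in ranked else float("inf")
        ranks.append(rank)
    recall_at_k = sum(r <= k for r in ranks) / len(ranks)
    # with a single relevant target, AP = 1/rank, and misses contribute 0
    mean_avg_precision = sum(1.0 / r for r in ranks if r != float("inf")) / len(ranks)
    return {"recall@%d" % k: recall_at_k, "map": mean_avg_precision}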
in this casethe ground truth target is at rank .as we can see most of the results are relevant as associations to humans .nevertheless , for the purpose of our evaluation , we will only consider the single corresponding to a as relevant and all other as irrelevant . based on the ranked result lists, we can calculate the recall due to the fact that we only have 1 relevant target per result of any .] , mean average precision ( map ) and normalised discounted cumulative gain of the various fusion variants over the whole test set , as can be seen in table [ tbl : eval ] and figure [ fig : recall ] .we also calculate these metrics for several baselines , which try to predict the target nodes from the 1-neighbourhood ( bidirectionally , incoming or outgoing ) by selecting the neighbour with the highest pagerank , hits score , in- and out - degree . as can be seen , all our fusion strategies significantly outperform the baselines ..recall , map and ndcg for our fusion variants and against baselines . [ cols="<,^,^,^,^,^,^,^,^",options="header " , ] k over the different fusion variants and against baselines . ][ sec : future_work ] in this paper we presented an evolutionary graph pattern learner . the algorithm can successfully learn a set of patterns for a given list of source - target - pairs from a sparql endpoint .the learned patterns can be used to predict targets for a given source .we use our algorithm to identify patterns in dbpedia for a dataset of human associations .the prediction quality of the learned patterns after fusion reaches a recall of 63.9 % and map of 39.9 % , and significantly outperforms pagerank , hits and degree based baselines .the algorithm , the used datasets and the interactive visualisation of the results are available online . in the future, we plan to enhance our algorithm to support literals in the input source - target - pairs , which will allow us to learn patterns directly from lists of textual inputs .further , we are investigating mutations , for example to introduce constraints .we also plan to investigate the effects of including negative samples ( currently we only use positive samples and treat everything else as negative ) .additionally , we plan to employ more advanced late fusion techniques , in order to learn when to trust the prediction of which pattern .as this idea is conceptually close to interpreting the learned patterns as a feature vector ( with understandable and executable patterns to generate target candidates ) , we plan to investigate combinations of our algorithm with approaches that learn vector space representations from knowledge graphs .bizer , c. , lehmann , j. , kobilarov , g. , auer , s. , becker , c. , cyganiak , r. , hellmann , s. : dbpedia - a crystallization point for the web of data .web semantics : science , services and agents on the world wide web 7(3 ) , 154165 ( 2009 ) hees , j. , khamis , m. , biedert , r. , abdennadher , s. , dengel , a. : collecting links between entities ranked by human association strengths . in : eswc . vol .7882 , pp .springer lncs , montpellier , france ( 2013 ) , hees , j. , roth - berghofer , t. , biedert , r. , adrian , b. , dengel , a. : betterrelations : using a game to rate linked data triples . in : ki 2011 : advances in artificial intelligence .. 134138 .springer ( 2011 ) hees , j. , roth - berghofer , t. , biedert , r. , adrian , b. , dengel , a. : betterrelations : collecting association strengths for linked data triples with a game . in : search computing ,. 
pp. 223 - 239 . springer lncs ( 2012 ) kiss , g.r . , armstrong , c. , milroy , r. , piper , j. : an associative thesaurus of english and its computer analysis . in : the computer and literary studies , pp. 153 - 165 . edinburgh university press , edinburgh , uk ( 1973 )
|
efficient usage of the knowledge provided by the linked data community is often hindered by the need for domain experts to formulate the right sparql queries to answer questions . for new questions they have to decide which datasets are suitable and in which terminology and modelling style to phrase the sparql query . in this work we present an evolutionary algorithm to help with this challenging task . given a training list of source - target node - pair examples , our algorithm can learn patterns ( sparql queries ) from a sparql endpoint . the learned patterns can be visualised to form the basis for further investigation , or they can be used to predict target nodes for new source nodes . amongst others , we apply our algorithm to a dataset of several hundred human associations ( such as `` circle - square '' ) to find patterns for them in dbpedia . we show the scalability of the algorithm by running it against a sparql endpoint loaded with billions of triples . further , we use the resulting sparql queries to mimic human associations with a mean average precision ( map ) of 39.9 % and a recall of 63.9 % .
|
poisson s equation and schrdinger s equation are the central equations for atomistic simulations . in case force fields are used , poisson s equation handles the long range electrostatic interactions .if the forces are calculated quantum mechanically , electronic structure calculations have to be performed .selfconsistent electronic structure calculations require the solution of a system where schrdinger s equation is coupled with poisson s equation .multiscale approaches for the solution of these two central equations are widely used , since they are much more efficient for big system sizes than traditional approaches .the multigrid method has been used for the solution of poisson s equation in the context of classical molecular dynamics simulations and it has been used by various groups for electronic structure calculations .several flavors of multigrid for electronic structure calculations have been proposed : its direct solution by the full approximation scheme , the rayleigh quotient multigrid method and a scheme where the the solution of the linear system of equations arising from the preconditioning step is performed by multigrid .the linear system solved in the preconditioning steps is not schrdinger s equation , but poisson s equation .the reason for this is that it is too difficult to find coarse grained representations of the hamiltonian operator if nonlocal pseudopotentials are used and the deferred defect correction scheme justifies the replacement of the hamiltonian by the laplacian .thus , it turns out that in all cases it is poisson s equation that has to be solved using multigrid .another very promising multiscale approach to electronic structure calculations is the use of wavelets as basis sets . as with any large systematic basis setit is also in the wavelet context very important to use efficient preconditioning schemes .diagonal preconditioning is most widely used . however it would be desirable to have at our disposal more powerful preconditioning schemes . using multigrid ideas for preconditioninghas already been proposed in the context of interpolating wavelets .our discussion of prolongation and restriction schemes based on wavelet theory will show that the class of schemes based on interpolating wavelets is not optimal .because of its central importance for atomistic simulations and because it is a prototype equation for the study of new methods we will from now on consider only poisson s equation the solution of the differential equation eq . [ poisson ] can be written as an integral equation gravitational problems are based on exactly the same equations as the electrostatic problem , but we will use in this article the language of electrostatics , i.e. we will refer to as a charge density .the most efficient numerical approaches for the solution of electrostatic problems are based on eq [ poisson ] rather than eq .[ inteq ] .however preconditioning steps found in these methods can be considered as approximate solutions of eq .[ inteq ] .the fact that the green s function is of long range makes the numerical solution of poisson s equation difficult , since it implies that a charge density at a point will have an non - negligible influence on the potential at a point far away . 
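For concreteness, the differential and integral formulations referred to as eq. [poisson] and eq. [inteq] are presumably of the standard electrostatic form (Gaussian units; boundary conditions left implicit), with the 1/|r - r'| kernel being the long-range Green's function just discussed:

\nabla^2 V(\mathbf{r}) = -4\pi\,\rho(\mathbf{r}),
\qquad
V(\mathbf{r}) = \int \frac{\rho(\mathbf{r}')}{|\mathbf{r}-\mathbf{r}'|}\, d^3 r' .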
a naive implementation of eq .[ inteq ] would therefore have a quadratic scaling .it comes however to our help , that the potential arising from a charge distribution far away is slowly varying and does not depend on the details of the charge distribution .all efficient algorithms for solving electrostatic problems are therefore based on a hierarchical multiscale treatment . on the short lengthscales the rapid variations of the potential due to the exact charge distribution of close by sources of charge are treated , on the large length scales the slow variation due to some smoothed charge distribution of far sources is accounted for .since the number of degrees of freedom decreases rapidly with increasing length scales , one can obtain algorithms with linear or nearly linear scaling . in the following, we will briefly summarize how this hierarchical treatment is implemented in the standard algorithms * fourier analysis : if the charge density is written in its fourier representation the different length scales that are in this case given by decouple entirely and the fourier representation of the potential is given by the fourier analysis of the real space charge density necessary to obtain its fourier components and the synthesis of the potential in real space from its fourier components given by eq .[ fourpot ] can be done with fast fourier methods at a cost of where n is the number of grid points .the solution of poisson s equation in a plane wave is thus a divide and conquer approach where the division is into the single fourier components .* multigrid methods ( mg ) : trying to solve poisson s equation by any relaxation or iterative method ( such as conjugate gradient ) on the fine grid on which one finally wants to have the solution leads to a number of iterations that increases strongly with the size of the grid .the reason for this is that on a grid with a given spacing one can efficiently treat fourier components with a wavelength that is comparable to the the grid spacing , but the longer wavelength fourier components converge very slowly .this increase in the number of iterations prevents a straightforward linear scaling solution of eq .[ poisson ] . in the multigrid method , pioneered by a. brandt ,one is therefore introducing a hierarchy of grids with a grid spacing that is increasing by a factor of two on each hierarchic level .in contrast to the fourier method where the charge and the potential are directly decomposed into components characterized by a certain length scale , it is the residue that is passed from the fine grids to the coarse grids in the mg method .the residue corresponds to the charge that would give rise to a potential that is the difference between the exact potential and the approximate potential at the current stage of the iteration .the solution of partial differential equations in a wavelet basis is typically done by preconditioned iterative techniques .the diagonal preconditioning approach , that is based on well established plane wave techniques , will be presented in the next section .the section after the next will introduce multigrid for poisson s equation in a wavelet basis . even though the fundamental similarities between wavelet and multigrid schemes have been recognized by many workers ( such as in ref . ) this sections contains to the best of our knowledge the first thorough discussion of how both methods can profit from each other . 
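The Fourier-space solution described above can be sketched in a few lines for a periodic cubic grid; the Gaussian-units relation between the Fourier components of the potential and of the charge density, and the grid size, are illustrative only, and the k = 0 component is set to zero, which fixes the mean of the potential.

import numpy as np

def poisson_fft(rho, box_length=1.0):
    # rho: charge density on an n x n x n periodic grid
    n = rho.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=box_length / n)
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    rho_k = np.fft.fftn(rho)
    v_k = np.zeros_like(rho_k)
    nonzero = k2 > 0
    v_k[nonzero] = 4.0 * np.pi * rho_k[nonzero] / k2[nonzero]  # V_k = 4*pi*rho_k/k^2
    return np.real(np.fft.ifftn(v_k))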
within wavelet theory onehas two possible representations of a function , a scaling function representation and a wavelet representation . contrast to the scaling function representation , the wavelet representation is a hierarchic representation .the wavelet at the hierarchic level is related to the mother wavelet by the characteristic length scale of a wavelet at resolution level is therefore proportional to .a wavelet on a certain level is a linear combination of scaling functions at the higher resolution level scaling functions at adjacent resolution levels are related by a similar refinement relation and hence also any wavelet at a resolution level is a linear combination of the highest resolution scaling functions .the so - called fast wavelet transform allows us to transform back and forth between a scaling function and a wavelet representation .let us now introduce wavelet representations of the potential and the charge density different levels do not completely decouple , i.e the components on level , , of the exact overall solution do not satisfy the single level poisson equation within the chosen discretization scheme .this is due to the fact that the wavelets are not perfectly localized in fourier space , i.e. many frequencies are necessary to synthesize a wavelet .however the amplitude of all these frequencies is clearly peaked at a nonzero characteristic frequency for any wavelet with at least one vanishing moment . from the scaling property ( eq . [ scalrel ] ) it follows , that the frequency at which the peak occurs changes by a factor of two on neighboring resolution grids .this suggest that the coupling between the different resolution levels is weak . in the preceding paragraph we presented the mathematical framework only for the one - dimensional case .the generalization to the 3-dim case is straightforward by using tensor products .also in the rest of the paper only the one - dimensional form of the mathematical formulas will presented for reasons of simplicity .it has to be stressed however that all the numerical results were obtained for the three - dimensional case and with periodic boundary conditions .preconditioning requires finding a matrix with a simple structure that has eigenvalues and eigenvectors that are similar to the ones of the matrix in question .the structure has to be simple in the sense that it allows us to calculate the inverse easily .the simplest and most widely used structure in this respect is the structure of a diagonal matrix .as will be shown a diagonal preconditioning matrix can be found in a wavelet basis set and preconditioned conjugate gradient type methods are then a possible method for the solution of poisson s equation expressed in differential form ( eq.[poisson ] ) . as one adds successive levels of wavelets to the basis set the largest eigenvalue grows by a factor of 4 for each level .this can easily be understood from fourier analysis .as one increases the resolution by a factor of 2 ( i.e. 
increases the largest fourier vector by a factor of 2 ) the largest eigenvalue increases by a factor of 4 .this basic scaling property of the eigenvalue spectrum can easily be modeled by a diagonal matrix , where all the diagonal elements are all equal on one resolution level , but increase by a factor of 4 as one goes to a higher resolution level .it is of course clear that all the details of the true spectrum are not reproduced by this approximations .the true spectrum consists of a large number of moderately degenerate eigenvalues .the spectrum of the approximate matrix consists of a few highly degenerate eigenvalues where each eigenvalue has all the scaling functions of one resolution level as its eigenfunctions .the true eigenfunctions are of course mixtures of scaling functions on different resolution levels , but if the wavelet family is well localized in fourier space the contributions from neighboring resolution levels are weak .the localization in fourier space increases with the number of vanishing moments and therefore this diagonal preconditioning method works for instance much better for lifted interpolating wavelets than for ordinary interpolating wavelets .another line of arguments , that shows the weakness of the diagonal preconditioning , is the following .the preconditioning matrix can also be considered to be the diagonal part of the matrix representing the laplacian .since the diagonal elements increase by a factor of 4 on each higher resolution level , the spectrum of the matrix has again the correct scaling properties .[ factor4 ] involves both the wavelets and their duals since it was written down for the most general context of a petrov - galerkin approach in a biorthogonal basis . in a pure galerkin context or fororthogonal wavelet families the have to be replaced by the s . in the 3-dimensional casethere are three different types of wavelets ( products of 2 scaling functions and 1 wavelet , products of 1 scaling function and 2 wavelets and products of 3 wavelets ) .each type of wavelet gives rise to a different diagonal element , but again all these diagonal elements differ by a factor of 4 on different resolution levels . because of the weak coupling between different resolution levels discussed above , we expect the matrix elements of the laplacian involving wavelets at two different resolutions levels to be small .the numerical examination of the matrix elements ( fig . [ decay ] ) confirms this guess .it also shows that within one resolution level the amplitude of the matrix elements decays rapidly with respect to the distance of the two wavelets and is zero as soon as they do not any more overlap. nevertheless , some matrix elements coupling nearest neighbor wavelets are not much smaller than the diagonal elements .one also finds a few matrix elements between different resolution levels that are less than one order of magnitude smaller than the one within a single resolution level .( 5.,18 . 
)( -5.,-1.5 ) the fact that all off - diagonal matrix elements are neglected in current precondition schemes explains their relatively slow convergence .it amounts to finding an approximate greens function that is diagonal in a wavelet representation .this is obviously a rather poor approximation .let us nevertheless stress that this diagonal matrix obtained by inverting a diagonal approximation to the laplacian is a much more reasonable approximation for preconditioning purposes than the diagonal part of the greens function .the diagonal part of the greens function has actually completely different scaling properties .the elements increase by a factor of 2 as one goes to higher resolution levels instead of decreasing by a factor of 4 .the multigrid methods to be discussed later include also in an approximative way through gauss - seidel relaxations this off - diagonal coupling within each resolution block as well as the coupling between the different resolutions levels . in the following we will present some numerical results for the solution of the 3-dimensional poisson equation in a wavelet basis using the diagonal preconditioning approach .all the methods presented in this paper will have the property that the convergence rate is independent of the grid size .we have chosen grids for all the numerical examples .the fact that the number of iterations necessary to reach a certain target accuracy is independent of the system size together with the fact that a single iteration involves a cost that is linear with respect to the number of grid points ensures that the poisson s equation can be solved with overall linear scaling .whereas we use here only simple equidistant grids , this linear scaling has already been demonstrated with highly adaptive grids in problems that involve many different length scales .the preconditioning step using simply the diagonal is given by in analogy to eq .[ potrep],[rhorep ] , the s are the wavelet coefficients on the -th resolution level of the residue in a wavelet representation . is the approximate solution at a certain iteration of the solution process .the preconditioned residue is then used to update the approximate potential . in the case of the preconditioned steepest descent method used herethis update simply reads where is an appropriate step size . as discussed above the constant in eq .[ pwprec ] depends in the three dimensional case upon which type of wavelet is implied since it is the inverse of the laplace matrix element between two wavelets of this type .[ sd_pw ] shows numerical results for several wavelet families .the slow convergence of the interpolating wavelets is due to the fact that they have a non - vanishing average and therefore a non - vanishing zero fourier component .hence they are all localized in fourier space at the origin instead of being localized around a non - zero frequency .this deficiency can be eliminated by lifting .the fourier power spectrum of the lifted wavelets tends to zero at the origin with zero slope for the family with two vanishing moments considered here .lifting the wavelet twice leads to 4 vanishing moments and a even better localization in fourier space .the improvement in the convergence rate is however only marginal .the higher 8-th order lifted interpolating wavelet is smoother than its 6-th order counterpart and hence better localized in the high frequency part .this also leads to a slightly faster convergence .( -7.,-1.5 ) ( 1.,-1.5 ) combining the diagonal preconditioning ( eq . 
[ pwprec ] ) with a fgmres convergence accelerator instead of using it within a steepest descent minimization gives a significantly faster convergence .the number of iterations can nearly be cut into half as shown in fig [ sd_pw ] . up to nowwe have only considered the case where the elements of the matrix representing the laplacian were calculated within the same wavelet family that was used to analyze the residue by wavelet transformations to do the preconditioning step .more general schemes can however be implemented .it is not even necessary that the calculation of the laplacian matrix elements is done in a wavelet basis .one can instead use simple second order finite differences , which in the one - dimensional case are given by or some higher order finite differences for the calculation of the matrix elements .the scaling relation eq .[ factor4 ] does not any more hold exactly , but it is fulfilled approximately and the schemes works as well as in the pure wavelet case as is shown in fig .[ pw_fd ] .( -5.,-1.5 )the aim of this part of the article is twofold .one aspect is how to speed up the convergence of the solution process for poisson s equation expressed in a wavelet basis set compared to the diagonal preconditioning approach .the other aspect is how to accelerate multigrid schemes by incorporating wavelet concepts .the part therefore begins with a brief review of the multigrid method .[ mgv ] schematically shows the algorithm of a standard multigrid v cycle .even though the scheme is valid in any dimension , a two dimensional setting is suggested by the figure , since the data is represented as squares . sinceless data is available on the coarse grids , the squares holding the coarse grid data are increasingly smaller .it has to be stressed that the remarks of the end of the first part remain valid and that in particular all the numerical calculations are 3-dim calculations .the upper half of the figure shows the first part of the v cycle where one goes from the finest grid to the coarsest grid and the lower half the second part where one goes back to the finest grid .( -8.,-9.5 ) in the first part of the v cycle the potential on all hierarchic grids is improved by a standard red - black gauss - seidel relaxation denoted by gs .the gs relaxation reduces the error components of wavelengths that are comparable to the grid spacing very efficiently . in the 3-dimensional case we are considering here , the smoothing factor is .445 ( page 74 of ref ) .since we use 2 gs relaxations roughly one quarter of the error around the wavelength survives the relaxations on each level . as a consequence the residue contains mainly longer wavelengths which then in turn are again efficiently eliminated by the gs relaxations on the coarser grids . nevertheless , the remaining quarter of the shorter wavelengths surviving the relaxations on the finer grid pollutes the residue on the coarser grid through aliasing effects .aliasing pollution means that even if the residue on the finer grid would contain only wavelengths around ( and in particular no wavelength around ) the restricted quantity would not be identically zero . in the second part of the v cyclethe solutions obtained by relaxation on the coarse grid are prolongated to the fine grids and added to the existing solutions on each level .aliasing pollution is again present in the prolongation procedure . due to the accumulated aliasing errors 2 gs relaxations are again done on each level before proceeding to the next finer level . 
to a first approximation the different representations of at the top of fig .[ mgv ] represent fourier filtered versions of the real space data set on the finest grid .the large data set contains all the fourier components , while the smaller data sets contain only lower and lower frequency parts of . in the 1-dimensional case only the lower half of the spectrum is still dominating when going to the coarse grid , in the 3-dimensional case it is only one eight of the spectrum . because of the various aliasing errors described above the fourier decomposition is however not perfectobviously it would be desirable to make this fourier decomposition as perfect as possible . in the absence of aliasing errors, the gs relaxations would not have to deal with any fourier components spilling over from higher or lower resolution grids .let us now postulate ideal restriction and prolongation operators and discuss their properties .as follows from the discussion above , they should provide for a dyadic decomposition of the fourier space .consequently the restriction operator would have to be a perfect low pass filter for the lower half of the spectrum ( in the 1-dim case , in the 3-dim case only 1/8 would survive ) .we will refer to this property in following as frequency separation property .the degree of perfectness can be quantified by the dependent function where is the number of grid points on resolution level .the dependence enters through the requirement that the signal on the finest resolution level is a pure harmonic , the function for a perfect restriction operator is shown in fig [ f_ideal ] .such ideal grid transfer operators have to satisfy a second property , that will be baptized the identity property .the prolongation operator has to bring back exactly onto the finer grid the long wavelength associated with the coarser grid .this can only be true if prolongation followed by a restriction gives the identity .a third desirable property would be that the coarse charge density represents as faithfully as possible the significant features of the original charge density .in particular the coarse charge density should have the same multipoles and most importantly the same monopole .the conservation of the monopole just means that the total charge is identical on all grid levels .this third property will be called the multipole conservation property in the following .( -4.5,-1.5 ) with the ideal grid transfer operators , the solution of poisson s equation would be a divide and conquer approach in fourier space and it could be done with a single v cycle with a moderate number of gs relaxations on each resolution level .in contrast to a solution in a plane wave basis the division would not be into single fourier components but into dyadic parts of the fourier spectrum .for the case of our postulated ideal grid transfer operators it also would not matter whether the gs relaxations are applied when going up or going down , only the total number of gs relaxations on each level would count . to establish the relation between multigrid grid transfer operators and wavelet theory ,let us point out a formal similarity . for vanishing coefficients ,the wavelet analysis step is given by ( eq .26 of ref . ) and is formally identical to a restriction operation .a wavelet synthesis step for the coefficients is given by ( formula 27 of ref . 
) and is formally identical to a prolongation operation .one can now for example easily see that the injection scheme for the restriction and linear interpolation for the prolongation part corresponds to a wavelet analysis and synthesis steps for 1st order interpolating wavelets. using the values of the filters and for interpolating wavelets we obtain and which is the standard injection and interpolation . as a consequence of the fact that it can be considered as a wavelet forward and backward transformation ,the combination of injection and interpolation satisfy the identity property of our ideal grid transfer operator pair , namely that applying a restriction onto a prolongation gives the identity .usually injection is replaced by the full weighting scheme , this scheme has the advantage that it conserves averages , i.e it satisfies the monopole part of the multipole conservation property of an ideal restriction operator . applying it to a charge densitythus ensures that the total charge is the same on any grid level .trying to put the full weighting scheme into the wavelet theory framework gives a filter with nonzero values of , , this filter does not satisfy the orthogonality relations of wavelet theory ( formula 8 of ref . ) with the filter corresponding to linear interpolation .hence a prolongation followed by a restriction does not give the identity .a pair of similar restriction and prolongation operators that conserve averages can however be derived from wavelet theory .instead of using interpolating wavelets we have to use lifted interpolating wavelets . in this waywe can obtain both properties , average conservation and the identity for a prolongation restriction sequence . using the filters derived in ref . we obtain let us finally discuss the first property of our postulated ideal restriction operator , namely that it is a perfect low pass filter . obviously any finite length filter can only be an approximate perfect low pass filter .fig [ fan ] shows the function for several grid transfer operators .one clearly sees that the full weighting operator is a poor low pass filter , the filter derived from second degree lifted wavelets is already better and the filters obtained from 10th degree daubechies wavelets and twofold lifted 6th order interpolating wavelets are best .the daubechies 6th degree filter is intermediate and of nearly identical quality as the one that is a mixture of the full weighting and second degree lifted wavelet filters .in contrast to the former , the later does however not fulfill the identity property .( 5.,18 . )( -7.5,11 . )( 1.5,11 . )( -7.5,5 . )( 1.5,5 . )( -7.5,-1 . )( 1.5,-1 . )the degree of perfectness for the frequency separation is also related to the multipole conservation property of our postulated ideal grid transfer operators .as one sees from fig .[ fan ] filters which correspond to wavelet families with many vanishing are closer to being ideal for frequency separation than those with few vanishing moments . at the same timethe number of vanishing moments determines how many multipoles are conserved when the charge density is brought onto the coarser grids .the right panel of fig .[ comp ] shows the convergence rate of a sequence of v cycles for the full weighting / interpolation ( eq . [ fullweight],[interpol ] ) scheme and various wavelet based schemes , namely the scheme obtained from second order lifted wavelets ( eq . 
[ myrestriction],[myprolongation ] ) , the corresponding scheme , but obtained from twofold lifted 6-th order wavelets as well as schemes obtained from 6th and 10th order daubechies wavelets .the numerical values for the filters are listed in the appendix .one can observe a clear correlation between the convergence rate and the the degree of perfectness of the function . a high degree of perfectnessis particularly useful in connection with high order discretizations of the laplacian .most of the filters of the grid transfer operators are longer than the standard full weighting filter , which just has 3 elements .the lifted 2nd order interpolating wavelet restriction filter has for instance 5 elements and the 6-th degree daubechies filter 6 elements .this does however not lead to a substantial increase of the cpu time .this comes from the fact that on modern computers the transfer of the data into the cache is the most time consuming part .how many numerical operations are then performed on these data residing in cache has only a minor influence on the timing .the new wavelet based schemes for restriction and prolongation are therefore more efficient than the full weighting scheme , both for finite difference discretizations and scaling function basis sets .it is also obvious that the multigrid approach for scaling / wavelet function basis sets is more efficient than the diagonal preconditioning approach .the identity property for a restriction prolongation operator pair was only necessary for the case of operators where the restriction part is a perfect low pass filter .one might therefore wonder how useful it is in the context of the only nearly perfect filters .the numerical experience suggests that it is nevertheless a useful property .one can for example compare the convergence rates using either the 6-th order daubechies filters or the filter that is the average of full weighting and lifted 2nd order wavelet filters .[ fan ] shows that their restriction parts have very similar functions .nevertheless we always found a better convergence rate with the daubechies filter which satisfies the identity property .for the 2-dimensional poisson equation it has been shown , that the convergence rate compared to the standard full weighting scheme can be improved by tailoring grid transfer operators for the relaxation scheme used .the theoretical foundations for this is furnished by local fourier analysis .the same approach could certainly also be used for the 3-dimensional case considered here .it is to be expected that the grid transfer operators found by such an optimization would be very close to the ones that we have obtained from wavelet theory .( 5.,18 . )( -7.5,11 . )( 1.5,11 . )( -7.5,5 . )( 1.5,5 . )( -7.5,-1 . )( 1.5,-1 . ) the main justification for the relaxations in the upper part of the traditional multigrid algorithm shown in fig .[ mgv ] is to eliminate the high frequencies .this can however be done directly by fast wavelet transformations based on wavelets that have good localization properties in frequency space such as lifted interpolating wavelets . 
as a consequencethe traditional multigrid algorithms can be simplified considerably as shown in fig .using wavelet based restriction and prolongation operators we can completely eliminate the gs relaxation in the first part of the v cycle where we go from the fine grid to the coarsest grid .we baptize such a simplified v cycle a halfway v cycle .the numerical results , obtained with the halfway v cycle , shown in the right hand plots of fig . [ comp ] , demonstrate that the convergence is slightly faster than for the traditional multigrid algorithm based on the same restriction and prolongation scheme .in addition one step is faster .it is not necessary to calculate the residue after the gs relaxations .otherwise the number of gs relaxations and restrictions / prolongations is identical in the full and halfway v cycle .on purpose no cpu times are given in this context because optimization of certain routines can entirely change these timings . because the residue is never calculated in the halfway v cycle , the memory requirements are also reduced .( 5.,5.5 ) ( -8.,-11 . ) the number of gs relaxations in the halfway v cycle was chosen to be 4 in order to allow for an unbiased comparison with the traditional v cycle where also 4 gs relaxations were done on the finest grid level . for optimal overall efficiency putting the number of gs relaxation to 3 is usually best , with the values of 2 and 4 leading to a modest increase in the computing time .the convergence rate of halfway v cycles as a function of the number of gs relaxations on the finest grid level is shown in fig [ num_gs ] .( -7.5,-1.5 ) ( 1.5,-1.5 ) in all the previous examples we specified the number of gs relaxations on the finest grid level . on the coarser grid levelsthe number of iterations was allowed to increase by a factor of two per grid level . in this way it was practically always possible to find the exact solution on the most coarse grid .in addition we found that this trick slightly reduces the number of iterations and the total cpu time .the overall behavior of all the different methods is however identical when the number of gs relaxation is constant on each grid level .our results demonstrate that halfway v cycles with the restriction and prolongation steps based on wavelet theory are the most efficient approach for the solution of the 3-dimensional poisson s equation .it is most efficient both for finite difference discretizations and for the case where scaling functions or wavelets are used as basis functions .we expect that the approach should also be the most efficient one in connection with finite elements .it is essential that the wavelet family used for the derivation of the restriction and prolongation schemes has at least one vanishing moment and conserves thus average quantities on the various grid levels .wavelet families with more vanishing moments do not lead an appreciable increase of the convergence rate compared to the case of one vanishing moment for low order discretizations of poisson s equation , but lead a modest further increase for high order discretizations . 
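A sketch of a single halfway V cycle started from a zero initial guess, assuming numpy-array grids: the charge density is copied to the coarser levels by the wavelet-based restriction filter without any relaxation or residue calculation, and on the way up the potential is prolongated and relaxed against the restricted density of each level, with the number of sweeps allowed to grow by a factor of two per coarser level as described above. restrict_wavelet(), prolong_wavelet() and smooth() are stand-ins for the lifted-interpolating-wavelet grid-transfer operators and the Gauss-Seidel sweep; this is a structural illustration, not the authors' code.

def halfway_v_cycle(rho_fine, levels, smooth, restrict_wavelet, prolong_wavelet,
                    n_gs=3):
    rho = [rho_fine]
    for _ in range(levels):                      # downward: density only, no GS
        rho.append(restrict_wavelet(rho[-1]))
    v = 0 * rho[-1]                              # start from zero on the coarsest grid
    for lvl in range(levels, -1, -1):            # upward: relax, then prolongate
        sweeps = n_gs * 2 ** lvl                 # more sweeps on coarser grids
        v = smooth(v, rho[lvl], iterations=sweeps)
        if lvl > 0:
            v = prolong_wavelet(v)
    return v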
in the case where a wavelet family was used to discretize the laplace operator , it is best to use the same wavelet family to construct the grid transfer operators .in addition to increased efficiency of the proposed halfway v cycle in terms of the cpu time , it is also simpler than the standard v cycle .this makes not only programming easier , but also reduces the memory requirements .i was fortunate to have several discussion with achi brandt about this work .his great insight on multigrid methods , that he was willing to share with me , helped a lot to improve the manuscript .i thank him very much for his interest and advice .filter for twofold lifted 6th order interpolating wavelets : =75/128,=-25/256,=3/256 , =2721/4096 , =9/32 , =-243/2048 , =-1/32 , =87/2048 , =-13/2048 , =3/8192 .the values for negative indices follow from the symmetry and .filters for 6-th order daubechies wavelets : =0.3326705529500826159985d0 , =0.8068915093110925764944d0 , =0.4598775021184915700951d0 , =-0.1350110200102545886963d0 , =-0.0854412738820266616928d0 , = 0.0352262918857095366027d0 . . filters for 10-th order extremal daubechies wavelets : =.1601023979741929d0 , =.6038292697971897d0 , =.7243085284377729d0 , =.1384281459013207d0 , =-.2422948870663820d0 , =-.0322448695846384d0 , =.0775714938400457d0 , = -.0062414902127983d0 , =-.0125807519990820d0 , =.0033357252854738d0 . .a. brandt , multiscale scientific computation : review 2001 , in t. j. barth , t. f. chan and r. haimes ( eds . ) _ multiscale and multiresolution methods : theory and applications _springer verlag , heidelberg , 2001 ; or http://www.wisdom.weizmann.ac.il/ achi/ w. dahmen , s. pr , r. schneider : _ wavelet approximation methods for pseudo - differential equations ii : matrix compression and fast solution _ , advances in computationnal mathematics,*1 * , ( 1993 )
|
it is shown how various ideas that are well established for the solution of poisson s equation using plane wave and multigrid methods can be combined with wavelet concepts . the combination of wavelet concepts and multigrid techniques turns out to be particularly fruitful . we propose a modified multigrid v cycle scheme that is not only much simpler , but also more efficient than the standard v cycle . whereas in the traditional v cycle the residue is passed to the coarser grid levels , this new scheme does not require the calculation of a residue . instead it works with copies of the charge density on the different grid levels that were obtained from the underlying charge density on the finest grid by wavelet transformations . this scheme is not limited to the pure wavelet setting , where it is faster than the preconditioned conjugate gradient method , but equally well applicable for finite difference discretizations .
|
let denote a sequence of independent , identically distributed ( iid ) light tailed ( their moment generating function is finite in a neighborhood of zero ) non - lattice ( modulus of their characteristic function is strictly less than one ) random vectors taking values in , for .in this paper we consider the problem of efficient simulation estimation of the probability density function of at points away from , and the tail probability for sets that do not contain and essentially are affine transformations of the non - negative orthant of .we develop an efficient simulation estimation methodology for these rare quantities that exploits the well known saddle point representations for the probability density function of obtained from fourier inversion of the characteristic function of ( see e.g. , , and ) .furthermore , using parseval s relation , similar representations for are easily developed . to illustrate the broader applicability of the proposed methodology, we also develop similar representation for in a single dimension setting , for , and using it develop an efficient simulation methodology for this quantity as well .the problem of efficient simulation estimation of the tail probability density function has not been studied in the literature , although , from practical viewpoint its clear that visual inspection of shape of such density functions provides a great deal of insight into the tail behavior of the sums of random variables .another potential application maybe in the maximum likelihood framework for parameter estimation where closed form expressions for density functions of observed outputs are not available , but simulation based estimators provide an accurate proxy .the problem of efficiently estimating via importance sampling , besides being of independent importance , may also be considered a building block for more complex problems involving many streams of i.i.d .random variables ( see e.g. , , for a queuing application ; for applications in credit risk modeling ) . this problem has been extensively studied in rare event simulation literature ( see e.g. , , , , , , ) .essentially , the literature exploits the fact that the zero variance importance sampling estimator for , though unimplementable , has a markovian representation .this representation may be exploited to come up with provably efficient , implementable approximations ( see and ) .sadowsky and bucklew in ( also see ) developed exponential twisting based importance sampling algorithms to arrive at unbiased estimators for that they proved were asymptotically or weakly efficient ( as per the current standard terminology in rare event simulation literature , see e.g. , and for an introduction to rare event simulation .popular efficiency criteria for rare event estimators are also discussed later in section [ popular : measure ] ) .the importance sampling algorithms proposed by were state independent in that each was generated from a distribution independent of the previously generated .blanchet , leder and glynn in also considered the problem of estimating where they introduced state dependent , exponential twisting based importance sampling distributions ( the distribution of generated depended on the previously generated ) .they showed that , when done correctly , such an algorithm is strongly efficient , or equivalently has the bounded relative error property .the problem of efficient estimation of the expected overshoot ] , where is another probability measure such that is absolutely continuous w.r.t . 
, with denoting the associated radon - nikodym derivative , or the likelihood ratio , and is the expectation operator under . the importance sampling unbiased estimator of is obtained by taking an average of generated iid samples of under . note that by setting the simulation output is almost surely , signifying that such a provides a zero variance estimator for .note that the relative width of the confidence interval obtained using the central limit theorem approximation is proportional to the ratio of the standard deviation of the estimator divided by its mean .therefore , the latter is a good measure of efficiency of the estimator .note that under naive simulation , when ( for any set , denotes its indicator ) , the standard deviation of each sample of simulation output equals that when divided by , the ratio increases to infinity as . below we list some criteria that are popular in evaluating the efficacy of the proposed importance sampling estimator ( see ) . here, denotes the variance of the estimator under the appropriate importance sampling measure .a given sequence of estimators for quantities is said * to be _ weakly efficient _ or _ asymptotically efficient _ if for all ; * to be _ strongly efficient _ or to have _ bounded relative error _ if * to have _ asymptotically vanishing relative error _ if recall that denote a sequence of independent , identically distributed light tailed random vectors taking values in .let denote the components of , each taking value in .let denote the distribution function of .denote the moment generating function of by , so that =e[e^{\theta_1x_1 ^ 1+\theta_2x_1 ^ 2+\cdots+\theta_dx_1^d}],\ ] ] where and for the euclidean inner product between them is denoted by the characteristic function ( cf ) of is given by =e[e^{\iota(\theta_1x_1 ^ 1+\theta_2x_1 ^ 2+\cdots+\theta_dx_1^d)}]\ ] ] where . in this paperwe assume that the distribution of is non - lattice , which means that for all .let denote the cumulant generating function ( cgf ) of .we define to be the effective domain of , that is throughout this article we assume that , the interior of .the large deviations rate function ( see e.g. , ) associated with is defined as this can be seen to equal whenever there exists such that .( here , denotes the gradient of ) .now consider the problem of estimating .let denote the exponentially twisted distribution associated with when the twisting parameter equals .let denote the .furthermore , let solve the equation . 
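As a toy illustration of the estimator and of how its relative error is measured, the following sketch estimates the tail probability of the sample mean exceeding a for standard normal increments, for which the cumulant generating function is theta^2/2 and the exponentially twisted distribution is simply N(theta, 1). This is a one-dimensional warm-up chosen because everything is available in closed form, not the algorithm proposed in this paper.

import numpy as np

def twisted_is_estimate(n=50, a=0.5, samples=10_000, rng=None):
    rng = rng or np.random.default_rng(0)
    theta = a                                   # solves Lambda'(theta) = a
    x = rng.normal(loc=theta, scale=1.0, size=(samples, n))
    s = x.sum(axis=1)
    # likelihood ratio of the original measure w.r.t. the twisted one
    likelihood_ratio = np.exp(-theta * s + n * theta**2 / 2.0)
    z = likelihood_ratio * (s / n >= a)
    return z.mean(), z.std(ddof=1) / np.sqrt(samples)   # estimate, std. error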
under the assumption that such a exists , propose an importance sampling measure under which each is iid with the new distribution function .then , they prove that under this importance sampling measure , when is convex , the resulting estimator of is weakly efficient .see and for a sense in which this distribution approximates the zero variance estimator for .since , , it is easy to see that under the exponentially twisted distribution , each has mean .as mentioned in the introduction , consider a variant importance sampling measure where the distribution of depends on the generated .modulo some boundary conditions , they choose an exponentially twisted distribution to generate so that its mean under the new distribution equals .they prove that the resulting estimator is strongly efficient under the restriction that is convex and has a twice continuously differentiable boundary .later in section 5 , we compare the performance of the proposed algorithm to the one based on exponential twisting developed by as well as with that proposed by .in this section we first develop a saddle point based representation for the probability density function ( pdf ) of in proposition [ rep : density ] ( see e.g. , , and ) .we then develop an approximation to the zero variance estimator for this pdf .our main result is theorem [ mainthm0 ] , where we prove that the proposed estimator has an asymptotically vanishing relative error .some notation is needed in our analysis .let denote the euclidean norm of by . for a square matrix , will denote the determinant of , while norm of is denoted by let denote the hessian of for , this is strictly positive definite , let be the inverse of the unique square root of .[ rep : density ] suppose is strictly positive definite for some . 
furthermore , suppose that \int_{t\in\re^d}|m(\theta+\iota t)|\,dt<\infty for some such \theta in the interior of the effective domain of \lambda . then , the density function f_{\bar x_n} of \bar x_n exists for all n\ge 1 and its value at any point x_0\in\re^d is given by :
\[ f_{\bar x_n}(x_0)=\left(\frac{n}{2\pi}\right)^{\frac{d}{2}}\frac{\exp\left[n\left\{\lambda(\theta)-\theta\cdot x_0\right\}\right]}{\sqrt{\text{det}(\lambda''(\theta))}}\int_{v\in \re^d}\psi(n^{-\frac{1}{2}}a(\theta)v,\theta , n)\,\phi(v)\,dv , \label{density_estimate}\]
where
\[ \psi(y,\theta , n)=\exp\left[n\left\{\lambda(\theta+\iota y)-\lambda(\theta)-\iota y\cdot x_0+\tfrac{1}{2}\,y^{t}\lambda''(\theta)\,y\right\}\right]\]
and \phi denotes the standard normal density on \re^d . indeed ,
\[\begin{aligned}
f_{\bar x_n}(x_0)&=\left(\frac{1}{2\pi}\right)^d\int_{t\in\re^d}e\left[e^{\iota t\cdot\bar x_n}\right]e^{-\iota(t\cdot x_0)}\,dt \label{inversion : formula}\\
&=\left(\frac{1}{2\pi}\right)^d\int_{t\in\re^d}m^n\left(\frac{\iota t}{n}\right)e^{-\iota ( t\cdot x_0)}\,d t\quad[m_{\bar{x}_n}\ \text{written in terms of}\ m]\\
&=\left(\frac{n}{2\pi}\right)^d\int_{s\in\re^d}m^n(\iota s)e^{-n\iota ( s\cdot x_0)}\,d s\quad[\text{substituting}\ s=t/n]\\
&=\left(\frac{n}{2\pi\iota}\right)^d\int_{\theta_1-\iota\infty}^{\theta_1+\iota\infty}\cdots\int_{\theta_d-\iota\infty}^{\theta_d+\iota\infty}e^{n[\lambda(s)-s\cdot x_0]}\,ds_1\cdots ds_d \label{cauchy}\\
&=\left(\frac{n}{2\pi\iota}\right)^d\int_{y\in\re^d}\exp\left[n\left\{\lambda(\theta+\iota y)-(\theta+\iota y)\cdot x_0\right\}\right]\,(\iota)^d\,dy\\
&=\left(\frac{n}{2\pi}\right)^d\exp\left[n\left\{\lambda(\theta)-\theta\cdot x_0\right\}\right]\int_{y\in\re^d}\psi(y,\theta , n)\,\exp\left\{-\frac{n}{2}y^t\lambda''(\theta)y\right\}\,dy\\
&=\left(\frac{n}{2\pi}\right)^{\frac{d}{2}}\exp\left[n\left\{\lambda(\theta)-\theta\cdot x_0\right\}\right]\int_{w\in\re^d}\psi(n^{-\frac{1}{2}}w,\theta , n)\,\phi(a(\theta)^{-1}w)\,dw \label{w_integral}\\
&=\left(\frac{n}{2\pi}\right)^{\frac{d}{2}}\frac{\exp\left[n\left\{\lambda(\theta)-\theta\cdot x_0\right\}\right]}{\sqrt{\text{det}(\lambda''(\theta))}}\int_{v\in\re^d}\psi(n^{-\frac{1}{2}}a(\theta)v,\theta , n)\,\phi(v)\,dv\,,\label{v_integral}
\end{aligned}\]
where the equality in ( [ inversion : formula ] ) , which holds for all x_0 , is the inversion formula applied to the characteristic function of \bar x_n ( see e.g , ) . the assumption that |m(\theta+\iota\,\cdot)| is integrable ensures that m(\theta+\iota t)/m(\theta) , which is the characteristic function of the exponentially twisted distribution f_{\theta} , is an integrable function of t , and hence so is its n - th power for every n\ge 1 . the equality in ( [ cauchy ] ) holds , by cauchy s theorem , for any \theta in the interior of the effective domain of \lambda . the substitution w=\sqrt{n}\,y gives ( [ w_integral ] ) , while ( [ v_integral ] ) follows from ( [ w_integral ] ) by the substitution w = a(\theta)v . for a given x_0 , suppose that the solution \theta^* to the equation \nabla\lambda(\theta)=x_0 exists and lies in the interior of the effective domain of \lambda . then , an expansion of the integral in ( [ density_estimate ] ) is available . for example , the following is well - known : [ asymptotic1 ] suppose \lambda''(\theta^*) is strictly positive definite and |m(\theta^*+\iota\,\cdot)| is integrable . then , \[\int_{v\in\re^d}\psi(n^{-\frac{1}{2}}a(\theta^*)v,\theta^*,n)\,\phi(v)\,dv = 1 + o(n^{-1})\,.\] a proof of proposition [ asymptotic1 ] can be found in ( see also ) . for completeness we include a proof in the appendix .
in the further tails , we allow to have fatter power law tails .this ensures that large values of in the simulation do not contribute substantially to the variance .further analysis is needed to see ( [ approx_small ] ) .note from the definition of , that for all , while for the saddle point .here , and are the first , second and third derivatives of w.r.t . , with held fixed .note that while and are -dimensional vector and matrix respectively , is the array of numbers : . the following notation aids in dealing with such quantities :if is a array of numbers and is a -dimensional vector and is a matrix then we use the notation and where following identity is evident : since , it follows from the three term taylor series expansion and ( [ property1],[property2 ] ) above , that continuity of in the neighborhood of implies ( [ approx_small ] ) .we now define the form of the is density .we first show its parametric structure and then specify how the parameters are chosen to achieve asymptotically vanishing relative error .for , , and , set note that if we put where is the incomplete gamma integral ( or the gamma distribution function , see e.g , ) , then provided . ,scaledwidth=50.0% ] the following assumption is important for coming up with the parameters of the proposed is density .[ main : assumption ] there exist and such that by riemann - lebesgue lemma , if the probability distribution of is given by a density function , then as . assumption [ main : assumption ] is easily seen to hold when decays as a power law as .this is true , for example , for gamma distributed random variables .more generally , this holds when the underlying density has integrable higher derivatives ( see ) : if -th order derivative of the underlying density is integrable then for any , assumption [ main : assumption ] holds with . to specify the parameters of the is density we need further analysis .define = e^{-\iota u\cdot x_0}\frac{m\left(\theta+\iota u\right)}{m(\theta)}\,\,,\ ] ] where denotes the expectation operator under the distribution .let then , , is continuous , non - decreasing and as .further , since is the characteristic function of a non - lattice distribution , if .we define then for any we have and as .let be any sequence with following three properties : 1 . as 2 .for any positive , as 3 . as later in section 5 we discuss how such a sequence may be selected in practice .set .then , it follows that if then .equivalently , for all .let and denote the minimum and maximum eigenvalue of , respectively .hence is the maximum eigenvalue of .therefore , we have next , put .then , and implies .also let so that implies .now we are in position to specify the parameters for the proposed is density .set and let . for to be a valid density function , we need . since ,select to be a sequence of positive real numbers that converge to 1 in such a way that and }=0.\ ] ] for example , for any satisfies ( [ lim999 ] ) . for each , let denote the pdf of the form ( [ isform ] ) with parameters , and chosen as above .let and denote the expectation and variance , respectively , w.r.t .the density .[ mainthm0 ] suppose assumption [ main : assumption ] holds and . 
then , =\int_{v\in\re^d}\frac{\psi^2(n^{-\frac{1}{2}}a(\theta^*)v,\theta^*,n)\phi^2(v)}{g_n(v)}\,dv=1+o(n^{-\frac{1}{2}})\,.\ ] ] consequently , from proposition [ asymptotic1 ] , it follows that \rightarrow 0\,\,\,\text{as}\,\,n\rightarrow \infty\,,\ ] ] so that the proposed estimators for have an asymptotically vanishing relative error .we will use the following lemma from .[ keylemma ] for any , also note that from the definitions of and it follows that , for any , is a characteristic function . to see this ,observe that ^n\\ & = & \left(e_{\theta}\left[e^{\iota n^{-\frac{1}{2}}a(\theta)v\cdot ( x_1- x_0)}\right]\right)^n\\ & = & \left[\varphi_{\theta}\left(n^{-\frac{1}{2}}a(\theta)v\right)\right]^n .\ ] ] some more observations are useful for proving theorem [ mainthm0 ] .since is continuous , it follows from the three term taylor series expansion , ( where is between and the origin ) and ( [ property1 ] ) and ( [ property2 ] ) above that there exists a sequence of positive numbers converging to zero so that or equivalently furthermore , for sufficiently large , and for all .we shall assume that is sufficiently large so that ( [ estimate222 ] ) and ( [ estimate322 ] ) hold in the remaining analysis .( * theorem [ mainthm0 ] * ) + we write where and from ( [ isform ] ) we get and for any , put by triangle inequalitywe have since as we have and , the second term in rhs converges to zero .writing , for the first term we have we apply lemma ( [ keylemma ] ) with since , where is a homogeneous polynomial whose coefficients does not dependent on , and implies , we have from ( [ estimate322 ] ) , ( [ estimate222 ] ) and ( [ estimate122 ] ) , respectively and from lemma [ keylemma ] , it now follows that the integrand in the last integral is dominated by therefore we have .also where is a constant independent of . by assumption [ main :assumption ] , the above integral over is finite. for large we also have by choice of we can conclude that as , proving theorem [ mainthm0 ] .in this section we consider the problem of efficient estimation of for sets that are affine transformations of the non - negative orthants along with some minor variations . as in ( ) , dominating point of the set plays a crucial role in our analysis .as is well known , a point is called a dominating point of if uniquely satisfies the following properties ( see e.g , , ) : 1 . is in the boundary of .2 . there exists a unique with .3 . .as is apparent from ( , , ) , in many cases a general set may be partitioned into finitely many sets each having its own dominating point . from simulation viewpoint , one way to estimate then is to estimate each separately with an appropriate algorithm . in the remaining paper ,we assume the existence of a dominating point for .our estimation relies on a saddle - point representation of obtained using parseval s relation .let and where is an arbitrarily chosen point in .let be the density function of when each has distribution function , where , recall that an exact expression for the tail probability is given by : = p[y_n\in\mathcal{a}_{n , x_0 } ] = e^{-n\{\theta\cdot x_0 - \lambda(\theta)\}}\int_{y\in\mathcal{a}_{n , x_0}}e^{-\sqrt{n}(\theta\cdot y)}h_{n,\theta , x_0}(y)\,dy\,\end{aligned}\ ] ] which holds for any and any .the representation ( [ tail : generalized ] ) is not very useful without further restriction on and ( see e.g. 
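to summarise the construction of this section in code, the sketch below replaces the standard normal proposal of the earlier sketch by a heavy-tailed one. we use a student-t proposal purely as a stand-in for the gaussian-core / power-law-tail density constructed above (we do not reproduce its exact parameters), and report the estimate together with an empirical relative error; both the proposal choice and all names are our own.

import numpy as np
from scipy.stats import norm, t as student_t, gamma

def sp_is_density(x0, n, num_samples=200_000, df=3, rng=None):
    # Same representation as before (d = 1, Exp(1) increments), but the integral of
    # psi * phi is now estimated by importance sampling from a Student-t proposal,
    # whose polynomial tails play the role of the power-law tails of the proposal above.
    rng = np.random.default_rng(rng)
    theta = 1.0 - 1.0 / x0
    lam = -np.log(1.0 - theta)
    lam2 = 1.0 / (1.0 - theta) ** 2
    A = lam2 ** -0.5
    v = student_t.rvs(df, size=num_samples, random_state=rng)
    y = A * v / np.sqrt(n)
    psi = np.exp(n * (-np.log(1.0 - theta - 1j * y) - lam - 1j * y * x0 + 0.5 * lam2 * y ** 2))
    w = psi.real * norm.pdf(v) / student_t.pdf(v, df)     # importance weights psi * phi / g
    prefactor = np.sqrt(n / (2.0 * np.pi)) * np.exp(n * (lam - theta * x0)) / np.sqrt(lam2)
    estimate = prefactor * np.mean(w)
    relative_error = (np.std(w) / np.sqrt(num_samples)) / abs(np.mean(w))
    return estimate, relative_error

n, x0 = 30, 2.0
print(sp_is_density(x0, n))
print(n * gamma.pdf(n * x0, a=n))           # exact value for comparison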
, ) .again , assuming that a solution to exists , where is the dominating point of , define we need the following assumption : [ cn : finite ] , .since is a dominating point of , for any , we have . hence, if is a set with finite lebesgue measure then is finite .assumption [ cn : finite ] may hold even when has infinite lebesgue measure , as example [ positive : orthant ] below illustrates .when assumption [ cn : finite ] holds , we can rewrite the right hand side of ( [ tail : generalized ] ) as where is a density in .let denote the complex conjugate of the characteristic function of . since the characteristic function of equals ^n,\ ] ] by parseval s relation , ( [ pre : parseval ] )is equal to ^n\,dt.\ ] ] this in turn , by the change of variable and rearrangement of terms , equals we need another assumption to facilitate analysis : [ limit : chfn ] for all , [ asymptotic3 ] suppose has a dominating point , the associated and is strictly positive definite .further , assumptions [ cn : finite ] and [ limit : chfn ] hold. then , \sim \left(\frac{1}{2\pi}\right)^{\frac{d}{2 } } \frac{c(n,\theta^*,x_0 ) e^{-n\{\theta^*\cdot x_0 - \lambda(\theta^*)\}}}{\sqrt{\operatorname{det}(\lambda^{\prime\prime}(\theta^{*}))}},\ ] ] or , equivalently by ( [ final ] ) proof of proposition [ asymptotic3 ] is omitted .it follows along the line of proof of proposition [ asymptotic1 ] and from noting that : let be any density supported on . if are iid with distribution given by density , then the unbiased estimator for ] and has an asymptotically vanishing relative error .[ positive : orthant_low_dim ] for , let suppose we want to estimate ] then and therefore , it follows that assumption [ limit : chfn ] holds for .also note that , since the last integral converges to zero , it follows that assumption [ limit : chfn ] holds for .similar analysis carries over to sets as illustrated by figure [ subfig5 ] under the conditions as in example 3 .in example 1 we assumed that . in many setting , this may not be true but the problem can be easily transformed to be amenable to the proposed algorithms .we illustrate this through the following example .essentially , in many cases where such a does not exist , the problem can be transformed to a finite collection of subproblems , each of which may then be solved using the proposed methods .let be a sequence of independent rv s with distribution same as , where and are standard normal rvs with correlation .suppose , that is .solving we get thus , if we have both and positive , and we are in situation of example [ positive : orthant ] .suppose so that . then making the change of variable we have =p[\bar{z_1}\geq a ] - p[\bar{z_1}\geq a , \bar{z_3 } \geq -b].\ ] ] now for estimating the second probability we have both and positive .similarly , the first probability is easily estimated using the proposed algorithm . however , note that if lies on we have one of or zero , and consequently is infinite .the proposed algorithms may need to be modified to handle such situations , however its not clear if simple adjustment to our algorithm will result in the asymptotically vanishing relative error property .we further discuss restrictions to our approach in section 6 .the methodology developed previously to estimate the tail probability can be extended to estimate for .we illustrate this in a single dimension setting ( ) for , and for .let . 
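before doing so, the tail-probability setting of this section can be illustrated in the simplest half-line case. the sketch below (our own, for exponential(1) increments and the set [a, infinity)) compares the exact probability, a state-independent exponentially twisted estimator, and the classical bahadur-rao refinement, which is used here only as a benchmark asymptotic.

import numpy as np
from scipy.stats import gamma

def twisted_is_tail(n, a, num_samples=200_000, rng=None):
    # P(Xbar_n >= a) for Exp(1) increments via state-independent exponential twisting:
    # the twisted increments are Exp(rate = 1 - theta), i.e. mean a, and the likelihood
    # ratio is exp(-theta * S_n + n * Lambda(theta)) with Lambda(theta) = -log(1 - theta).
    rng = np.random.default_rng(rng)
    theta = 1.0 - 1.0 / a                   # the dominating point of [a, inf) is a; Lambda'(theta) = a
    lam = -np.log(1.0 - theta)
    s = rng.exponential(scale=a, size=(num_samples, n)).sum(axis=1)
    w = np.exp(-theta * s + n * lam) * (s >= n * a)
    return w.mean()

def bahadur_rao(n, a):
    # classical refined large-deviations approximation, used only as a benchmark here
    theta = 1.0 - 1.0 / a
    rate = theta * a + np.log(1.0 - theta)  # I(a) = theta * a - Lambda(theta)
    lam2 = a ** 2                           # Lambda''(theta)
    return np.exp(-n * rate) / (theta * np.sqrt(2 * np.pi * n * lam2))

n, a = 30, 2.0
print("exact      :", gamma.sf(n * a, a=n))   # P(sum of n Exp(1) >= n a)
print("asymptotic :", bahadur_rao(n, a))
print("twisted IS :", twisted_is_tail(n, a))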
in finance and in insurance oneis often interested in estimating , which is known as the expected overshoot or the peak over threshold .as we have an efficient estimator for , the problem of efficiently estimating is equivalent to that of efficiently estimating .note that where . using ( [ tail : generalized ] )we get where recall that is a solution to and is the density of when each has distribution .define hence , , .the right hand side of may be re - expressed as where , is a density in .let denote the complex conjugate of the characteristic function of .by simple calculations , it follows that and then , repeating the analysis for the tail probability , analogously to ( [ final ] ) , we see that equals as in proposition [ asymptotic3 ] , we can see that so that using analysis identical to that in theorem [ mainthm2 ] , it follows that the resulting unbiased estimator of ( when density is used ) has an asymptotically vanishing relative error . the above analysis can be easily extended to prove similar results for the case of and a vector of positive integers .to implement the proposed method , the user must first specify the parameters of the is density appropriately . in this subsectionwe indicate how this may be done in practice .all the user needs is to identify a sequence satisfying the three properties listed in subsection [ proposedis ] . once is specified , arriving at appropriate , , and is straightforward ( see discussion before theorem [ mainthm0 ] ; finding , and are one time computations and can be efficiently done using matlab or mathematica ) .clearly for any , satisfies properties 1 and 2 . to see that property 3 also holds , note that where is the symmetrization of ( if is the distribution function of random vector then symmetrization of , denoted , is the distribution function of the random vector , where has same distribution as ) . since it follows that there exist a neighborhood of origin and positive constants and , such that for all .this in turn implies that there is a neighborhood of zero and positive constants and such that and for all .therefore for any .one may choose close to so that grows slowly .then , since , can be taken approximately a constant over a specified range of variation of .also since is what one uses for simulating from , and , in practice for reasonable values of , one may take as a constant close to 1 . in our numerical experimentbelow , parameters for are chosen using these simple guidelines .we first use the proposed method to estimate the probability density function of for the case where sequence of random variables are independent and identically exponentially distributed with mean 1. 
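the conditional monte carlo benchmark used in the comparison below can be written in a few lines; the sketch that follows is our reading of it (condition on the first n-1 variables, so that the density of the mean at x0 equals n E[f(n x0 - S_{n-1})] with f the Exp(1) density), compared against the exact gamma density. it is included only to make the benchmark concrete, and all names are ours.

import numpy as np
from scipy.stats import gamma

def cmc_density(x0, n, num_samples=100_000, rng=None):
    # conditional Monte Carlo: f_{Xbar_n}(x0) = n * E[ f(n*x0 - S_{n-1}) ],
    # with f the Exp(1) density, estimated by averaging over samples of S_{n-1}.
    rng = np.random.default_rng(rng)
    s = rng.exponential(size=(num_samples, n - 1)).sum(axis=1)
    z = n * x0 - s
    vals = n * np.where(z >= 0, np.exp(-np.clip(z, 0, None)), 0.0)
    return vals.mean(), vals.std() / np.sqrt(num_samples)

n, x0 = 30, 2.0
print("cmc estimate:", cmc_density(x0, n))
print("exact       :", n * gamma.pdf(n * x0, a=n))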
then the sum has a known gamma density function , facilitating comparison of the estimated value with the true value . the density function estimates using the proposed method ( referred to as the sp - is method ) are evaluated for , and ( the algorithm performance was observed to be relatively insensitive to small perturbations in these values ) based on generated samples . table [ table : tabdense ] shows the comparison of our method with the conditional monte carlo ( cmc ) method proposed in asmussen and glynn ( 2008 ) ( pp . 145 - 146 ) for estimating the density function of at a few values . as discussed in asmussen and glynn ( 2008 ) , the cmc estimates are given by an average of independent samples of , where is generated by sampling using their original density function . figure [ figure : denseis ] shows this comparison graphically over a wider range of density function values . as may be expected , the proposed method provides an estimator with much smaller variance compared to the cmc method . table [ table : tabdense ] caption : true density function and its estimates using the proposed ( sp - is ) method and the conditional monte carlo ( cmc ) method for an average of 30 independent exponentially distributed mean 1 random variables ; for and , the number of generated samples in both methods is , and for , . in this example , the true value of the tail probability for different values of is calculated using the approximation of the gamma density function available in matlab . the variance reduction achieved by the sp - is method over the bgl method is reported ; this increases with increasing . in addition , we note that the computation time per sample for the bgl method increases with , whereas it remains constant for the sp - is method . table [ table : tab1d ] shows that the exact asymptotic can differ significantly from the estimated value of the probability . as shown in table [ table : taboet ] , this difference can be far more significant in multi - dimensional settings , thus emphasizing the need for simulation despite the existence of asymptotics for the rare quantities considered . in this paper we considered the rare event problem of efficient estimation of the density function of the average of iid light tailed random vectors evaluated away from their mean , and the tail probability that this average takes a large deviation . in a single dimension setting we also considered the estimation problem of the expected overshoot associated with a sum of iid random variables taking a large deviation . we used the well known saddle point representations for these performance measures and applied importance sampling to develop provably efficient unbiased estimation algorithms that significantly improve upon the performance of the existing algorithms in the literature and are simple to implement .
in this paper we combined rare event simulation with the classical theory of saddle point based approximations for tail events . we hope that this approach spurs research towards efficient estimation of a much richer class of rare event problems where saddle point approximations are well known or are easily developed . another direction that is important for further research involves relaxing assumptions [ cn : finite ] or [ limit : chfn ] in our analysis . then , our is estimators may not have asymptotically vanishing relative error but may have bounded relative error . we illustrate this briefly through a simple example below . note that many of the intricate asymptotics developed by iltis for estimating such tail probabilities correspond to cases where assumptions [ cn : finite ] or [ limit : chfn ] may not hold . let be a sequence of independent random variables with the same distribution as , where and are uncorrelated standard normal random variables . suppose for some ( see figure [ fig0 ] ) . as we choose the point , which is clearly the dominating point of the set . now for any and it can be shown that solving gives and . also , therefore assumption [ limit : chfn ] fails to hold : therefore , in this case the family of estimators given by ( [ tail : estimator ] ) may not have asymptotically vanishing relative error . but , nevertheless , it can be shown to have bounded relative error . to see this , note that and ( here for all , so ) . also , therefore , as in proposition [ asymptotic3 ] , it follows that \sim \frac{e^{-\frac{na^2}{2}}}{2\sqrt{\pi}\sqrt{n}a^{\frac{3}{2}}}\times\left(1 + \frac{1}{2a}\right)^{-\frac{1}{2}} . mimicking the proof of theorem ( [ mainthm2 ] ) it can be established that \right]\rightarrow \frac{1 + \frac{1}{2a}}{\sqrt{1 + \frac{1}{a}}}-1 . proof ( of proposition [ asymptotic1 ] ) : let . we have where , since is continuous , it follows from the three term taylor series expansion ( where is between and the origin ) , ( [ property1 ] ) and ( [ property2 ] ) that for any given we can choose small enough so that , or equivalently , and sufficiently small , we choose so that ( [ estimate2 ] ) and ( [ estimate3 ] ) also hold for . we apply lemma ( [ keylemma ] ) with . since , where is a homogeneous polynomial with coefficients independent of , and for we have from ( [ estimate3 ] ) , ( [ estimate2 ] ) and ( [ estimate1 ] ) , respectively . proof ( of theorem [ mainthm2 ] ) : the proof follows along the same line as the proof of theorem [ mainthm0 ] . we write where . now , as in the case of theorem [ mainthm0 ] , we conclude that . also , since , we conclude that as , proving the theorem .
|
we consider the problem of efficient simulation estimation of the density function at the tails, and of the probability of large deviations, for a sum of independent, identically distributed, light-tailed and non-lattice random vectors. the latter problem, besides being of independent interest, also forms a building block for more complex rare event problems that arise, for instance, in queuing and financial credit risk modeling. it has been extensively studied in the literature, where state-independent exponential-twisting-based importance sampling has been shown to be asymptotically efficient and a more nuanced state-dependent exponential twisting has been shown to have the stronger bounded relative error property. we exploit the saddle-point based representations that exist for these rare quantities, which rely on inverting the characteristic functions of the underlying random vectors. these representations reduce the rare event estimation problem to evaluating certain integrals, which may, via importance sampling, be represented as expectations. further, it is easy to identify and approximate the zero-variance importance sampling distribution for estimating these integrals. we identify such importance sampling measures and show that they possess the asymptotically vanishing relative error property, which is stronger than the bounded relative error property. to illustrate the broader applicability of the proposed methodology, we extend it to similarly efficiently estimate the practically important _ expected overshoot _ of sums of iid random variables.
|
in the last year or so , much of the interest in quantum information theory has been directed towards two related subjects : firstly in analysing so called purification procedures and secondly in exploring the idea of quantum error correction , as well as examining the connections between the two .purification procedures are based on gisin s original proposal to use ` local filters ' to increase correlations between two entangled quantum subsystems . following this a number of other schemeshave been designed for the purpose of local purification .all of these have one idea in common : they all rely on some form of classical communication on which subsequent _ post - selection _ is based .this means that if we start with an ensemble of pairs of particles in a mixed state , the final pure state will invariably have fewer particles .this will be seen as a consequence of the fact that local operations ( i.e. generalised filters ) _ can not _ increase correlations . however , although the increase in correlations can not be achieved , an error correction procedure can always be applied locally , which will maintain the entanglement .we introduce the necessary information theoretic background in section 2 . in section 3we present a simple model of atoms interacting ` locally ' with two entangled cavities and give a number of feedback schemes by which the correlations might possibly be increased , without using any classical communication and post - selection .we show that each of these schemes fails , and we link this to the impossibility of superluminal propagation of any signal . at the end of this sectionwe briefly show how non local interactions can easily be used to increase correlations .section 4 presents two rigorous proofs of the impossibility of increasing correlations locally . in section 5we derive general conditions for error correcting codes , which are then used in section 6 to show that local error correction , in fact , preserves correlations and entanglement . using these considerationswe then present a simple example of how to encode two cavities against a single amplitude error on either cavity using four atoms .in this section we introduce the information theoretical background necessary to understand the results in this paper . we summarise the basic definitions and mathematical framework relevant to the problem , and define the key concepts and quantities which are used to characterise entanglement between systems ( for a more elaborate discussion of quantum information theory see references ) . in this subsectionwe introduce various classical information measures .quantum analogues are then defined in the following subsection .fundamental to our understanding of correlations is the measure of uncertainty in a given probability distribution .0.5 cm _ definition _ 1 .the uncertainty in a collection of possible states with corresponding probability distribution is given by an _entropy _ : called _shannon s entropy_. we note that there is no boltzman constant term in this expression , as there is for the physical entropy , since is by convention set to unity . 0.5 cm we need a means of comparing two different probability distributions , and for this reason we introduce the notion of _ relative entropy_. 
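the classical quantities just introduced and defined next (shannon entropy, relative entropy and mutual information) are collected in the short numerical sketch below; taking logarithms base 2, so that entropies are in bits, is our convention for the sketch.

import numpy as np

def shannon_entropy(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))                 # in bits

def relative_entropy(p, q):
    # assumes q > 0 wherever p > 0 (otherwise the relative entropy is infinite)
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return np.sum(p[mask] * np.log2(p[mask] / q[mask]))

def mutual_information(joint):
    # distance between the joint distribution and the product of its marginals
    joint = np.asarray(joint, float)
    px, py = joint.sum(axis=1), joint.sum(axis=0)
    return relative_entropy(joint.ravel(), np.outer(px, py).ravel())

# two perfectly correlated bits carry 1 bit of mutual information; independent bits carry 0
print(mutual_information([[0.5, 0.0], [0.0, 0.5]]))
print(mutual_information([[0.25, 0.25], [0.25, 0.25]]))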
0.5 cm _ definition _ 2 .suppose that we have two probability distributions , and ._ the shannon relative entropy _ between these two distributions is defined as this function is a good measure of the ` distance ' between and , even though , strictly speaking , it is not a mathematical distance since .its information theoretic significance becomes apparent through the notion of mutual information . 0.5 cm _ definition _ 3 ._ the shannon mutual information _ between two random variables and , having a joint probability distribution , and marginal probability distributions and is defined as where the latin indices refer to and greek indices to .0.5 cm the shannon mutual information , as its name indicates , measures the quantity of information conveyed about the random variable ( ) through measurements of the random variable ( ) . written in the above form, the shannon mutual information represents the ` distance ' between the joint distribution and the product of the marginals ; loosely speaking it determines how far the joint state is away from the product state , and is hence suitable as a measure of the degree of correlations between the two random variables .we now show how the above measure can be used to determine correlations between two ` entangled ' quantum systems .the general state of a quantum system is described by its density matrix . if is an operator pertaining to the system described by , then by the spectral decomposition theorem , where is the projection onto the state with the eigenvalue .the probability of obtaining the eigenvalue is given by .the uncertainty in a given observable can now be expressed through the shannon entropy . however , to determine the uncertainty in the state as a whole we use the ` von neumann ' entropy. 0.5 cm _ definition _ 4 . _the von neumann entropy _ of a quantum system described by a density matrix is defined as 0.5 cm the shannon entropy is equal to the von neumann entropy only when it describes the uncertainties in the values of a particular set of observables , called schmidt observables ( this is the set of observables that possesses the same spectrum as the density matrix describing the state ) . 0.5 cm _ definition _ 5 .the two quantum systems and are said to be _ entangled _ if their joint state can not be expressed as a convex sum of the direct products of the individual states ; otherwise they are disentangled ( a convex sum of direct products is a sum of the form , where indices and refer to the first and the second subsystem respectively , and ) .0.5 cm the prime example of the entangled state is the epr type state between the two systems , a and b : which obviously can not be expressed as a direct product of the individual states , _ unless _ either or equals zero . to quantify the degree of correlations between the two quantum systems , we introduce the _ von neumann mutual information _ via the notion of the reduced density matrix .if the joint state of the two quantum systems is , then the reduced density matrices of the subsystems and are given by in analogy with the shannon relative entropy between two probability distributions , we define the so called von neumann relative entropy , as a measure of ` distance ' between two density matrices .0.5 cm _ definition _ 6 . 
given two density matrices and the _ von neumann relative entropy _is defined as : ( where ) .the degree of correlation between the two quantum subsystems is given by the von neumann mutual information , defined by analogy with the shannon mutual information via the concept of relative entropy .0.5 cm _ definition _ 7 . _the von neumann mutual information _ between the two subsystems and of the joint state is defined as from this we can see that the state in eq .( [ epr ] ) is maximally correlated when , whereas the correlations are minimal for either or , i.e. when the state is disentangled . in this paperwe mainly focus on two systems in a _ joint pure state _ in which case the entropy of the overall state , , is zero , and the reduced entropies are equal . in this case , there are no classical uncertainties , and then the degree of correlation is purely quantum mechanical .this is then also called the degree of entanglement .however , for mixed states it is at present not possible to separate entirely quantum from classical correlations and a good measure of entanglement does not exist ( although steps towards resolution of this problem are being taken , e.g ) , which is the reason why we use the von neumann mutual information throughout . in this subsectionwe present without proofs several properties of entropy which will be used in the later sections .these are : ( where and similarly for the others ) .it is also worth mentioning that the consequence of the strong subadditivity is the so called weak subadditivity described by the araki lieb inequality : .this asserts that there is less uncertainty in the joint state of any two subsystems than if the two subsystems are considered separately .we now turn to describing two equivalent ways of _ complete measurement_. in this subsection we present two different ways to describe the dynamical evolution of a quantum system .first we can look at the joint unitary evolution of the system , , and its environment , .the environment can be a similar quantum system to the one we observe , or much larger : we leave this choice completely open in order to be as general as possible .let the joint ` + ' state initially be disentangled , , after which we apply a unitary evolution on ` + ' resulting in the state since we are interested in the system s evolution only , to obtain its final state , , we have to trace over the environment , i.e. another way to obtain the same result is to exclude the environment from the picture completely by defining operators of the ` complete measurement ' which act on the system alone , and therefore to be equivalent to the above system s evolution must satisfy let us now derive the necessary form of s using eq .( [ uni ] ) .let an orthonormal basis of be .then , it can easily be checked that the above s satisfy the completeness relations in eq .( [ comp ] ) .since the choice of basis for is not unique , then neither is the choice of complete measurement operators .in fact , there is an infinite number of possibilities for the operators .note that the dimension of the complete measurement , , is in general different to the dimension of the observed system , and in fact equal to the dimension of .we present here a simple model which aims to _ increase _ the quantum correlations between two entangled subsystems .the model we present employs a technique of performing ` local ' complete measurements . 
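before turning to the cavity model, the complete-measurement construction of the previous subsection is easy to check numerically. the sketch below (dimensions and the random unitary are arbitrary choices of ours) extracts the measurement operators from a joint unitary acting on the system and an environment prepared in a fixed state, verifies the completeness relation, and confirms that they reproduce the traced-out evolution of the system.

import numpy as np

rng = np.random.default_rng(0)
ds, de = 2, 3                                       # system and environment dimensions

# a random joint unitary obtained from a QR decomposition
m = rng.normal(size=(ds * de, ds * de)) + 1j * rng.normal(size=(ds * de, ds * de))
u, _ = np.linalg.qr(m)

e0 = np.zeros(de); e0[0] = 1.0                      # environment starts in its first basis state
U = u.reshape(ds, de, ds, de)                       # U[s', e', s, e] = <s' e'| u |s e>
kraus = [U[:, k, :, 0] for k in range(de)]          # A_k = <k|_env u |0>_env

# completeness: sum_k A_k^dagger A_k = identity on the system
completeness = sum(a.conj().T @ a for a in kraus)
print(np.allclose(completeness, np.eye(ds)))

# the two equivalent descriptions of the evolution agree
rho = np.array([[0.7, 0.2], [0.2, 0.3]], dtype=complex)
joint = np.kron(rho, np.outer(e0, e0))
evolved = (u @ joint @ u.conj().T).reshape(ds, de, ds, de)
rho_via_trace = np.einsum('ikjk->ij', evolved)      # trace over the environment
rho_via_kraus = sum(a @ rho @ a.conj().T for a in kraus)
print(np.allclose(rho_via_trace, rho_via_kraus))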
by this , we mean that when the two quantum systems are entangled we perform complete measurements on either subsystem separately , while not interacting directly with the other subsystem .we may regard this result to be counter - intuitive it does not seem at first sight possible that purely local operations could increase the non - local quantum features .there have been many schemes devised whereby correlations can be increased by local measurements on an ensemble of systems combined with _ classical communication _ , followed by a procedure of _ post - selection_. indeed , the model presented here can also be adapted readily to represent such a scheme . however , we verify that by local measurement alone , and without post - selection based upon classical communication , the correlations do not increase . in the next section we present two proofs that this is , in fact , a general result .the models used to demonstrate this are of the ` cavity qed ' type , and are both easy to understand physically and simple to analyse analytically .a good outline of cavity qed is given in .we consider two optical cavities , the field states of which are entangled number states ( for simplicity ) where the subscripts ` a ' and ` b ' refer to the two cavities , and without loss of generality , we assume that .this is a pure state but is not maximally entangled .the aim is to produce the state : i.e. we have made == , which is maximally entangled .two - level atoms are sent , one at a time , through cavity and interact with that individual cavity field , via the jaynes - cummings hamiltonian , for a pre - determined time period .after each atom passes through the cavity , a measurement is made which projects the atomic state into either the ground state or the excited state . due to the entanglement developed between the atom and the field in cavity during the interaction, this measurement also collapses the joint cavity cavity field state into a different superposition , one with either the same number of photons in cavity , or with one extra photon respectively . by successively sending atoms through the cavity for interaction periods determined from the state of the previously measured atom, a _ feedback _ mechanism can be set up whereby one might expect to optimise the probability of achieving the state defined in eq.([m2 ] ) .similar schemes have been used on single cavities for quantum state - engineering .we also consider extensions to this procedure .firstly , we mention procedures for interacting locally with both cavities , the qualitative results of which are the same . 
and secondly , we give two examples of non - local interactions , which give quite different results to the above local procedures .the first model involves sending atoms through cavity only , a schematic of which is given in fig.1 ; we assume the initial joint cavity field state is given by ( [ m1 ] ) .the first atom is in the excited state , and so the initial atom - field state is after interaction for a time , determined from the atomic time of flight , the joint atom - field state becomes where the coefficients are given by , , , , and .we now arrange that the velocity of the atom , and hence the interaction time with the field , is such that in which case the joint atom - field state becomes from this we see that if we measure the atom in the excited state , the resulting cavity field state is maximally entangled .the probability of measuring the excited atomic state is if we were to prepare a whole ensemble of cavities in precisely the same initial state ( [ m3 ] ) , then after measurement on all of the ensemble members , we would have prepared approximately of the cavities in the maximally entangled state ( [ m2 ] ) .we can discard all the cavities for which we measured the atom in the ground state , and we will have a whole sub - ensemble of cavities for which the entanglement has increased .this is the post - selection procedure mentioned earlier , and always requires that measurements on the whole ensemble be ` thrown away ' in order to increase the entanglement of a sub - ensemble .what we wish to do here is to increase the entanglement on an _ individual pair _ of entangled cavities . instead of performing one measurement on an ensemble of cavities , we keep performing a number of measurements on this single pair until we achieve our aim .when the atom is measured in the excited state , we are there .if the outcome of the atomic state measurement was , the final cavity field state would be the corresponding field state in eq.([m9 ] ) , which is still entangled , but not maximally so .we can now use this field state as a _ new _ initial entangled cavity field . in this way, we would hope that it is just a matter of sending through ` enough ' atoms until the desired state is reached .since the field state corresponding to a ground state measurement involves the fock state , sending through another excited atom allows the possibility of generating an fock state , which takes us further away from the initial state ( [ m3 ] ) .we thus send through a ground - state atom , which can remove the extra photon . using , therefore , this as the starting field - state, we define the ` new ' and as and the joint atom - field state after sending through a ground state atom for time , such that , becomes as before , if the atom is measured in the excited state , then the cavities are left in the maximally entangled field state , once normalised , as desired .the probability for this measurement is it is worth noting at this point that the state of the field , after measuring a ground state atom , is in itself _ less _entangled than the initial state ( [ m1 ] ) .this is a direct consequence of the concave property of entropy when applied to either reduced density matrix .namely , the fact that in one case , when registering an excited atom , the field becomes more entangled than previously ( i.e. 
the entropy of either reduced system is greater after the interaction ) , implies that the entanglement of the other field state , when we register a ground atom , is ` smaller ' than previously ( i.e. the entropy is smaller than before the interaction ) .this can be quantified as follows .let the reduced field state after the interaction be where is the reduced density matrix for cavity formed from eq.([m9 ] ) , and , are the parts of corresponding to the measurement of an excited or ground state atom respectively . now using the concave property eq.([ep2 ] ) we see that where the first equality follows from the fact that the reduced density matrix does not change during this interaction , which can readily be derived for this example , and is shown generally in the next section .it follows that where is the amount by which the entropy ( and hence entanglement ) of the reduced subsystem is constructed to increase upon measurement of , by arranging atomic interaction times .so , hence , it is immediately seen that and the result ( in this case ) is proven .a small amount of simple algebra applied to eq.([m8 ] ) shows that whatever the initial values of and , the ratio always _ decreases _ unless i.e. the cavities are not entangled in the first place ( a ratio equal to unity implies maximal entanglement ) .we thus have that and .it is readily seen from this , and the fact that and , that thus , there are two effects each time an atom is sent through the cavity the first is that the probability of detecting an atom in the excited state , and hence collapsing the field state to the maximally entangled form , on average decreases with each atom that goes through ; and the second is that the field - state if the atom is measured in the ground state becomes successively more disentangled .the effect is to make it successively more likely that the field will become completely disentangled , rather than completely entangled , which was the original aim .this can be seen mathematically by adding up the probabilities of detecting an atom in the excited state after sending through exactly atoms .if the probability of detection in state after the -th atom is , and the corresponding probability for is , then the probability of detection in after -atoms is the above product term is always less than unity since each and every is individually less than unity , and similarly is always positive since all are individually positive , so the probability of detection of after -atoms is less than unity . in the limit of , it can be verified by a computer program that the above product always tends to the value of .this result has the following consequence . in the limit either register a maximally entangled state or a completely disentangled state .however we could arrange the atom cavity interaction time to be such that this happens when the first atom goes through the cavity . in this caseit can be easily shown that the probability for the maximally entangled state to be registered ( i.e. 
measuring the excited atomic state ) is exactly .thus , no matter how many atoms we send through the cavity ( one or infinitely many ) , the highest probability of reaching of reaching the maximally entangled state is _ always _ less than unity .we thus see that this scheme can not increase correlations between two entangled systems .we note also that we do not have to aim to achieve maximum entanglement for the particular initial state given by eq.([m3 ] ) .we could continue to send , for example , excited atoms and simply hope to achieve increased entanglement for _ any _ state .however , the same arguments given above also show that we can not increase the entanglement of _ both _ field states corresponding to the two atomic measurement outcomes , as eq.([m17.5 ] ) shows .we should note that if it was possible to increase entanglement by the above local scheme , we would have a means of superluminal communication .namely , the sender of the message could change the entanglement by operating locally on his cavity which could then be detected on the other end by the receiver in possession of the other cavity .the communication would then proceed as follows : two participants would initially share a number of not maximally entangled cavities . then , if the sender does nothing on one of his cavities , this could represent ` logical zero ' , whereas if the sender maximally entangled the cavities this would represent ` logical one ' .after sharing the entangled sets of cavities , the two participants could travel spatially as far away from each other as desired . in this way, they would be able to communicate , through the above binary code , at a speed effectively instantaneously ( only the time to actually prepare the binary states , and to measure them at the other end ) .therefore , we see that the impossibility of locally increasing the correlations is closely related to einstein s principle of causality . this is a curious consequence of quantum mechanics , the postulates of which contain no reference to special relativity .indeed , this could be turned upside down , and viewed as one reason why the above ( or any similar ) scheme would not work .we thus find that the above scheme can not increase correlations by local actions on one cavity alone .we might expect to compensate for this by sending independent atoms through both cavities , and arranging a feedback mechanism based upon classically communicating the knowledge of each state to the other side . in this way, we approach more closely the scheme of classical communication with post selection , but hope to replace the post - selection procedure with that of sending through multiple atoms until we achieve success .we would also expect to avoid superluminal communications since the method inherently involves classical communication between the two observers .the analysis for this problem is very similar to that given above for the one - atom model , except that there is much more freedom to choose which state to measure and how to optimise it . 
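the single-round bookkeeping of the one-atom scheme can be checked numerically. the sketch below uses the standard resonant jaynes-cummings amplitudes (our reconstruction of the model, with g set to 1 and the interaction time found by a simple grid search rather than analytically) and prints the success probability together with the entanglement of the two post-measurement field states, making the trade-off discussed above explicit: the excited-atom branch is maximally entangled, the ground-atom branch is less entangled than the initial state, and the average does not exceed the initial entanglement.

import numpy as np

a, b = 0.8, 0.6                      # initial field state a|00> + b|11>, a > b, atom sent in |e>

# standard resonant Jaynes-Cummings amplitudes after time t (g = 1):
#   |e,0> -> cos(t)|e,0> - i sin(t)|g,1>
#   |e,1> -> cos(sqrt(2) t)|e,1> - i sin(sqrt(2) t)|g,2>
ts = np.linspace(1e-4, np.pi, 200_000)
t = ts[np.argmin(np.abs(a * np.cos(ts) - b * np.cos(np.sqrt(2) * ts)))]   # makes the |e> branch maximally entangled

p_e = (a * np.cos(t)) ** 2 + (b * np.cos(np.sqrt(2) * t)) ** 2
p_g = 1.0 - p_e

H2 = lambda p: 0.0 if p <= 0.0 or p >= 1.0 else -p * np.log2(p) - (1 - p) * np.log2(1 - p)

# squared Schmidt coefficients of the field state in each branch
pe0 = (a * np.cos(t)) ** 2 / p_e     # branch in which the atom is measured in |e>
pg0 = (a * np.sin(t)) ** 2 / p_g     # branch in which the atom is measured in |g>

print("interaction time g*t     :", t)
print("P(e)                     :", p_e)
print("entanglement before      :", H2(a ** 2), "bits")
print("entanglement, |e> branch :", H2(pe0))
print("entanglement, |g> branch :", H2(pg0))
print("average after the round  :", p_e * H2(pe0) + p_g * H2(pg0))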
following through a similar reasoning as in the single atom model, it is readily deduced that there is no way in this scheme to increase correlations .there are numerous variations on this above scheme : maximising the probability of detection in , minimising the rate of change of and , and so on , but the basic fact that the probability is never identically unity for any number of atoms remains the same .we now present two simple examples showing how a _ nonlocal _ operation _ can _ increase and , in fact , create correlations and entanglement .the procedures described here can be used to prepare initially entangled states .suppose that the two cavities , a and b , start disentangled in the state : let us send an entangled atomic pair through the cavities , each atom going through one cavity only , with the initial atomic state : after the interaction for the same time the joint state will be : therefore , by simply setting we end up with certainty in the maximally entangled field state . hence nonlocal interactions can , as expected , increase and create correlations and entanglement .the difference between this scheme and the previous two is that entanglement is being transferred to the cavities , from the atoms .this allows the cavity entanglement to ` increase ' , but at the expense of the entanglement of the atoms .this method involves only one atom , first interacting with one cavity and then with the other .this type of entanglement generation " has been analysed in a number of other places .let the initial state of ` atom+fields ' be : after interaction between the atom and the cavity for time the state is the atom now interacts with the cavity for time after which the final state is choosing and the above reduces to : which is the desired , maximally entangled state of the field .thus , the method achieves an entangled cavity state by creating an entangled atom - cavity state , and transferring this to the two cavities alone .the central problem addressed in this paper , and described in the specific examples in the previous section , is summarised in the following theorem : 0.5 cm * theorem*. correlations do not increase during local complete measurements carried on two entangled quantum systems . 0.5 cm we present here two quite separate , but mathematically rigorous proofs of this theorem , the first using the notion of entropy , the second using the ideas of complete measurements and conditional entropy as a measure of relative information .first , we show that , as mentioned in subsection 3.1 , no local complete measurement on subsystem can change the reduced density matrix of , and vice versa .let us perform a complete measurement on , defined by where the identity in the direct product signifies that the other subsystem does not undergo any interaction .let the overall state of ` + ' be described by . then after has undergone a complete measurement , s reduced density matrix is given by : therefore the equality in eq .( [ m17.2 ] ) is now justified .we note also that the s in the above equation can be unitary operators , since .we use this result in _proof.1 _ below .this proof is due to partovi , who proved it as a general result , rather than applied it to increasing correlations by local operations .consider three quantum systems , , , initially in the state described by a density matrix of the form : , i.e. 
a and b are initially correlated and both completely independent from .we are now going to let and interact and evolve unitarily for time , resulting in the state .the partial trace is defined in the usual fashion , e.g. , and similarly for all the other subsystems .now we use the strong subadditivity applied to at time to obtain but , as the whole system evolves unitarily . also , , since at the beginning is independent from . is only a spectator in the evolution of and , so that , as shown above , , .finally , there are no correlations between and at the beginning , implying : . invoking the definition in eq.([def7b ] ) for quantum correlations , and using the above properties and strong subadditivity in eq.([3.1 ] ) ,we arrive at the following adding another system to interact with locally would lead to the same conclusion , hence completing the proof .this proof is a quantum analogue of the well known classical result that can loosely be stated as ` markovian processes can not increase correlations ' .we will now describe the interactions of with and in terms of complete measurement s performed on .let the state of be initially described by the density operator , whose diagonal elements , , give the probabilities of being in various states , depending on the basis of the density matrix . let this state undergo a complete measurement , described by operators , such that the new diagonal elements are then : let us introduce a relative information measure ( defined in sec.2 ) to : to each value of we assign a nonnegative number .we now wish to compare the distance between and before and after ( and ) the complete measurement , .the distance after the measurement is : where for the inequality in the second line we have used one of the consequences of the concave property of the logarithmic function , and in the fifth line we used the completeness relation given in eq .( [ 3 ] ) .this implies that the distance between the density matrix distribution and the relative information measure decreases by making a complete measurement .if we now consider the particular case where is taken to be a distribution generated by the direct product of the reduced density matrices ( i.e. if we assume no correlations ) , then the result above implies that the full density matrix becomes ` more like ' the uncorrelated density matrix . from this, the theorem immediately follows .we now describe an alternative way of manipulating quantum states which can be described using the language of quantum computation .quantum computation involves unitary operations and measurement on ` quantum bits ' , or _qubits_. a classical bit represents one of two distinguishable states , and is denoted by a ` 0 ' or a ` 1 ' . on the other hand , a qubit is in general a _ superposition _ of the two states , , and its evolution is governed by the laws of quantum mechanics , for which a closed system evolves unitarily .this is reflected in the nature of the elementary gates , which must be reversible , i.e. a knowledge of the output allows inference of the input .the practical realisation of a qubit can be constructed from any two - state quantum system e.g. 
a two - level atom , where the unitary transformations are implemented through interaction with a laser .an advantage of quantum computation lies in the fact that the input can be in a coherent superposition of qubit states , which are then simultaneously processed .the computation is completed by making a measurement on the output .however , a major problem is that the coherent superpositions must be maintained throughout the computation . in reality, the main source of coherence loss is due to dissipative coupling to an environment with a large number of degrees of freedom , which must be traced out of the problem .this loss is often manifested as some form of spontaneous decay , whereby quanta are randomly lost from the system .each interaction with , and hence dissipation to , the environment can be viewed in information theoretic terms as introducing an error in the measurement of the output state .there are , however , techniques for ` correcting ' errors in quantum states .the basic idea of error - correction is to introduce an excess of information , which can then be used to recover the original state after an error .these quantum error correction procedures are in themselves quantum computations , and as such also susceptible to the same errors .this imposes limits on the nature of the ` correction codes ' , which are explored in this section . in the context of the present paper, we will use error - correction as a way of maintaining coherence between entangled cavities , described in the next section .first we derive general conditions which a quantum error correction code has to satisfy and are , in particular , less restricting than those previously derived in .our derivation is an alternative to that in , which also arrives at the same conditions .we use the notation of ekert and macchiavello .assume that qubits are encoded in terms of qubits to protect against a certain number of errors , .we construct _ code words _ , each being a superposition of states having qubits .these code - words must satisfy certain conditions , which are derived in this section .there are three basic errors ( i.e. all other errors can be written as a combination of those ) : amplitude , , which acts as a not gate ; phase , , which introduces a minus sign to the upper state ; and their combination , .a subscript shall designate the position of the error , so that means that the first and the fourth qubit undergo a phase error .we consider an error to arise due to the interaction of the system with a ` reservoir ' ( _ any _ other quantum system ) , which then become entangled .this procedure is the most general way of representing errors , which are not restricted to discontinuous ` jump ' processes , but encompass the most general type of interaction .error correction is thus seen as a process of disentangling the system from its environment back to its original state .the operators and are constructed to operate only on the system , and are defined in the same way as the operators for a complete measurement described in subsection 2.4 , eq.([cm1 ] ) . in reality, each qubit would couple independently to its own environment , so the error on a given state could be written as a direct product of the errors on the individual qubits . a convenient error basis for a single error on a single qubit is , where the s are the pauli matrices . 
in this case , the error operators are hermitian , and square to the identity operator , and we assume this property for convenience throughout the following analysis .in general the initial state can be expressed as where the are the code words for the states and is the initial state of the environment .the state after a general error is then a superposition of all possible errors acting on the above initial state where is the state of the environment .note that depends only on the nature of the errors , and is _ independent _ of the code words .the above is , in general , not in the schmidt form , i.e. the code word states after the error are not necessarily orthogonal ( to be shown ) and neither are the states of the environment .now , since we have no information about the environment , we must trace it out using an orthogonal basis for the environment .the resulting state is a mixture of the form , where and . to detect an error ,one then performs a measurement on the state to determine whether it has an overlap with one of the following subspaces the initial space after the error is given by the direct sum of all the above subspaces , .each time we perform an overlap and obtain a zero result , the state space reduces in dimension , eliminating that subspace as containing the state after the error .eventually , one of these overlap measurements will give a positive result which is mathematically equivalent to projecting onto the corresponding subspace .the state after this projection is then given by the mixture , where the successful projection will effectively take us to the state generated by a superposition of certain types of error .one might expect that to distinguish between various errors the different subspaces would have to be orthogonal .however , we will show that this is not , in fact , necessary . after having projected onto the subspace we now have to correct the corresponding error by applying the operator onto , since . in order to correct for the error, the resulting state has to be proportional to the initial state of code words in .this leads to the condition where is an arbitrary complex number .now we use the fact that all code words are mutually orthogonal , _i.e. _ , to obtain that for all and arbitrary .this can be written in matrix form as where the elements of the matrix are given by as eq .( [ 12 ] ) is valid for all it follows that however , we do not know the form of s as we have no information about the state of the environment . therefore , for the above to be satisfied for _ any _ form of s we need each individual term in eq .( [ 13 ] ) to satisfy where is _ any _ complex number . from eqs .( [ 13],[14],[14a ] ) we see that the numbers , , and are related through eq .( [ 14a ] ) is the main result in this section , and gives a general , and in fact the _ only _ , constraint on the construction of code words , which may then be used for encoding purposes .if we wish to correct for up to errors , we have to impose a further constraint on the subscripts ; namely , ,where supp( ) denotes the set of locations where the n tuple is different from zero and wt( ) is the hamming weight , _i.e. _ the number of digits in different from zero .this constraint on the indices of errors simply ensures that they do not contain more than logical s altogether , which is , in fact , equivalent to no more than errors occurring during the process . we emphasise that these conditions are the most general possible , and they in particular generalise the conditions in . 
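as an illustration of condition ([14a]) for hermitian errors, the sketch below checks that the matrix with entries formed by sandwiching a product of two errors between code words is proportional to the identity in the code-word indices, with a constant that may depend on the error pair. the code words used are those of the standard three-qubit amplitude (bit-flip) code, chosen purely as an example (it is not one of the codes discussed above); the check passes for all single amplitude errors and fails for single phase errors, which that code does not correct.

import numpy as np
from functools import reduce

I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.diag([1.0, -1.0])

def op_on(single, pos, n=3):
    # the given single-qubit operator acting on qubit `pos` of an n-qubit register
    return reduce(np.kron, [single if k == pos else I for k in range(n)])

ket = lambda bits: reduce(np.kron, [np.eye(2)[v] for v in bits])
u0, u1 = ket([0, 0, 0]), ket([1, 1, 1])           # three-qubit amplitude (bit-flip) code words

amplitude_errors = [np.eye(8)] + [op_on(X, k) for k in range(3)]
phase_errors = [np.eye(8)] + [op_on(Z, k) for k in range(3)]

def satisfies_condition(code_words, errors):
    ok = True
    for ea in errors:
        for eb in errors:
            m = np.array([[ci @ ea.conj().T @ eb @ cj for cj in code_words] for ci in code_words])
            # condition (14a): the matrix over code-word indices must be proportional to the identity
            ok &= np.allclose(m, m[0, 0] * np.eye(len(code_words)))
    return bool(ok)

print("single amplitude (X) errors:", satisfies_condition([u0, u1], amplitude_errors))   # True
print("single phase (Z) errors:   ", satisfies_condition([u0, u1], phase_errors))        # False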
by substituting in eq.([14a ] ) , we obtain the conditions given in .these are therefore seen only as a special case of the general result in eq .( [ 14 ] ) .knill and laflamme , who arrive at the same conditions as in eq .( [ 14a ] ) , give no example of a code that violates the conditions in eq .( [ 10 ] ) but satisfies those of eq .( [ 14a ] ) .such a code is given by plenio et al , which by violating the conditions given in eq .( [ 15 ] ) , explicitly shows that they are _ not _ necessary , but merely sufficient .imagine two initially entangled quantum systems and distributed between two spatially separated parties .let , for the sake of simplicity , both and be two spin particles in the initial epr like state where the first ket describes the system and the second the system .let both alice particles be encoded locally ( i.e. adding locally a certain number of auxiliary qubits and performing local unitary transformations to encode ) in order to protect their own qubit against a desired number of errors .we suppose that they both use the same coding , with the code words denoted by and .after the encoding , the state is therefore notice that the entanglement between the systems and is not changed by the encoding procedure , since local unitary operations do not change the spectrum of the reduced density matrices .let this state now be corrupted by errors , , which are local in nature , after which we perform the above described projections in eq .( [ 8 ] ) to obtain we wish to show that the error does not change the value of the entanglement . for thiswe compute s reduced density matrix : which obviously has the same entropy as the original state in eqs .( [ 15],[16 ] ) and eq .( [ 17 ] ) . in the above derivation we used the relations in eq .( [ 14a ] ) such that thus the entropy of the reduced density matrices of the initial pair of encoded systems , and of the systems after undergoing errors are both the same , indicating that the correlations and thus the entanglement do not change during the above described process . by a process of introducing more local degrees of freedom into the problem ,we are able to maintain non - local quantum correlations .so , in fact , this process does also involve discarding information , but is different to the post selection previously described .this is so because all the error correcting particles are introduced locally , and do not form a part of the original ensemble .we now present a simple example of how to locally preserve entanglement between two cavities in the state against a single amplitude error ( action of pauli operator ) on either cavity . 
for this purposewe _ locally _ introduce a pair of atoms to each cavity , all of which are in the ground state .these atoms interact identically with their respective cavities .we also allow errors to happen to the atoms , as long as there is no more than one error on either side , or .we would like to implement the following interaction in order to encode the state against an amplitude error ( four additional atoms for each cavity are needed to correct against a general type of single error ) this is , in fact , an action of two ` control nots ' , with the control bit being the state of the cavity and the target bits being the atoms and .we therefore perform identical interactions on both cavities and their atoms .this is shown schematically in fig.2 .the state of the whole system ( ` cavities + atoms ' ) will be after the encoding procedure , so all we need to know is how to implement a control not operation between the cavity and one atom .this is done in the following way .let the atom be sent through the cavity , which in our case contains either one or no photons , interacting resonantly with the field .let us in addition have a ` classical ' light source ( a laser ) resonant with the dressed atom - field transition .due to the vacuum rabi splitting this will not be resonant with which is precisely what we need . in this way the initial ` cavity+ atom ' state undergoes evolution of the form which is a control not gate . by repeated action of this gatewe can create the state in eq .( [ ec3 ] ) .then if a single amplitude error occurs on either side ( e.g. a spontaneous decay of the field ) we can correct it by applying a unitary operation to the cavities to restore the original state , depending on the state of the four atoms .let us give a simple example of how this would work .suppose that only the cavity , after encoding , undergos an amplitude error resulting in , after a small rearrangement , the joint ` cavities + atoms+environment ' state of the form ( eq .( [ 7a ] ) ) to recover the original state we first have to decode the above state .this is just the inverse of encoding , i.e. we apply two control - nots as described above , resulting in the state in the second step we can make a measurement on the atoms and depending on the outcome apply an appropriate unitary transformation to the cavities . in this casewe only have to consider cavity : if both of the atoms are in the ground state then we do nothing because the joint - cavity state remains unchanged , whereas if both of the atoms are excited we apply a not operation to cavity .this we do in a fashion similar to performing control not .we could , for example , send an excited atom throught the cavity and tune the external laser to the dressed transition . 
in this way we recover the state in eq .( [ ec1 ] ) .we emphasise that the form in eq .( [ ec6 ] ) is incomplete since the terms arising from all the other amplitude errors are missing ( corresponding to the cavity and the atoms ) ; however , it can easily be checked that the above scheme would also accomodate for this .in this paper we presented simple models to demonstrate that correlations can not be increased by any form of local complete measurement .the consequence of this is that any purification procedure has to represent a post - selection of the original ensemble to be purified .classical communication is an essential precursor to the post - selection procedure we can not post - select without classical communication , but the post - selection procedure is necessary to prepare the maximally entangled subset .we then presented two general proofs of this fact .additionally , we derived general conditions which have to be satisfied by quantum error correction codes , which can be used to protect a state against an arbitrary number of errors .we then showed that we can locally ` protect ' the entanglement by standard quantum error correction schemes , such that the correlations ( and therefore the entanglement ) are preserved under any type of complete measurement , which can be viewed as an error in this context .we gave a simple example of how to encode two cavities against a single amplitude error .thus , local error correction can protect nonlocal features of entangled quantum systems , which otherwise can not be increased by any type of local actions which exclude classical communication and post - selection .+ the authors thank p.l.knight , and v.v . and m.b.p thank a. barenco , a. ekert and c. macchiavello , for useful discussions on the subject of this paper .this work was supported by the european community , the uk engineering and physical sciences research council and by a feodor - lynen grant of the alexander von humboldt foundation .99 n. gisin , phys . lett . * a 210 * , 151 , ( 1996 ) , and references therein .d. deutsch , a. ekert , r. josza , c. macchiavello , s. popescu , and a. sanpera security of quantum cryptography over noisy channels " , submitted to phys .rev . * a * , ( 1996 ) ; c.h bennett , h.j .bernstein , s. popescu , and b. schumacher , concentrating partial entanglement by local operations " , submitted to phys . rev . * a * , ( 1995 ) .a.m. steane , phys .77 * , july 1996 .a.m. steane , _ multiple particle interference and quantum error correction ._ to appear in proc .lond . a 1996 .a.r . calderbank and p.w .shor , phys .a * 54 * , august 1996 .a. ekert and c. macchiavello , _ error correction in quantum communication ._ lanl e - print quant - ph/9602022 , oxford university .r. laflamme , c. miquel , j.p .paz , and w.h .zurek , phys .lett . * 77 * , 198 , ( 1996 ) .e. knill and r. laflamme , _ a theory of quantum error - correcting code _ , lanl e - print quant - ph/9604015 .m. b. plenio , v. vedral , and p.l .knight , _ quantum error correction of arbitrarily many erors _ , lanl e - print quant - ph/9603022 .bennett , d.p .vincenzo , j.a .smolin , and w.k .wootters , _ mixed state entanglement and quantum error - correction codes _ , lanl e - print quant - ph/9604024 .ekert , d.phil thesis , ( clarendon laboratory , oxford , 1991 ) ; a.k .ekert and p.l .knight , am . j. phys . *57 * , 692 ( 1989 ) and refs . therein .h. everett , iii , the theory of the universal wavefunction " , in the many worlds interpretation of quantum mechanics " edited by b. 
dewitt and n. graham ( princeton university press , 1973 ) .t.m . cover and j.a .thomas , elements of information theory " ( a wiley - interscience publication , 1991 ) .s.d.j . phoenix and p.l .knight , ann .* 186 * , 381 , ( 1988 ) . h.araki and e.h.lieb , commun* 18 * , 160 , ( 1970 ) .r. horodecki , m. horodecki , information theoretic aspect of quantum inseparability of mixed states , lanl e - print quant - ph/9607007 .a. wehrl , general properties of entropy " , reviews of modern physics vol .* 50 * no . 2 ( 1978 ) .davies , quantum theory of open systems " , ( academic press london , 1976 ) .s. haroche , `` cavity quantum electrodynamics '' , les houches , session liii , 1990 , eds .j. dalibard , j.m .raimond , j. zinn - justin , elsevier science publishers b.v ( 1992 ) .b.w . shore , p.l .knight , j.mod.opt . *40 * , 1195 , ( 1993 ) , and references therein .b. garraway , b. sherman , h. moya - cessa , p.l .knight , g. kurizki , phys .a , * 49 * , 535 , ( 1994 ) .berger , h. giessen , p. meystre , t. nelson , d. haycock , s. hamman , phys .a , * 51 * , 2482 , ( 1995 ) ; c.c .gerry , pyhs .rev . a , * 53 * , 2857 , ( 1996 ) .m.hossein partovi , phys .a * 137 * , 445 ( 1989 ) , and the references therein .a. barenco , quantum computation " , to appear in cont .v. pless , _ introduction to the theory of error - correcting codes _ , john wiley & sons 1982 .a. barenco , c.h .bennett , r. cleve , d.p .devincenzo , n. margolus , p. shore , t. sleator , j.a .smolin , h. weinfurter , phys .a , * 52 * , 3457 , ( 1995 ) .raimond , basics of cavity quantum electrodynamics " , published in quantum optics of confined systems " eds . m. ducloy and d. bloch ( kluwer academic publishers 1996 ) .figure 1 : the experimental setup for local interactions : two cavities are initially entangled in a state of the form , and atoms are sent through cavity only .+ figure 2 : the encoding network for protecting against amplitude errors is shown in the upper diagram : the encircled cross denotes a not operation while a dot denotes a control bit , together making a control not operation .the atoms are initially in their ground states , and the order in which the gates are executed is irrelevant . the lower diagram gives a truth table for the control not operation ; here , ` c ' and ` t ' represent _ control _ and _ target _ bits respectively .
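The complete encode–error–decode–correct cycle described in the text and in fig. 2 can also be traced in a toy state-vector simulation. In the sketch below each cavity is truncated to the {|0>, |1>} photon subspace, so the control-NOT gates reduce to ordinary qubit gates; the initial two-cavity state is assumed to be (|01> + |10>)/sqrt(2), and the single amplitude error is modelled, for simplicity, as a definite bit flip of cavity A rather than a decay entangled with the environment. These are illustrative assumptions, not part of the original scheme.

```python
import numpy as np

# qubit ordering: (cavity A, atom a1, atom a2, cavity B, atom b1, atom b2)
n = 6
dim = 2 ** n
X = np.array([[0.0, 1.0], [1.0, 0.0]])
I = np.eye(2)

def op_on(op, k):
    """Lift a single-qubit operator onto qubit k of the register."""
    out = np.array([[1.0]])
    for q in range(n):
        out = np.kron(out, op if q == k else I)
    return out

def cnot(control, target):
    """Control-NOT with the given control and target qubits."""
    P0 = np.array([[1.0, 0.0], [0.0, 0.0]])
    P1 = np.array([[0.0, 0.0], [0.0, 1.0]])
    return op_on(P0, control) + op_on(P1, control) @ op_on(X, target)

# assumed initial EPR-like state of the two cavities, with all atoms in the ground state
psi = np.zeros(dim)
psi[int("000100", 2)] = 1 / np.sqrt(2)   # |0>_A |1>_B
psi[int("100000", 2)] = 1 / np.sqrt(2)   # |1>_A |0>_B

encode = cnot(0, 1) @ cnot(0, 2) @ cnot(3, 4) @ cnot(3, 5)   # fig. 2, on both sides
psi = encode @ psi

psi = op_on(X, 0) @ psi     # a single amplitude error on cavity A (toy bit flip)
psi = encode @ psi          # decoding is the inverse of the encoding
psi = op_on(X, 0) @ psi     # both of A's atoms are excited: apply NOT to cavity A

# the atoms a1, a2 are left excited (they carry the syndrome) and can be reset
target = np.zeros(dim)
target[int("011100", 2)] = 1 / np.sqrt(2)
target[int("111000", 2)] = 1 / np.sqrt(2)
print("two-cavity entangled state recovered:", np.allclose(psi, target))   # True
```

After the decoding step both of A's atoms are excited, which is precisely the syndrome calling for a NOT operation on cavity A; the atoms are then discarded or reset, exactly as in the procedure described in the text.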
|
We consider the effects of local interactions upon quantum mechanically entangled systems. In particular we demonstrate that non-local correlations _cannot increase_ through local operations on any of the subsystems, but that through the use of quantum error correction methods, correlations can be _maintained_. We provide two mathematical proofs that local general measurements cannot increase correlations, and also derive general conditions for quantum error correcting codes. Using these we show that local quantum error correction can preserve nonlocal features of entangled quantum systems. We also demonstrate these results by use of specific examples employing correlated optical cavities interacting locally with resonant atoms. By way of counter-example, we also describe a mechanism by which correlations can be increased, which demonstrates the need for _non-local_ interactions. PACS number(s): 03.65
|
trimolecular reactions are important components of biochemical models which include oscillations , multistable states and pattern formation . considering their reactant complexes ,trimolecular reactions can be subdivided into the following three forms where , and are distinct chemical species ( reactants ) and symbol denotes products . in what follows, we will assume that product complexes do not include , and .let us denote by the concentration of .then the conventional reaction - rate equation for trimolecular reaction ( [ basic3 ] ) can be written as where denotes ( in general , time - dependent ) reaction rate constant . using mass - action kinetics , rate assumed to be constant and equation ( [ basicode ] ) becomes an autonomous ordinary differential equation ( ode ) with a cubic nonlinearity on its right - hand side .cubic nonlinearities significantly enrich the dynamics of odes .for example , odes describing chemical systems with two chemical species which do not include cubic or higher nonlinearites can not have any limit cycles . on the other hand ,it has been reported that , by adding cubic nonlinearities to such systems , one can obtain chemical systems undergoing homoclinic and snic bifurcations , i.e. oscillating solutions are present for some parameter regimes .motivated by the developments in systems biology , there has been an increased interest in recent years in stochastic methods for simulating chemical reaction networks .such approaches provide detailed information about individual reaction events .considering well - mixed reactors , this problem is well understood .the method of choice is the gillespie algorithm or its equivalent formulations .these methods describe stochastic chemical reaction networks as continous - time discrete - space markov chains .they are applicable to modelling intracellular reaction networks in relatively small domains which can be considered well - mixed by diffusion .if this assumption is not satisfied , then stochastic simulation algorithms for spatially distributed reaction - diffusion systems have to be applied .the most common algorithms for spatial stochastic modelling in systems biology can be classified as either brownian dynamics ( molecular - based ) or compartment - based ( lattice - based ) approaches .molecular - based models describe a trajectory of each molecule involved in a reaction network as a standard brownian motion .this can be justified as an approximation of interactions ( non - reactive collisions ) with surrounding molecules ( heat bath ) on sufficiently long time scales .it is then often postulated that bimolecular or trimolecular reactions occur ( with some probability ) if the reactant molecules are sufficiently close .brownian dynamics treatment of bimolecular reactions is based on the theory of diffusion - limited reactions , which postulates that a bimolecular reaction occurs if two reactants are within distance ( reaction radius ) from each other .the properties of this model depend on the physical dimension of the reactor .considering one - dimensional and two - dimensional problems , diffusion - limited reactions lead to mean - field models with time - dependent rate constants which converge to zero for large times .this qualitative property is in one spatial dimension shared by trimolecular reactions .it has been shown that the mean - field model ( [ basicode ] ) can be obtained with time - dependent rate constant satisfying for large times where is a constant .considering three - dimensional problems , 
the reaction radius of a diffusion - limited bimolecular reaction can be related with a ( time - independent ) reaction rate of the corresponding mean - field model .trimolecular reaction ( [ basic3 ] ) can then be incorporated into three - dimensional brownian dynamics simulations either directly or as a pair of bimolecular reactions and , where denotes a dimer of .compartment - based models divide the simulation domain into compartments ( voxels ) and describe the state of the system by numbers of molecules in each compartment .compartments can be both regular ( cubic lattice ) or irregular ( unstructured meshes ) .considering that the simulation domain is divided into cubes of side , the diffusive movement of molecules of is then modelled as jumps between neighbouring compartments with rate , where is the diffusion constant of the chemical species . in this paper , we will consider compartment - based stochatic reaction - diffusion models of the trimolecular reactions ( [ basic3])([basic1 ] ) in a narrow domain \times [ 0,h ] \times [ 0,h] ] and , which satisfies partial differential equation ( pde ) where is the macroscopic rate constant of the trimolecular reaction . to formulate the compartment - based stochastic reaction - diffusion model, we divide the domain \times [ 0,h ] \times [ 0,h] ] , . denoting the number of molecules of in the -th compartment, the diffusion of can be written as the chain of chemical reactions " diffusion of and , which appear in trimolecular reactions ( [ basic2 ] ) or ( [ basic1 ] ) , is described using the chains of chemical reactions " of the form ( [ diffgillu ] ) with the jump rates given by and , where and are the diffusion constant of the chemical species and , respectively .trimolecular chemical reactions are localized in compartments , i.e. each of trimolecular reactions is replaced by reactions : where , and denotes the macroscopic reaction rate constant with units [ m s .compartment - based modelling postulates that each compartment is well - mixed .in particular , chemical reactions ( [ diffgillu])([basic1d ] ) can be all simulated using the gillespie algorithm ( or other algorithms for well - mixed chemical systems ) and the system can be equivalently described using the reaction - diffusion master equation ( rdme ) . in particular , the probability that a trimolecular reaction occurs in time interval in a compartment containing one triplet of reactants is equal to where and is chosen sufficiently small , so that .the standard scaling of reaction rates ( [ comprate ] ) is considered in this paper when we investigate the dependence of trimolecular reactions on .it has been previously reported for bimolecular reactions that the standard rdme scaling leads to large errors of bimolecular reactions .one of the goals of the presented manuscript is to investigate the effect of on trimolecular reactions .diffusive chain of reactions ( [ diffgillu ] ) is formulated using a narrow three - dimensional domain \times [ 0,h ] \times [ 0,h] ] , divided into intervals ] into square compartments of size , the mean time until the two molecules react is where is the bimolecular reaction rate with units [ m s , and is the mean number of diffusive jumps until and are in the same compartment , given that they are initially one compartment apart . 
is the mean number of steps , until diffuses to s location for the first time .the quantities and can be estimated using the theorem in montroll : [ theorem1 ] assume that the molecule has a uniformly distributed random starting position on a finite two - dimensional square lattice with periodic boundary conditions . then the following holds : where is the number of lattice points ( compartments ) in the domain .furthermore , .using theorem [ theorem1 ] and ( [ taubimo ] ) , we have where is the mean time for the first collision of molecules and , which can be approximated by as . using ( [ taubimo2])([taucoll2per ] ), we see that the reaction time for the bimolecular reaction tends to infinity when the compartment size tends to zero .in particular , equations ( [ taubimo2])([taucoll2per ] ) imply that the bimolecular reaction is lost from simulation when tends to zero .thus there has to be a lower bound for the compartment size .this has also been shown using different methods in three - dimensional systems and improvements of algorithms for close to the lower bound have been derived .equations ( [ taubimo2])([taucoll2per ] ) give a good approximation to the mean reaction time of bimolecular reaction for two - dimensional domains , provided that the periodic boundary conditions assumed in theorem [ theorem1 ] are used .however , the chain of chemical reactions ( [ diffgillu ] ) implicitly assumes reflective boundary conditions .such boundary conditions ( together with reactive boundary conditions ) are commonly used in biological systems whenever the boundary of the computational domain corresponds to a physical boundary ( e.g. cell membrane ) in the modelled system . in figure[ fig1 ] , we show that formula ( [ taucoll2per ] ) is not an accurate approximation to the mean collision time for reflective boundary conditions .reflective boundary conditions mean that a molecule remains in the same lattice point when it hits the boundary .we plot results for , for and for in figure [ fig1 ] . in each case, we see that formula ( [ taucoll2per ] ) matches well with numerical results for periodic boundary conditions . in order to find a formula that matches with numerical experiment results for reflective boundary conditions , we fix the coefficient of the first term in ( [ taucoll2per ] ) and perform data fitting on the second coefficient .we obtain the following formula for the reflective boundary condition : the average times for the first collision of and given by ( 18 ) are plotted in figure [ fig1 ] corresponding to different sets of diffusion rates .it can be seen that formula ( [ 2dcollisioncasea ] ) matches well with numerical experiment results with reflective boundary conditions . in figure[ fig2:casea ] , we show another comparison between formula ( [ 2dcollisioncasea ] ) and numerical experiment data with reflective boundary conditions . , ; ( b ) , ; ( c ) , .__,width=288 ] values when is fixed .we use and ._ , width=288 ]in this paper , we will first consider periodic boundary conditions and generalize formula ( [ taubimo2 ] ) to all cases of trimolecular reactions ( [ basic3])([basic1 ] ) in one spatial dimension , using both scalings ( [ comprate ] ) and ( [ comprate2 ] ) of reaction rates .both trimolecular reactions and are special cases of . 
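A quick Monte Carlo estimate of the quantity entering these formulas can be coded in a few lines. The sketch below uses illustrative values for the lattice size and the number of trials; the reflective rule is implemented by rejecting jump attempts across the boundary while still counting the step, which is one possible reading of the convention stated above. It measures the mean number of nearest-neighbour jumps needed by a walker with a uniformly random start to first reach a fixed target site on an N x N lattice, under periodic and under reflective boundary conditions.

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_hitting_steps(N, boundary, trials=1000):
    """Mean number of nearest-neighbour jumps until a walker, started uniformly
    at random, first reaches the target site at the centre of an N x N lattice."""
    target = (N // 2, N // 2)
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    total = 0
    for _ in range(trials):
        x, y = rng.integers(0, N, size=2)
        steps = 0
        while (x, y) != target:
            dx, dy = moves[rng.integers(0, 4)]
            if boundary == "periodic":
                x, y = (x + dx) % N, (y + dy) % N
            else:
                # reflective: a jump attempt across the boundary leaves the
                # walker where it is (assumed convention of this sketch)
                nx, ny = x + dx, y + dy
                if 0 <= nx < N and 0 <= ny < N:
                    x, y = nx, ny
            steps += 1
        total += steps
    return total / trials

N = 16
for bc in ("periodic", "reflective"):
    print(bc, mean_hitting_steps(N, bc))
```

The two estimates differ systematically, which is the empirical observation motivating the re-fitted coefficient in the reflective-boundary formula (18).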
since we will focus on the simplified situation where there is only one molecule for each reactant of , we may consider as the special case of where diffusion rates for all three molecules are the same , and as the special case where at least two of the three molecules have the same diffusion rates .we will denote the value of mean reaction time by .we will decompose into two parts : where gives the mean time for reaction given that the molecules are initially located in the same compartment , and is the mean time for the first collision , i.e. the average time to the state of the system which has all molecules in the same compartment , given that they were initially uniformly distributed .note that the bimolecular formula ( [ taubimo ] ) was written in the form of a similar decomposition like ( [ trimolformula ] ) .we will call a collision time .we will start with a simple case with and .then the only molecule will be fixed at its initial location . under the periodic boundary condition assumption , without loss of generality , we assume that this molecule is located at the center of the interval ] , where ] . instead of trying to develop a formula for the collision time of two molecules diffusing to a fixed compartment in the one - dimensional lattice , we consider an equivalent problem: imagine a molecule jumps with a diffusion rate within an grid in the two - dimensional space and let its compartment index be .then the two independent random walks by the two and molecules in one - dimensional lattice can be viewed equivalently as the random walk of the molecule in the two - dimensional square lattice with diffusion rate .collision time is then equal to the mean time for the pseudo molecule to jump to the center for the first time , which is the case discussed in section ii .therefore , the formula can also be applied to the trimolecular reaction when and and periodic boundary conditions are considered .formula can not be directly applied to if even if .we can still assume is in the center and consider the equivalent problem of a molecule jumps with a diffusion rate in the axis and in the axis within an grid in the 2d space .the two independent random walks by the two and molecules in 1d space can be viewed equivalently as the random walk of the molecule in the 2d space with diffusion rate and .we thus introduce the following theorem .the proof can be found in appendix [ new2d ] .[ theorem2 ] assume that the molecule has a uniformly distributed random starting position in a 2d lattice and that the molecules can move to nearest neighbours only .assume diffuses with diffusion rate in the direction , in the direction , and .then the following holds : , \end{array}\ ] ] where in this subsection we consider the trimolecular reaction with corresponding diffusion rates , and . without loss of generality , we assume .we consider one pseudo molecule , where and are expressed in terms of locations , and of three molecules by when , and diffuses with rates , and , the pseudo molecule jumps on the 2d lattice corresponding to .when jumps to the origin , , and will be in the same grid , and vice versa .thus the collision time of the trimolecular reaction in 1d is again converted to the corresponding collision time of the bimolecular problem in 2d .the actual grid and jumps are illustrated in figure [ 2dillustrate2 ] .based on . 
when molecule jumps , the corresponding jumps ( shown in red ) the direction .when molecule jumps , the corresponding jumps ( shown in green ) the direction ; when molecule jumps , the corresponding jumps ( shown in blue ) the direction .the whole domain has a similar shape and is not square . _ ] the difficulty of the mapping is that the resulted domain is not square .so we can not apply the theoretical results presented in the appendix . in order to apply an estimation on a square lattice, we need to further modify the mapping .we will take then the molecule with coordinates will jump in an 2d square lattice , where . of course , the jumps at the boundary will be different from the illustration in figure [ 2dillustrate2 ] , but that is just symmetric reflection .it does not change the validity of the formula derived in the appendix , which is based on the assumption of periodic lattices .the domain resulted from the mapping is shown in figure [ square ] .we have the following approximation formula for the mean jump time . .\end{array } \ ] ] where and * remark : * formula also implies an estimation of the collision time of bimolecular reaction in a 1d domain . in , if we let , the formula leads to an estimation of the mean collision time of a bimolecular reaction in a 1d domain ] and will remains relatively small , when is large .thus we will simply disregard the term in our formula . to sum it up ,when we choose and considering , we disregard and have an estimate of as ^{-1 } + \ { c_1 \log n + c_2 + c_3/n \ } \\ & & + o(n^{-2 } ) + o((1-z)^{1/2 } ) , \end{array}\ ] ] where ,\\ c_3 & = & \frac{\pi}{24 \hat \sigma } ( \eta - 1/3 ) , \end{aligned}\ ] ] with and . according to ,we have now consider the special case .then and the domain is really a square lattice . in this case , we have and if we select ( thus ) , , the term will be relatively small . then multiply with the average time for each jump , where , apply , and disregard lower order terms , we obtain .\end{array}\ ] ] if we assume further that , the equation is close to except a small difference due to the term .note that in this case , the formula is a rigorous estimate .if , and assume the 2d lattice is square , we let and multiply with , we end up with a similar estimation .\end{array } \ ] ] where and the diffusion rate of approaches to infinity , the trimolecular system becomes a bimolecular collision model of and .the formula gives the mean time for bi - molecular collisions , when : in this subsection , we investigate the mean bi - molecular collision time and derive the formula for bi - molecular collision time when the other two molecules have a same diffusion rate .assume a 1d domain of length , two molecules and diffuse freely with the rate and respectively .the 1d diffusion model of two molecules is equivalent to the 2d model in which one molecule diffuses freely with diffusion rates and in the two directions , independently .the first time for the two molecules to collide in the same position is equivalent to the fist time when the molecule in the 2d domain comes across the diagonal line . with the reflective boundary condition ,the triangle domain divided by the diagonal line can be extended into a square domain with the length .the first encounter time of two molecules on the 1d domain of size is now converted to a first exit time of on molecule on a 2d domain of size . in the following , we derive the formula for the first exit time on the 2d domain . 
for simplicity, we assume the diffusion rate in the two directions are the same .the plank - fokker equation for the diffusion in the 2d domain is given by with being the state density function , defined as the probability density that the molecule stays in position at time given it starts from at time .next , we define a probability function which describes the probability that the molecule stays in the domain at time , given it starts from at time . then , we integrate the plank - fokker equation over the 2d interval and we have the equation for : the initial condition for the pde is given as the four boundaries of the square domain are all absorbing .hence , we have the boundary conditions for pde as following the definition of , gives the probability that the molecule exits before time , which is exactly the distribution function of the first exit time .in addition , the density function of is given by = -\frac{\partial}{\partial t}g(\mathbf{x_0 } , t ; \omega ) , \label{fpt : pdf}\ ] ] and the -th moment of the random variable is therefore given by dt , ( n \ge 0 ) .\label{fpt : moment}\ ] ] for , the formula gives integrating by parts , we get the formula with equation and we can formulate the moments of as coupled ordinary differential equations .multiply the plank - fokker equation through by , integrating the result over all and substitute from equation and , we have the equation the boundary conditions for these differential equations follows the simple derivation from and we have with the equations ready , we can solve for the moments of the first passage time .here we are only interested in the first moment and the solution to the pde of the first moment equation yields if initially the molecule is homogeneously presented in the square domain , we can calculate the mean first exit time as therefore , the mean first time when the first two molecules encounter in the 1d domain of size is exactly the same as the mean first exit time above and the mean first encounter time is given by figure [ bmetime ] and figure [ bmetime2 ] show the numerical results of the mean first collision time for two molecules with the same diffusion rates and for the general situations . the linear data fitting shows the excellent match with the formula .furthermore , although our derivation is only for the case , the numerical results show that the first collision time for the general situation , where , follows the similar formula . this formula is given as for comparison purpose , figure [ periodbme ]gives the numerical results of the mean first collision time , under periodic boundary condition , for two molecules with different diffusion rates .we can see that the numerical results match with very well .
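The mean first-exit-time calculation above lends itself to a direct numerical check. The sketch below — an illustration under stated assumptions, not the derivation itself — solves D grad^2 T = -1 on the square [0, L]^2 with absorbing sides by the standard five-point finite-difference stencil, averages T over a uniform starting position, and prints the dimensionless combination E[T] D / L^2, which can be compared with the closed-form result quoted in the text.

```python
import numpy as np

# mean first exit time T(x, y) from the square [0, L]^2 with absorbing sides:
#   D * laplacian(T) = -1,  T = 0 on the boundary.
L, D, n = 1.0, 1.0, 40            # illustrative values; n interior points per side
h = L / (n + 1)

K = np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
I = np.eye(n)
A = np.kron(I, K) + np.kron(K, I)  # discrete Laplacian (times h^2), interior nodes only

t = np.linalg.solve(A, -(h ** 2 / D) * np.ones(n * n))

# uniform average of T over the square (boundary values vanish, so interior
# grid points suffice for the trapezoid-type quadrature)
mean_exit = t.mean() * (n / (n + 1)) ** 2
print("E[T] * D / L^2  ~", mean_exit * D / L ** 2)   # close to 0.035
```

Through the mapping discussed above, the same number also gives the mean first encounter time of the two molecules on the reflective one-dimensional interval, once the size and diffusion-rate rescalings of the equivalent square domain are applied.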
|
Trimolecular reaction models are investigated in the compartment-based (lattice-based) framework for stochastic reaction-diffusion modelling. The formulae for the first collision time and the mean reaction time are derived for the case where three molecules are present in the solution.
|
during 2007 one of the most striking financial crises of the century has manifested itself .this major event has motivated researchers and institutions to devote a large effort for better understanding economic and financial systems evolution , with the aim of detecting reliable signals on upcoming critical events enough in advance to allow regulators to plan effective policies to contrast and overcome them .the vast majority of analyses , however , has focused on financial systems and little theoretical work has been done , so far , on the economic counterpart , though the definition of better early - warning indicators for the economic systems is advocated by many organizations , as the international monetary fund , the united nations and the national central banks . in particular ,little effort has been done to go beyond a detailed description of the effects of the financial crisis on the world trade ( and of the subsequent economic recession ) , after the triennium 2007 - 2009 . with the aim of filling this gap , and complementing the existing vast amount of literature on financial markets , in the present paper we analyse the bipartite world trade web ( hereafter wtw) , by employing a novel method to assess the statistical significance of a number of topological network properties across the period 1995 - 2010 :more specifically , we focus on a recently - proposed class of bipartite motifs , studying their evolution across the aforementioned period .our temporal window of sixteen years allows us to monitor the system both _ before _the year 2007 ( i.e. during the years preceeding the crisis ) and _ after _ it ( i.e. during the critical and post - critical phases ) : as we will show , the considered family of motifs clarifies the role played by the year 2007 in the economic framework of the world trade .our analysis suggests that this year marks a _ crossover _ from a phase characterized by a steep increase of randomness of the wtw topology - and individuates 2007 as the _ last _ year of this critical phase started _ earlier _ in time - to a phase during which a stationary regime seems to be finally reached .indeed , the abundances of the considered family of motifs point out that the crisis explicitly manifests itself after a period of four years during which the wtw has undergone a dramatic structural change .notably , the class of motifs introduced in allows us to focus on _ nodes subsets _ , too .in particular , we analyse the evolution , across the same temporal window , of subgroups of nodes predicated to show remarkable economic homogeneities , with the aim of detecting the effects of the worldwide crisis on various segments of trade - that we consistently define by adopting a classification into macrosectors , as agriculture , manufacture , services , etc . 
- and on the exports of several national economies - grouped into brics , pigs , g7 , etc .our analysis evidences that some sectors / groups of countries are more sensitive to the cycles of the worldwide economy , providing robust early - warning signals of the 2007 crisis ( and confirming the trends individuated at the global level ) ; others , instead , provide little information on its build - up phase .surprisingly , our study reveals also the existence of subsets of nodes which do not show any relevant internal correlation throughout the whole 1995 - 2010 window , thus questioning the correctedness of the reasoning leading to the individuation of such groups as homogeneous sets .the rest of the paper is organized as follows .section data describes the data set used for the present analysis ; section methods sums up the algorithm implemented for the present analysis , illustrating the null model and the statistical indicators we have considered ; section results presents the results of our analysis that we comment in the section conclusions .in this paper we represent the wtw as a bipartite , undirected , binary network where countries and products constitute the nodes of the two different layers , obeying the restriction that links connecting nodes of the same layer are not allowed .the data set used for the present analysis is the baci world trade database .the original data set contains information on the trade of 200 different countries for 5000 different products , categorised according to the 6-digits code of the harmonized system 2007 .the products division in sectors , instead , follows the un categorisation ; in order to apply it , we have converted the hs2007 code into the isic revision 2 code at 2-digits . in what followswe have proxied countries trade volumes by considering exports ( measured in us dollars ) : after the cleaning procedure operated by comtrade and the application of the rca threshold ( revealed comparative advantage - see the appendix for further details ) , we end up with a rectangular binary matrix ( i.e. the biadjacency matrix of our bipartite , undirected , binary network ) whose dimensions are and , respectively indicating the total number of countries ( for all years of our temporal window ) and the total number of products ( ) considered for the present analysis .the matrix generic entry is 1 if country exports an amount of product above the rca threshold ; otherwise , . in economic terms , each row represents the basket of exported products of a given country ; analogously , each column represents the set of exporters of a given product .the quantities we have considered for the present analysis are the bipartite motifs introduced in , i.e. the v- , - , x- , w- and m - motifs ( see fig .[ motifsapp ] ) . while v - motifs account for the total number of pairs of countries exporting the same products , i.e. with , -motifs account for the total number of pairs of products in the basket of the same countries , i.e. with . however , as shown in the methods section , v- and -motifs provide a rather limited information on the structure of the bipartite wtw : for this reason , we have focused our analysis on more complex ( closed ) patterns , i.e. the x- , w- and m - motifs , which can be compactly expressed as combinations of v- , or - , motifs . 
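In matrix form the two counts just defined are immediate to compute. A minimal NumPy sketch follows, in which a small random biadjacency matrix stands in for the real country–product matrix and the function name v_lambda_counts is ours: N_V sums the co-export counts over all unordered pairs of countries, and N_Lambda the co-occurrence counts over all unordered pairs of products.

```python
import numpy as np

rng = np.random.default_rng(1)
M = (rng.random((20, 50)) < 0.2).astype(int)   # toy biadjacency matrix (countries x products)

def v_lambda_counts(M):
    """Total number of V-motifs and Lambda-motifs of a binary biadjacency matrix."""
    C = M @ M.T          # C[c, c'] = number of products exported by both c and c'
    P = M.T @ M          # P[p, p'] = number of countries exporting both p and p'
    n_v = np.triu(C, 1).sum()        # sum over unordered country pairs
    n_lambda = np.triu(P, 1).sum()   # sum over unordered product pairs
    return int(n_v), int(n_lambda)

print(v_lambda_counts(M))
```

The closed motifs discussed next are built from the same co-occurrence matrices C = M M^T and P = M^T M.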
in economic terms , our motifs provide a measure of the overlap of countries baskets of products .in particular , x - motifs can be defined as combinations of pairs of v - motifs ( or -motifs ) , subtending the same two countries and products ( see fig .[ motifsapp ] ) : motifs quantify the degree of similarity of the exports of arbitrarily chosen countries , generalizing the measure provided by v - motifs ( applicable to pairs of countries only).,scaledwidth=50.0% ] in other words , x - motifs quantify the co - occurrence of any two countries as producers of the same couple of products ( and , viceversa , the co - occurrence of any two products in the basket of the same couple of countries ) . allowing for a higher number of interacting nodes , w - motifs and m - motifs( see fig .[ motifsapp ] ) can be defined , respectively enlarging the set of countries and the set of products to triples : in order to assess the statistical significance of our observations , the definition of a proper statistical benchmark ( or null model ) is needed : the aim of null models is precisely that of washing out the contribution of some lower - order constraints to the observed network structure . herewe follow ( which builds upon the works dealing with null models for monopartite networks ) and consider the benchmark provided by the _ bipartite configuration model _ ( bicm hereafter ) .in analogy with the monopartite case , such a null model is defined by the constraints represented by the degree sequence of nodes ( i.e. the number of connections for each node ) on both layers ( see methods for further analytical details ) .let us start by comparing the observed abundances of our x- , w- and m- motifs in the real network with the corresponding expected values in the null case plotting , first , the motifs - specific box plots : as specified in the methods section , the latter are intended to sum up the year - specific bicm - induced ensemble distributions through a bunch of percentiles ( see fig .[ motifsz ] , where the distribution mean is explicitly shown as a yellow cross , while the observed abundances of motifs are represented by turquoise - colored dots ) .the observed trends of all motifs suggest the presence of four distinct temporal phases ( 1995 - 2000 , 2000 - 2003 , 2003 - 2007 and 2007 - 2010 ) , the last one starting with a sudden trend inversion in correspondence of 2007 : this evidences that the evolution of our motifs correctly points out the critical character of the year 2007 , after which an overall contraction of the worldwide trade ( which continues at a higher rate in the following years ) is clearly visible .such a contraction is related to a decrease of the number of trades among countries , since the evolution of the total number of links is characterized by a similar trend : this finding highlights that the topological structure of the wtw has indeed started changing since 2007 , in turn confirming that the financial crisis has also affected the world economy . turquoise - colored dots in panels a , b , c of fig .[ motifsz ] represent the abundances of closed motifs measured on the real network .clearly , the results of a purely empirical analysis can point out the peculiarity of the year 2007 only _ a posteriori _, i.e. 
only _ after _ that the economic recession has manifested itself : stated otherwise , no early - warning signals of the upcoming crisis are provided by the observed trends of motifs .rather , a nave estimation of the future trends of such indicators , solely based on their pre-2007 history , would have probably predicted a trend increase , thus failing in detecting the approaching crisis ( similarly , many economic indicators registered a growth in virtually all countries during the years preceeding the crisis ) .a deeper insight on the system evolution can be obtained by quantifying the discrepancy between the observed abundances of motifs and the corresponding predicted values under the aforementioned null model , i.e. the bicm ( see panels a , b , c of fig .[ motifsz ] ) . indeed , by considering the variation of the turquoise dot position within the interquantile interval, one realizes that the statistical significance of the observed abundances progressively diminishes as 2007 approaches . a compact way of quantifying this tendency is represented by the computation of the motifs -scores , as panel d of fig .[ motifsz ] shows ( the ensemble distributions of our motifs abundances are very accurately approximated by gaussians - see fig .[ check ] in appendix - allowing for the usual interpretation of -scores - see also methods ) .the analysis of -scores reveals us which one of the aforementioned four phases are genuinely statistically significant , drawing an interesting picture .worth to be noticed is the rate of increase of randomness across 2003 , which seems to play a discriminating role between two distinct phases ( thus ruling out any intermediate trend inversions ) : from 1995 to 2003 , the wtw is described by the practically constant level of statistical significance of all the considered motifs ; from 2003 on , the wtw undergoes an evident structural change . in particular , the wtw becomes more and more compatible with the bicm during the quinquennium 2003 - 2007 : the transition towards less negative -scores indicates a progressive increase of the abundance of x- , w- and m- motifs during the considered time - period , pointing out the tendency to establish a number of motifs more and more similar to the expected one . since our null model preserves only the degree sequence while randomizing the rest of the network topology , this implies that 2003 - 2007 can be regarded as a period characterized by a steep increase of randomness ( and , as a consequence , by a loss of internal structure ) , leading e.g. 
the observed abundance of m - motifs to be exactly reproduced by our null model in 2007 .the evolution of our motifs -scores contributes to shed light on the actual role played by the year 2007 in the economic context of world trade .as our analysis suggests , the truly crucial phase experienced by the wtw is represented by the quinquennium 2003 - 2007 : the year 2007 , as fig .[ motifsz ] illustrates , is the ending point of such a phase started _ earlier _ in time .the loss of internal structure affecting the international trade is related to an increasing similarity of the export basket of different countries .since this effect is not imputable to an increase of the number of country - specific exports before 2007 - our null model automatically discounts it - it seems to represent a genuine tendency of countries , in particular of emerging economies , which tend to align their exports with those of the advanced ones , as it has been recently pointed out .this tendency may have made the whole system more fragile in the years preceding the worldwide crisis , thus negatively impacting on the systemic resilience to propagation of financial and economic shocks .interestingly , it has been recently noticed that an improvement of the world - economy stability may be gained upon adopting strategies similar to those of ecosystems , which call for a larger ( economic ) adaptability based on production and export variety . in the previous subsectionwe have considered the correlations between nodes only at a global level ; let us now focus on specific groups of products or countries .notably , all the aforementioned motifs can be defined onto specific subgroups of nodes as well .for instance , the correlations between products within the same sector can be evaluated by computing .similarly , the abundance of restriced x- , w- and m - motifs can be computed by applying definitions [ xx ] and [ wm ] to restricted v- and -motifs .we would like to stress that , whenever the computation of v- and -motifs is restricted to subsets of nodes , their z - scores are no longer functions of global quantities ( as the total number of links , the total number of nodes , etc . - see the methods section ) , thus providing non - trivial information on the network structural organization .let us consider the subsets of products known as `` macrosectors '' in the economic jargon ( see the appendix for the list of sectors considered in the present paper ) . according to the economic theory , macrosectors are closely related to the so - called vertical markets , i.e. 
markets in which similar products are developed using similar methods .from the perspective of trade , this leads to the assumption that products belonging to the same sector are ( produced and ) exported together ; this , in turn , translates into the expectation of detecting macrosector - specific patterns showing statistically significant evidences of the predicated similarity and , by converse , leads to interpret systematic changes of such a significance as sectorial early - warning signals .according to many observers , virtually all sectors and countries have been exposed to the effects of the worldwide crisis of 2007 .[ lambda_tot ] shows the evolution of the observed abundances of restricted -motifs ( measured within the products belonging to the same sector ) for all macrosectors : our plots confirm the tendency highlighted in , showing trend inversions in correspondence of the year 2007 which indicate a contraction of trade in , practically , all its segments .nonetheless , only part of the latter seems to provide early - warning signals of the 2007 crisis .-motifs , measured within the products belonging to the same sector ( see the legenda in the bottom panel ) .the trend inversions in correspondence of the year 2007 indicate a contraction of trade in , practically , all its segments .values for each specific year , , have been normalized to range within ] : any discrepancy between observations and expectations leading to values can thus be interpreted as statistically significant . even if -scores have been recognized to be dependent on the network size , our data set collects matrices with very similar volume , leading us to assume this effect to be negligible for making inference on the network evolution . on the other hand , whenever the ensemble distribution of the quantity deviates from a gaussian ( e.g. whenever we deal with rare events ) -scores can not be interpreted in the aforementioned straight way and an alternative procedure to make statistical inference is needed : in these cases we have computed the box plots .box plots are intended to sum up a whole probability distribution by showing no more than five percentiles : the 25th percentile , the 50th percentile and the 75th percentile ( usually drawn as three lines delimitating a central box ) , plus the and the 99.85th percentiles ( usually drawn as whiskers lying at the opposite sides of the box ) .box plots can thus be used not only to provide a description of the ensemble distribution of the quantities of interest ( in the present work , sampled by drawing 5000 matrices ) , but also to assess the statistical significance of their observed value against the null hypothesis represented by the bicm .in fact , inference can be made once the amount of probability not included in the range spanned by the whiskers is known ( in our case , at 99.7% confidence level ) . in the mathematical literature ,our motifs are also known as _ bicliques _ ,i.e. complete subsets of bipartite graphs , where every node of the first ( sub-)layer is connected to every node of the second ( sub-)layer . in the jargon of graph theory , x - motifs are known as graphs , w - motifs are known as graphs and m - motifs are known as graphs ( the first index referring to the countries layer and the second index referring to the products layer ) .interestingly , they have also been shown to provide a meaningful insight into the organization of biological networks ( as genetic , proteic , epidemic ones ) . 
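To make the procedure just outlined concrete, the following sketch (NumPy only; a toy matrix replaces the real data, 1000 samples replace the 5000 drawn in the paper, and the fixed-point iteration is simply one convenient way of fitting the model) fits the BiCM link probabilities p_cp = x_c y_p / (1 + x_c y_p) to the observed degree sequences, samples matrices from the ensemble, and computes the z-score of the X-motif abundance.

```python
import numpy as np

rng = np.random.default_rng(2)
M = (rng.random((15, 30)) < 0.3).astype(int)      # toy country x product matrix

def n_x(M):
    """X-motif abundance: pairs of V-motifs sharing the same two countries and products."""
    V = M @ M.T
    v = V[np.triu_indices_from(V, 1)]
    return float((v * (v - 1) / 2).sum())

# --- fit the bipartite configuration model (BiCM) to the observed degrees ---
k, h = M.sum(axis=1), M.sum(axis=0)               # country and product degrees
x, y = np.ones(len(k), float), np.ones(len(h), float)
for _ in range(5000):                             # simple fixed-point iteration
    p = np.outer(x, y)
    x = k / (y / (1.0 + p)).sum(axis=1)
    p = np.outer(x, y)
    y = h / (x[:, None] / (1.0 + p)).sum(axis=0)
P = np.outer(x, y) / (1.0 + np.outer(x, y))       # link probabilities p_cp
print("max degree mismatch:", float(np.abs(P.sum(axis=1) - k).max()))

# --- sample the ensemble and compute the z-score of the observed abundance ---
samples = np.array([n_x((rng.random(P.shape) < P).astype(int)) for _ in range(1000)])
z = (n_x(M) - samples.mean()) / samples.std()
print("z-score of the X-motif abundance:", round(z, 2))
```

Replacing n_x with the analogous W- or M-motif counters, or restricting the sums to a subset of rows or columns, yields the sector- and group-specific z-scores discussed in the text.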
as stated in results section ,v- and -motifs provide a rather limited information on the structure of the bipartite wtw : a proof of this statement follows .let us recall the definition of v- and -motifs : with and with .the -score of v- and -motifs can be computed analytica ; following and considering that the bipartite wtw is quite sparse across the whole temporal interval ( having , on average , link density , with a variation of ) , the expressions for and simplify to and , i.e. to functions of , which shows a unique trend inversion in correspondence of the year 2007 , whence their little informativeness about early structural changes .in order to retain only the relevant export data , we adopted the revealed comparative advantage ( rca ) threshold where is the amout of product exported by country and the sums and indicate , respectively , the weighted diversification of country and the weighted ubiquity of product . the recipe provided by rca allows us to select the product whose impact on the export basket of country is larger than the impact of on the global market . if , and the corresponding entry of the binary matrix is ; on the other hand , implies that .although the macrosectors `` manufacture of chemicals '' , `` manufacture of fabricated metal products '' and `` textile '' are characterized by the highest values of internal similarity ( thus matching better the definition of `` macrosectors '' in the economic jargon ) , they seem to be insensitive to the approaching crisis , showing remarkably stable tendencies throughout our whole temporal window ( see fig . [ productsz3 ] ) . as shown in the section results , the most informative sectors on the approaching crisis are those characterized by a moderately high level of internal similarity . in the following subsection, we provide the explicit definition of two additional topological quantities , in order to corroborate our findings with the analysis of more traditional network quantities . by applying the same kind of analysis presented in the main text, we show that early - warning signals of the crisis are detectable even analysing nestedness and assortativity .[ [ section ] ] loosely speaking , the degree of `` triangularity '' of the observed biadjacency matrix can be quantified according to a number of measures recently proposed under the common name of nestedness .here we adopt the one proposed in , called nodf , an acronym for `` nestedness metric based on overlap and decreasing fill '' . since the total value of nestedness ( ) is the weighted average of the contribution from rows ( ) and the contribution from columns ( ) , we have considered all of them for the present analysis .naturally , the adopted measure of nestedness does not depend on the rows and columns ordering criterion .if we indicate with the number of columns and with the number of rows , then whose row- and column - specific contributions are provided by where } ] ( if and 0 otherwise ) . [[ section-1 ] ] the second global quantity we have considered to characterize the wtw evolution is the assortativity measure $ ] proposed in , with indicating the maximum observable correlation between degrees ( thus measuring the strongest tendency of links to connect nodes with similar degrees ) and indicating the minimum observable correlation between degrees ( thus measuring the strongest tendency of links to connect nodes with different degrees ) .the definition of the assortativity coefficient can be found in . 
bymaking it explicit which links contribute to the sums at the numerator and at the denominator , such a coefficient can be rewritten more clearly in terms of the bipartite nodes degrees , as [ [ section-2 ] ] fig .[ nodf]a shows the evolution of the observed abundances of nestedness ( nodf in blue ; nodf in pink ; nodf in purple ) and assortativity ( in brown ) . in particular ,all of them provide evidence of the peculiar character of year 2007 : all trends , in fact , even if characterized by opposite behaviors as the row - specific nodf and assortativity , show an inversion in correspondence of it .-scores , pointing out the peculiarity of the year 2003 in discriminating between a statistically significant phase ( 1995 - 2003 ) and a phase consistent with our null model ( 2003 - 2007 ) .lines joining the dots simply provide a visual aid , scaledwidth=50.0% ] .[ nodf ] let us now carry on the comparison between the aforementioned observed trends and their expected counterparts under the bicm , by plotting the corresponding -scores . as shown by fig .[ nodf]b , nodf and nodf are always consistent with our null model , even if they gradually come closer to the border as 2007 approaches .again , the rate of increase of randomness across and after 2003 is notable . in particular ,the total nodf spans one sigma of statistical significance in just four years ( 2003 - 2007 ) , the same range having been spanned across the previous nine years . as for their observed counterparts ,the assortativity -score is characterized by an opposite trend ; in particular , it provides a clear statistical signal in 2003 by crossing the significance bound .this means that the degree of assortativity of the network becomes less and less significant , to become compatible with the bicm - induced random value four years before 2007 ; it then steadily rises from 2003 until 2008 and seems to maintain such a value afterwards .a more evident signal , confirming the increasingly random character of the network , is provided by the -score of the row - specific nodf , whose analysis allows us to clearly distinguish two distinct phases , the first one lasting from 1995 to 2003 ( characterized by a decreasing trend ) and the second one lasting from 2003 to 2010 ( characterized , instead , by an almost constant trend ) .the biennium across 2003 seems to constitute a somehow crucial period , defined by a decrease of statistical significance of two sigmas ( from to ) ; analogously to what already observed for nodf , the same loss of statistical significance ( from to ) was spanned by nodf in the preceding eight years . as for motifs, the -scores trends considered so far agree in pointing out the peculiarity of 2003 as the year in which the network gets through two different regimes , from a `` structured '' phase , not compatible with our null model , to a increasingly random phase , where nodes correlations become increasingly similar to their random counterpart. 99 hatzopoulos v. & iori g. information theoretic description of the e - mid interbank market : implications for systemic risk .report no .12/04 ( 2012 ) .gabrieli s. the microstructure of the money market before and after the financial crisis : a network perspective ._ ceis tor vergata , research paper series _ * 9*(1 ) , 181 ( 2011 ) .roukny t. , georg c .- p . & battiston s. a network analysis of the evolution of the german interbank market ._ research centre of deutsche bundesbank - discussion papers series _ , report no . 22/2014 ( 2014 ) .squartini t. 
this work was supported by the eu project growthcom ( 611272 ) , the italian pnr project crisis - lab and the eu project simpol ( grant num . 610704 ) . the authors acknowledge andrea tacchella for the data cleaning and giulio cimini , matthieu cristelli , emanuele pugliese and antonio scala for useful discussions . fs and rdc analysed the data and prepared all figures . ag wrote the article . ts planned the research and wrote the article . all authors reviewed the manuscript .
|
since 2007 , several contributions have tried to identify early - warning signals of the financial crisis . however , the vast majority of analyses has focused on financial systems and little theoretical work has been done on the economic counterpart . in the present paper we fill this gap and employ the theoretical tools of network theory to shed light on the response of world trade to the financial crisis of 2007 and the economic recession of 2008 - 2009 . we have explored the evolution of the bipartite world trade web ( wtw ) across the years 1995 - 2010 , monitoring the behavior of the system both _ before _ and _ after _ 2007 . our analysis shows early structural changes in the wtw topology : since 2003 , the wtw becomes increasingly compatible with the picture of a network where correlations between countries and products are progressively lost . moreover , the wtw structural modification can be considered as concluded in 2010 , after a seemingly stationary phase of three years . we have also refined our analysis by considering specific subsets of countries and products : the most statistically significant early - warning signals are provided by the most volatile macrosectors , especially when measured on developing countries , suggesting the emerging economies as being the most sensitive ones to the global economic cycles .
|
to solve the problem of detecting modules in the presence of trees , we introduce the idea of a reluctant backtracking random walker that admits a small probability of returning to a node immediately . the reluctance , but not impossibility , of immediately returning to a node mitigates network hub effects on the spectrum of the operators , while allowing the walker to explore and return from hanging trees , unlike the non - backtracking operator or flow matrix . based on this idea of reluctance , we define two new reluctant backtracking operators whose matrix elements are
\begin{aligned}
r_{j \to i ,\, i \to k } &= ( 1-\delta_{jk } ) + \frac{\delta_{jk}}{d_j } , \\
r^{n}_{j \to i ,\, i \to k } &= \left [ ( 1-\delta_{jk } ) + \frac{\delta_{jk}}{d_j } \right ] \frac{1}{d_i -1 + \frac{1}{d_j } } ,
\end{aligned}
where r_{j \to i ,\, i \to k } ( and its normalised counterpart r^{n } ) represents the probability that the random walker shall move from node j to node k with node i as the intermediate node , and d_i denotes the degree of node i . the probability of returning to a node for both operators is inversely proportional to the degree of that node , thus strongly discouraging a return to a high degree node . the operator r is a reluctant version of the non - backtracking operator as it allows the additional probability of returning immediately to the node . the operator r^{n } is a normalised version of the operator r , just like the flow operator is a normalised version of the non - backtracking operator . for the normalised operators r^{n } and f , the probability of reaching a node is equal to the probability of leaving a node , akin to the conservation of current at each node in an electrical network . the procedure for detecting the communities is identical for both operators . given the adjacency matrix of a network , we first generate one of the matrices r or r^{n } . following krzakala et al . , we calculate its second largest absolute real eigenvalue and the associated eigenvector . the eigenvector has one element corresponding to each directed edge in the network . we group the elements of the eigenvector by the source node of each edge and sum them up to create a new vector that has one element corresponding to each node in the network . we divide the network into two communities by grouping all nodes that have the same sign ; the sign of each element represents the operator 's estimate of the node 's community . the indifference of non - backtracking operators towards trees can impair their ability to detect communities in networks . as an extreme case , consider the network suggested by newman : a network composed of two binary trees connected at a single node . the non - backtracking operator and the flow matrix can not detect communities in such a network , but the reluctant backtracking operators r and r^{n } do . we show this using a network composed of two communities a and b , where each community is a tree and the two communities are connected by a _ single _ node . the ratio of the number of nodes in community b to that in community a is the parameter we vary . figure [ fig : example ] shows the performance of the reluctant backtracking operators in detecting the two communities in these simulated networks . the number of nodes in community a is fixed at 400 and 500 in figure [ fig : example ] panel ( a ) and panel ( b ) respectively , and the number of nodes in community b varies .
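before turning to the results , the detection procedure just described can be sketched as follows ; this is a dense illustration for small networks , and the indexing convention ( transitions from edge j -> i to edge i -> k ) as well as all names are assumptions of the sketch rather than the authors ' implementation .

```python
import numpy as np

def reluctant_communities(A):
    """Two-way community split from the normalised reluctant backtracking operator.

    A : (n, n) symmetric 0/1 adjacency matrix without self-loops.
    Returns an array of +1 / -1 labels, one per node.
    Dense sketch for small networks; a sparse eigensolver would be used in practice.
    """
    n = A.shape[0]
    deg = A.sum(axis=1)
    edges = [(u, v) for u in range(n) for v in range(n) if A[u, v]]  # 2M directed edges
    idx = {e: i for i, e in enumerate(edges)}

    Rn = np.zeros((len(edges), len(edges)))
    for (j, i) in edges:                      # walker sits on i, having arrived from j
        norm = deg[i] - 1.0 + 1.0 / deg[j]    # total outgoing weight of the edge j -> i
        for k in np.flatnonzero(A[i]):
            w = 1.0 / deg[j] if k == j else 1.0   # reluctant (1/d_j) return to j
            Rn[idx[(j, i)], idx[(i, k)]] = w / norm

    # second largest (in absolute value) real eigenvalue and its eigenvector
    vals, vecs = np.linalg.eig(Rn)
    real = np.flatnonzero(np.abs(vals.imag) < 1e-8)
    order = real[np.argsort(-np.abs(vals[real].real))]
    v = vecs[:, order[1]].real                # assumes at least two real eigenvalues

    # sum the eigenvector entries over edges sharing the same source node, then take signs
    score = np.zeros(n)
    for (u, _), val in zip(edges, v):
        score[u] += val
    return np.where(score >= 0, 1, -1)
```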
when the size of the two communities is comparable , the reluctant operators detect communities perfectly since the two communities are almost disconnected except one connection between the two communities and random walkers remain within the same community for substantial periods of time .there is a sharp transition in the ability of the reluctant backtracking operators around in the network where community consists of 400 nodes .when one community becomes much smaller than the other , random walkers keep moving to the larger community from the small community in a short period of time and leads to the loss of performance .there is nothing universal about the transition point fraction as the transition point is a function of the number of nodes in the network and changes to when community has 500 nodes .the exact nature of transition in performance around the transition point is dependent on many factors such as the structure of the network , total number of nodes in the network and the relative sizes of different communities . as a fraction of nodes in community .the triangles and squares show the performance of the two operators in detecting communities as measured by the normalised mutual information ( nmi ) : , where means perfect community detection and means random allocation of nodes to communities ( see methods for more details ) .( a ) 400 nodes in community .number of nodes in community varies from from 40 to 400 .( b ) 500 nodes in community .number of nodes in community b varies from 50 to 500.,title="fig : " ] + networks composed solely of trees are of course very artificial , but we also show that reluctant backtracking operators can detect communities in a more plausible network where the non - backtracking operators fail . consider a more typical network , created by the classic stochastic block model .the addition or deletion of hanging trees to this network or any other will not affect the eigenspectrum of the non - backtracking operator . however , the presence / absence of hanging trees can significantly alter the structure of communities in such a network .stochastic block models provide an easy recipe for constructing networks with specified inter - community and intra - community edge probability .consider a network of nodes with two communities .the probability of an edge between nodes and is given by let be the average degree of the network and denote the degree of mixing between communities in the network .no mixing between the communities implies and complete mixing between the 2 communities implies .we demonstrate the effect of hanging trees by selectively adding leaves to a network based on the stochastic block model .we create a stochastic block model network with 2 communities , each with 500 nodes , .we add one leaf to each node whose number of connections within the community exceeds its connections outside its community by at least 3 .this selects the nodes whose degree is greater than the median and whose membership is slightly ambiguous .figure [ fig : leaves ] shows the eigenspectrum and performance of the non - backtracking operator and reluctant backtracking operator for such a network .the non - backtracking operator does not detect two communities as there is only one real eigenvalue outside the bulk .the additional information provided by the leaves is not available to the non - backtracking operator . 
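before discussing how the reluctant operator handles this case , here is a minimal sketch of the benchmark construction just described ( a two - group stochastic block model with selectively attached leaves ) ; the edge probabilities p_in and p_out are placeholder parameters , since the original parametrisation in terms of average degree and mixing is not reproduced here .

```python
import numpy as np

def sbm_with_leaves(n_per_group=500, p_in=0.01, p_out=0.004, margin=3, seed=0):
    """Two-group stochastic block model with a leaf attached to selected nodes.

    p_in / p_out are placeholder within- and between-group edge probabilities.
    A leaf is attached to every node whose within-group degree exceeds its
    between-group degree by at least `margin`, as described in the text.
    Returns the adjacency matrix and the group label of every node (leaves
    inherit the label of the node they hang from).
    """
    rng = np.random.default_rng(seed)
    n = 2 * n_per_group
    labels = np.repeat([0, 1], n_per_group)
    same = labels[:, None] == labels[None, :]
    prob = np.where(same, p_in, p_out)
    upper = np.triu(rng.random((n, n)) < prob, k=1)
    A = (upper | upper.T).astype(int)

    k_in = ((A == 1) & same).sum(axis=1)
    k_out = A.sum(axis=1) - k_in
    hubs = np.flatnonzero(k_in - k_out >= margin)

    B = np.zeros((n + len(hubs), n + len(hubs)), dtype=int)
    B[:n, :n] = A
    for leaf, node in enumerate(hubs, start=n):   # attach one degree-1 leaf per selected node
        B[leaf, node] = B[node, leaf] = 1
    return B, np.concatenate([labels, labels[hubs]])
```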
on the other hand ,the reluctant backtracking operator accounts for the leaves in the network and its second eigenvector successfully detects two communities . denotes the number of nodes in the network . denotes the number of undirected edges in the network .all the random walk operators are square matrices of order . a ) eigenvalues of a representative non - backtracking matrix .note that there is only 1 real eigenvalue outside the bulk .b ) eigenvalues of a representative reluctant backtracking matrix.,title="fig : " ] + the quality of community detection is inversely proportional to the degree of mixing between different communities in a network .theoretical considerations predict that performance of any spectral method falls to zero below a predictable mixing threshold for simulated networks based on the stochastic block model .the networks become spectrally indistinguishable from erds - rnyi random graphs below the predicted mixing threshold , and therefore no communities can no longer be detected by spectral methods .the minimum network mixing variable , where any spectral method can detect communities in networks based on the stochastic block model is labeled the threshold limit .consequently , simulated networks based on the stochastic block model serve as a useful benchmark for testing the performance of different community detection methods .krzakala et al . showed that the non - backtracking operator can detect communities in sparse networks right down to the theoretical limit where other spectral methods fail .figure [ fig : blockmodel ] shows the performance of the four operators and on a set of networks based on the stochastic block model with nodes with constant average node degree and varying degrees of mixing between communities( ) .the reluctant backtracking operator s performance is comparable to the non - backtracking matrix , but the operator performance falls to chance above the threshold limit .thus the reluctant backtracker accounts for hanging trees in a network , yet there is no penalty for detecting communities down to the theoretical limit .each data point shows the mean and standard deviation of nmi for the different operators as applied to networks with the given mixing parameters ., title="fig : " ] + table [ table : real_data ] and figure [ fig : real_data ] compares the effectiveness of the reluctant and non- backtracking matrices on three real world data sets : zachary karate club , dolphins and word adjacencies . in figure[ fig : real_data ] we plot the distribution of eigenvalues of each operator , showing that both the non - backtracking ( , ) and reluctant - backtracking ( , ) operators have more than one outlying eigenvalue and can thus detect community structure in these networks .the reluctant backtrackers detect communities comparably to their respective non - backtracking counterparts , and there is no loss of performance when using the reluctant matrices rather than the non - backtracking matrices .rather , we found that the main difference in performance depended on whether or not the operators are normalised .this is particularly striking for the dolphin social network , for which the normalised operators perform similarly and both markedly better than the unnormalised versions . and , where is degree , and is an average .these bounds were derived for the stochastic block model , so are used here as an heuristic guide for the distribution of eigenvalues resulting from the real - world networks , and computed using their degree distribution . 
[ n denotes the number of nodes in the network and m the number of undirected edges ; all the random walk operators are square matrices of order 2m . ] newman showed that the second leading eigenvector of the flow matrix maximises the widely - used modularity function , connecting the non - backtracking method to the idea of community detection as an optimisation problem . we show that the reluctant backtracking operator also approximately optimises the modularity function . assume an unweighted undirected network of size n with m edges specified by the adjacency matrix a. the modularity function is defined as
\[ q = \frac{1}{2m}\sum_{ij}\left [ a_{ij } - \frac{d_i d_j}{2m } \right ] \delta_{g_i g_j } , \]
where a_{ij } denotes the presence / absence of an edge between nodes i and j , d_i the degree of node i , g_i the group membership of node i , and m the number of edges in the network . following newman 's setting and notation , assume that the network is divided into two communities and define the n - dimensional group membership vector \mathbf{s } with elements s_i = \pm 1 denoting the membership of each node in the network . we define the quadratic form \mathbf{v}^{t}\mathbf{r}^{n}\mathbf{w } over vectors indexed by the 2m directed edges . if we make the particular choice \mathbf{v } = \mathbf{w } = \mathbf{s}^{\prime } with s^{\prime}_{i \to j } = s_i , meaning that the elements of both vectors are equal to the group index of the node from which the corresponding edge _ emerges _ , then
\begin{aligned}
{\mathbf{s}^{\prime}}^{t}\mathbf{r}^{n}\mathbf{s}^{\prime }
&= \sum_{ijk } a_{ij } a_{ik } \left [ \frac{\delta_{jk}}{d_j } + ( 1-\delta_{jk } ) \right ] \frac{1}{d_i -1 + \frac{1}{d_j } } s_j s_i \nonumber\\
& = \sum_{j } s_j \sum_{ik } \bigg[\frac{1}{d_i -1 + \frac{1}{d_j } } \frac{1}{d_j } a_{ik } a_{ij } \delta_{jk}+ \frac{1}{d_i -1 + \frac{1}{d_j } } ( 1-\delta_{jk})a_{ik } a_{ij } \bigg ] s_i \nonumber\\
& = \sum_{j } s_j \sum_{i } a_{ij}s_i\bigg[\frac{1}{d_i -1 + \frac{1}{d_j } } \Big ( \frac{1}{d_j } + d_i -1 \Big ) \bigg]\nonumber\\
& = \sum_{j } s_j \sum_{i } a_{ij}s_i \nonumber \\
& = \mathbf{s}^{t}\mathbf{a}\mathbf{s } .
\end{aligned}
it follows that the normalised reluctant backtracker also ( approximately ) optimises the modularity function , and therefore our spectral solution coincides with newman 's . we summarise newman 's solution here ; refer to his work for further details . solving the resulting maximisation problem exactly is hard , but an approximate solution can be found by standard relaxation techniques : allow the elements of \mathbf{s } ( and correspondingly of \mathbf{s}^{\prime } ) to independently take any real value rather than only \pm 1 , and apply the constraint that the norm of the vector is fixed . this modified problem can be solved by the method of lagrange multipliers . introducing the multiplier and differentiating with respect to the elements of the vector shows that the second leading real eigenvector of \mathbf{r}^{n } ( like that of the flow matrix ) exactly optimises the relaxed problem . we arrive at the approximate solution of the original unrelaxed problem by setting s_i = \mathrm{sign}\big ( \sum_{j } s^{\prime}_{i \to j } \big ) , i.e. we sum up all the elements of the eigenvector that emerge from node i and assign s_i = + 1 if the sum is positive or s_i = -1 if it is negative . this is very similar to the algorithm used by krzakala et al .
with the difference that we sum up edges emerging from a node rather the ones incident upon it .we propose a new reluctant backtracking operator to detect communities in sparse networks that accounts for hanging trees .unlike other recent operators such as the non - backtracking matrix and the flow matrix , the reluctant backtracking operator accounts for the presence of hanging trees in a network and its eigenspectrum is shaped by their presence .we demonstrate the utility of the reluctant backtracking operator by detecting communities in simulated networks where the non - backtracking matrix is unable to do so and also show a comparable ability to detect communities in benchmark simulated and real networks .newman showed that the second leading eigenvector of the flow matrix approximately maximises the modularity function by ensuring conservation of probability at each node similar to the conservation of electric current at each node in an electrical network . following a similar argumentwe also show that the eigenvector of the normalised reluctant backtracking matrix approximately maximises the modularity function .an interesting future problem is to extend this method to reliably detect more than two communities .determining the number of communities in a network is a problem by itself and knowing the number of communities in a network can improve the performance of community detection methods .krzakala et al . suggested a heuristic to determine the number of communities in a given network when using the non - backtracking matrix .they derived an approximate analytical bound for the uninformative eigenvalues lying inside the bulk for sparse stochastic block model networks and found that the number of real - valued eigenvalues lying outside the bulk s radius served as a good heuristic to estimate of the number of modules in model networks .newman derived a similar bound for the flow matrix . when applied to real - world networks ,a further heuristic is to compute these bounds using the mean degree of the real - world network and use them as a guide to the number of modules in that network .we plot these approximated bounds for our sample of real - world networks in figure [ fig : real_data ] ; we note that , like the flow matrix , the eigenvalue distribution for our normalised reluctant backtracker is particularly well - behaved with respect to the approximated bounds compared to the unnormalised matrices .we leave the determination of the bound for the reluctant operators for future work , as they do not follow simply from those derived for the non - backtracking matrices . however , because of the approximations involved , the heuristic can fail for real and simulated networks , by predicting too many real - valued eigenvalues outside the bulk and thus predicting too many modules . 
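as a quick numerical sanity check of the modularity connection derived above ( the quadratic form of the normalised reluctant backtracker equals \mathbf{s}^{t}\mathbf{a}\mathbf{s } ) , the sketch below builds r^{n } for a small hand - made graph and compares the two sides ; it is illustrative only , and all names and the indexing convention are assumptions .

```python
import numpy as np

def quadratic_form_check(A, s, tol=1e-10):
    """Verify numerically that s'^T R^n s' equals s^T A s.

    A : symmetric 0/1 adjacency matrix; s : +/-1 membership vector.
    s' is the 2M-vector whose entry for the directed edge u -> v equals s_u,
    the group index of the node the edge emerges from.
    """
    n = A.shape[0]
    deg = A.sum(axis=1)
    edges = [(u, v) for u in range(n) for v in range(n) if A[u, v]]
    idx = {e: i for i, e in enumerate(edges)}

    Rn = np.zeros((len(edges), len(edges)))
    for (j, i) in edges:
        norm = deg[i] - 1.0 + 1.0 / deg[j]
        for k in np.flatnonzero(A[i]):
            Rn[idx[(j, i)], idx[(i, k)]] = (1.0 / deg[j] if k == j else 1.0) / norm

    sp = np.array([s[u] for (u, _) in edges], dtype=float)   # s'_{u->v} = s_u
    lhs = sp @ Rn @ sp
    rhs = float(s @ A @ s)
    return abs(lhs - rhs) < tol, lhs, rhs

# a 6-cycle with one chord, split into two groups of three nodes
A = np.zeros((6, 6), dtype=int)
for u, v in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0), (1, 4)]:
    A[u, v] = A[v, u] = 1
s = np.array([1, 1, 1, -1, -1, -1])
print(quadratic_form_check(A, s))   # expect (True, x, x) with matching values
```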
the optimisation of modularity by the second eigenvector of both the flow and normalised reluctant - backtracker matrices suggests two further solutions for finding more than two communities .the first solution is a more cautious approach that treats the total number of real eigenvalues outside the approximated bulk radius as an upper limit for the number of communities in the network .we can identify these communities by first taking each of the eigenvectors corresponding to the eigenvalues ( remembering that we start from the second eigenvector ) and converting them into a length vector as before we sum over the eigenvector entries corresponding the same source node .we can then cluster in the space defined by these node vectors , using a standard clustering algorithm such as -means : we cluster for each $ ] , and compute for each , retaining the clustering that maximises .the second solution is to apply the iterative bisection algorithm from .we initially divide the network into two communities using the second leading eigenvector of or , then iteratively divide each sub - division using the same algorithm .we compute for each sub - division ( adjusted to account for the remainder of the network ) , stopping when .the difference in performance between the normalised and non - normalised versions of the operators on the real - world networks hints that normalisation is incorporating more information about the network s structure than is available to the unnormalised operator .normalisation adds information about the degree of the transition node in the path to each non - zero element of the matrix of the normalised operators and .by contrast , each path from node in the non - backtracking matrix has an equal weight of 1 irrespective of the degree of the intermediate node .this new information affects the eigenspectrum of the normalised operators , and thus likely leads to the observed differences in community detection performance .precisely how and when this additional information is beneficial for detecting communities is presently unclear , and is the subject of future work .given a network with two partitions that label the community membership of each node , normalised mutual information ( nmi ) quantifies the overlap between these two partitions .nmi serves as a metric to quantify the absolute performance of a community detection method and compare the relative performance of different methods .assume a network with nodes and partitions and . is the subset of nodes in the network that belong to group in partition and is the subset of nodes in the network that belong to group in partition .let and be the number of groups in the partitions and respectively .the confusion matrix captures the overlap between the two partitions , its element counts the number of nodes common to the groups and . normalised mutual information defined as where nmi always lies between and ; only if the partitions and are identical and only if the partitions and are completely independent of each other .+ given the adjacency matrix of a network , we first generate one of the matrices .following krzakala et al . 
, we calculate its second largest absolute real eigenvalue and the associated eigenvector .the eigenvector has elements corresponding to each directed edge in the network .we group the elements of the eigenvector by the group index of the source node of each edge and sum them up to create a new vector that has elements corresponding to each node in the network .we divide the network into two communities by grouping all nodes that have the same sign ; the sign of each element represents the estimate of the reluctant backtracking operators of the node s community . if the network has less than 500 nodes , we calculated all the eigenvalues and eigenvectors using the eig function in matlab. if the network was larger than 500 nodes , we first calculated the largest 50 eigenvalues by magnitude and the associated eigenvectors using the eigs function in matlab that is suited for sparse matrices and is based on the implicitly restarted arnoldi iteration method .we then selected the eigenvalues whose complex part was less than and finally chose the eigenvalue with the second highest magnitude and its associated eigenvector . * author contributions * : as and mh designed the study . as analysed the data and prepared figures . as andmh wrote the manuscript . +* competing financial interests * : the authors declare no competing financial interests .
|
spectral algorithms based on matrix representations of networks are often used to detect communities but classic spectral methods based on the adjacency matrix and its variants fail to detect communities in sparse networks . new spectral methods based on non - backtracking random walks have recently been introduced that successfully detect communities in many sparse networks . however , the spectrum of non - backtracking random walks ignores hanging trees in networks that can contain information about the community structure of networks . we introduce the reluctant backtracking operators that explicitly account for hanging trees as they admit a small probability of returning to the immediately previous node unlike the non - backtracking operators that forbid an immediate return . we show that the reluctant backtracking operators can detect communities in certain sparse networks where the non - backtracking operators can not while performing comparably on benchmark stochastic block model networks and real world networks . we also show that the spectrum of the reluctant backtracking operator approximately optimises the standard modularity function similar to the flow matrix . interestingly , for this family of non- and reluctant - backtracking operators the main determinant of performance on real - world networks is whether or not they are normalised to conserve probability at each node . many networks have a modular structure . social networks contain communities of friends , collaborators , and dolphins ; brain networks contain groups of correlated neurons , circuits of connected groups , and regions of connected circuits . similarly modular networks occur across biological domains from protein interaction networks to food webs . this range of applications has driven the dramatic development of community detection " methods for solving the core problem of finding modules within an arbitrary network . especially popular are spectral methods based on the eigenvalues and eigenvectors of some matrix representation of the network . these combine speed of execution with considerable information about the network beyond the modular structure , including the relative roles of each node and characterisation of the network s dynamical properties . spectral methods can fail for a range of real networks . these methods rely on the eigenvalues falling into two classes , the vast majority the bulk " following a well - defined distribution , and the outliers from that distribution giving information about the modular structure . topological features of a network unrelated to its modules , such as network hub nodes with high degree , can distort this distinction by introducing eigenvalues outside the bulk that mix with those containing information about modules . sparse networks often contain such network hubs and the outlying uninformative eigenvalues cause the breakdown of spectral methods . unfortunately many real - world networks are sparse ( see table ii in and table 1 in ) . krzakala et al . proposed a new non - backtracking " matrix representation of a network that solves this problem : their matrix represents a random walker on the network who can not immediately return to a node it has just left . the eigenspectrum of the adjacency and related matrices is closely related to properties of a random walker traversing a network . in particular , the eigenspectrum depends on the frequency with which the walker passes through any given node . 
as the non - backtracking matrix forbids the random walker to return to its immediately previous node , network hubs are not visited disproportionately by this random walker and the spectrum of this random walker is not distorted by the presence of hubs in the network . following this , newman introduced the closely - related flow " matrix that conserved the probability for the random walk . spectral methods applied to these matrices successfully recover modules in sparse networks , down to the theoretical limit for their detection in classes of model networks . however , as noted by newman , these represent an incomplete solution as networks containing trees can not be handled elegantly . because the random walker could not escape from such a tree once entered , trees are ignored despite being candidates for separate modules . in this paper we introduce the reluctant backtracker " approach , which combines the advantages of these new matrix representations by retaining the power of spectral methods for sparse networks with the ability to detect and correctly handle networks with trees . we show that this comes with no penalty for detection performance compared to non - backtracking and flow matrices . rather , we show that the main difference in performance depends on whether or not such matrix representations are normalised to conserve probability . this finding hints at some deeper difference in network structure than modularity alone . we first outline the non - backtracking and flow matrix approaches . both these approaches and ours start from the same representation of the network . assume an unweighted undirected connected network with vertices and edges without self loops . we convert the undirected network into a directed network with edges by replacing the undirected edge with directed edges in both directions ; showing the direction of the edge . the binary non - backtracking matrix has elements , each element corresponding to a pair of directed edges in the network . its elements are given by the non - backtracking matrix is a sparse binary matrix . elements are non - zero only if corresponds to a directed path from to that passes through node with the restriction that nodes and must not be identical , i.e. no backtracking . this matrix encapsulates the biased random walker that is prohibited from returning to its immediately previous node . newman modified the non - backtracking matrix by changing the values of its non - zero elements and called it the flow matrix . the matrix is called the flow matrix based on an analogy with current flow in an electrical network . its elements are given by where is the degree of the node . consider the random walker that starts from node and is passing through node . according to the flow matrix , the random walker is can reach any of the nodes except node with equal probability . the probability of reaching node from node passing through node is . therefore , probability is conserved at node just like current is conserved at each node in an electrical network ; the amount of current entering a node must be equal to the current leaving a node . krzakala et al . and newman respectively showed that the 2nd leading eigenvector of the non - backtracking and flow matrices is very successful in correctly dividing sparse networks into modules .
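the element - wise definitions of the non - backtracking and flow matrices did not survive extraction above ; the sketch below uses the standard forms consistent with the verbal description ( b has a unit entry for every non - backtracking pair of directed edges , and f divides each such entry by d_v - 1 so that probability is conserved ) . the indexing convention is an assumption of this sketch .

```python
import numpy as np

def nonbacktracking_and_flow(A):
    """Non-backtracking matrix B and flow matrix F of an undirected network.

    B[(u->v), (v->x)] = 1 whenever x != u (no immediate backtracking);
    F divides each such entry by d_v - 1, so the probability of leaving the
    edge u->v is conserved. Both are 2M x 2M, one row/column per directed edge.
    """
    n = A.shape[0]
    deg = A.sum(axis=1)
    edges = [(u, v) for u in range(n) for v in range(n) if A[u, v]]
    idx = {e: i for i, e in enumerate(edges)}

    B = np.zeros((len(edges), len(edges)))
    F = np.zeros((len(edges), len(edges)))
    for (u, v) in edges:
        for x in np.flatnonzero(A[v]):
            if x != u:                         # the walker may not return immediately
                B[idx[(u, v)], idx[(v, x)]] = 1.0
                F[idx[(u, v)], idx[(v, x)]] = 1.0 / (deg[v] - 1)
    return B, F, edges
```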
|
newton 's law of gravitation is one of the fundamental laws in the universe that holds everything together . although formulated in the 17th century , scientists today still study its consequences , in particular those of many - body systems , like the solar system , star clusters and the milky way galaxy . general analytic solutions to the n - body problem only exist for configurations with one mass , commonly referred to as n = 1 solutions , and for two masses ( equivalently named n = 2 ) . problems for n \to \infty can be reduced via liouville 's theorem for hamiltonian systems to the collisionless boltzmann equation , and therefore analytic solutions for the global distribution function exist . solutions for n in between these two limits are generally realized by computer simulations . these so - called n - body simulations have a major shortcoming in that the solution to any initial realization can only be approximated . the main limiting factors in numerically obtaining a true solution include errors due to round - off and approximations both in the integration and in the time - step strategy . these generally small errors are magnified by the exponentially sensitive dependence on the 6n - dimensional phase - space coordinates , position and velocity . as a consequence , the solution for a numerically integrated self - gravitating system of n masses diverges from the true solution . this error can be controlled to some degree by selecting a phase - space volume preserving or a symplectic algorithm and by reducing the integration time step . the latter , however , can not be reduced indefinitely due to the accumulation of numerical round - off in the mantissa , which is generally limited to 53 bits ( 64 bits in total , but 11 bits are reserved for the exponent , resulting in only about 15 significant digits ) . the exponential divergence subsequently causes this small error to propagate to the entire system on a dynamical time scale , which is the time scale for a particle to cross the system once . the result of these errors , together with the exponential divergence , is the loss of predicting power for a numerical solution to a self - gravitating system with n \geq 3 after a dynamical time scale . one can subsequently question the predicting qualities of n - body simulations for self - gravitating systems , and therewith their usefulness as a scientific instrument . we address this question for n = 3 by brute - force numerical integration to arbitrary precision . the choice of n = 3 is motivated by the realization that this represents the first fundamental irregular configuration with the smallest possible number of objects that can not be solved analytically and can not be addressed with collisionless theory . in addition , 3-body encounters form a fundamental and frequently occurring topology in any large n - body simulation , and therefore also drive the global dynamics of these larger systems . the divergence between two different , approximate solutions a and b to the n - body problem can be quantified by the phase - space distance \delta in the positions and velocities of the particles ( in dimension - less n - body units ) :
\[ \delta^{2 } = \frac{1}{6n } \sum_{i=1}^{n } \left [ \left ( \mathbf{r}_{i , a } - \mathbf{r}_{i , b } \right)^{2 } + \left ( \mathbf{v}_{i , a } - \mathbf{v}_{i , b } \right)^{2 } \right ] . \]
values of \delta are obtained by comparing the configurations from solution a and solution b at any moment in time . each star has a position and velocity in solution a and ( generally ) a different position and velocity in solution b ; for each star we calculate its phase - space distance between the two solutions .
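a minimal sketch of the phase - space distance and of the convergence test discussed here is given below ; array layouts and function names are assumptions .

```python
import numpy as np

def phase_space_distance(pos_a, vel_a, pos_b, vel_b):
    """Phase-space distance delta between two solutions of the same realization.

    pos_*, vel_* : (N, 3) arrays in dimensionless N-body units.
    Sums the squared coordinate differences over all particles, divides by 6N
    and takes the square root, following the definition above.
    """
    N = pos_a.shape[0]
    d2 = np.sum((pos_a - pos_b) ** 2) + np.sum((vel_a - vel_b) ** 2)
    return np.sqrt(d2 / (6.0 * N))

def converged_to(deltas, p):
    """True if delta stayed below 10^-p at all sampled times (p converged digits)."""
    return bool(np.all(np.asarray(deltas) < 10.0 ** (-p)))
```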
by dividing by , can be thought of as the average difference per coordinate .the two different runs can be performed either with the same code at a different precision , or with two different codes , all having exactly the same initial realization .a value of indicates that the results of the two simulations have diverged beyond recognition .we consider a solution to be converged to decimal places when , for any time , .( in stable hierarchical few - body systems the value of can vary substantially across the orbital phase , and one has to assure that temporary large deviations can diminish again at a later instant . ) to investigate the build - up of numerical errors and the corresponding exponential divergence , we developed an -body solver for self - gravitating systems which solves the n - body problem to arbitrary precision .this code , named brutus ( boekholt & portegies zwart , in preparation ) , is composed of a bulirsch - stoer integrator , that preserves energy to the level of the bulirsch - stoer tolerance . this tolerance is a parameter which can be interpreted as the discretization error per integration step .the round - off error is controlled by choosing the word - length with which all floating point numbers in the computer code are represented . by decreasing the bulirsch - stoer tolerance and increasing the word - length, we can obtain solutions to the n - body problem to arbitrary precision .we tested brutus by adopting a 3-body system of identical particles , which are located on the vertices of an equilateral triangle , with initial velocities such that the orbits are on a circle around the center of mass . because this system is intrinsically unstable , small perturbations in the position and velocity vectors cause the triangular configuration to dissolve quickly .the time at which this happens depends on precision . using brutus we can reach arbitrary precision , but in this validation experiment we stopped reducing the time step and increasing the word length once the energy was conserved up - to 75 decimal places , which is sufficient to demonstrate our point . for any pre - determined time of stability there is a combination of word - length and bulirsch - stoer tolerance for which brutus converges .we define a solution to be converged when the first decimal places become independent of the size of the time step and the word length .this is equivalent to saying that is always below ; for ( at least the first 3 digits have converged ) , then at all times .having established the possibility of integrating a self - gravitating -body system to arbitrary precision we can study the reliability of n - body simulations in general .we limit ourselves to the problem of 3 bodies , by generating a database of different 3-body problems and solve them until a converged solution is achieved .the positions of the particles are taken randomly from a distribution and are either cold ( zero kinetic energy ) or virialized . 
in the cold case we assured ourselves that the mutual distances between the particles are initially comparable ( within an order of magnitude ) .we performed runs with identical masses and with the masses in a ratio of 1:2:4 .for each of the four selected ensembles of initial conditions we generated 10 random realizations .the masses and coordinates for these systems are specified in standard double precision , to assure that the double - precision calculations use exactly the same initial realizations as the arbitrary - precision calculations .every initial condition is integrated using the leapfrog - verlet ( , we adopted the implementation available in http://nbabel.org ) and the 4th - order hermite predictor - corrector scheme in a code called ph4 .( both codes , brutus and ph4 are assimilated in the public amuse framework which is available at http://amusecode.org , ) .the integration continues until the system has been dissolved into a permanent binary and a single escaper .dissolution is declared upon the first integral dynamical time upon which one particle is unbound , outside a sphere of 2 initial virial radii around the barycenter , and receding from the center of mass .a particle is considered unbound if its kinetic energy in the center of mass reference frame exceeds the absolute value of its potential energy , which is more strict than adopted in . for a fraction of the simulations( see fig.[fig : fdissolved ] ) , the dissolution time turns out to be very long as the evolution consists of a sequence of ejections where a particle almost escapes , but then still returns to once again enter a 3-body resonance .we therefore put a constraint on the integration time and use the fraction of long - lived systems as a measurable statistic .we obtain ensembles of solutions using the hermite and leapfrog integrators with a time - step parameter .here we adopted the definition for given by .we subsequently recalculate each of these initial realizations with brutus using the same tolerance . in subsequent calculationwe systematically reduce the time - step size and increase the word - length until we obtain a converged solution ( as we discussed in [ sect : validation ] for ) for every realization of the initial conditions .this converged solution is then compared to the earlier simulations performed with the hermite and leapfrog integrators .we now have three solutions for each initial realization of the 3-body problem , one of which is the converged solution .we compare the three solutions for the time of dissolution , the semi - major axis ( or equivalently the reciprocal of the orbital energy ) of the surviving binary , its eccentricity ( equivalent to the angular momentum ) and the escaper s velocity and direction . in fig.[fig :timescatter ] we individually compare the time to dissolution for a certain initial realization as given by the hermite integrator and the converged solution as given by brutus .about half of the individual hermite solutions lie along the diagonal representing the accurate solutions .the other half is scattered around the diagonal .these solutions have diverged away from the converged solution , producing a binary and an escaper with completely different properties . 
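the dissolution criterion described above can be checked per snapshot as in the following sketch ; the units ( g = 1 , dimensionless n - body units ) , the initial virial radius argument and all names are assumptions .

```python
import numpy as np

def dissolved(pos, vel, mass, r_vir0, G=1.0):
    """Dissolution test for a (3-body) system at one snapshot.

    A particle triggers dissolution if it is (i) unbound -- its kinetic energy
    in the centre-of-mass frame exceeds the absolute value of its potential
    energy, (ii) farther than two initial virial radii (r_vir0) from the
    barycentre, and (iii) receding from it.
    Returns (True, index of the escaper) or (False, None).
    """
    m = np.asarray(mass, dtype=float)
    com = (m[:, None] * pos).sum(axis=0) / m.sum()
    vcom = (m[:, None] * vel).sum(axis=0) / m.sum()
    r, v = pos - com, vel - vcom

    for i in range(len(m)):
        ke = 0.5 * m[i] * np.dot(v[i], v[i])
        pe = sum(-G * m[i] * m[j] / np.linalg.norm(pos[i] - pos[j])
                 for j in range(len(m)) if j != i)
        if (ke > abs(pe)
                and np.linalg.norm(r[i]) > 2.0 * r_vir0
                and np.dot(r[i], v[i]) > 0.0):
            return True, i
    return False, None
```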
for dissolutions within dynamical times, there is insufficient time for the solution to diverge and the results of the various numerical methods are consistent .but once the hermite or leapfrog solutions have diverged away from the converged solution the entire parameter space of the numerical experiment is sampled .a similar statement holds when instead of comparing the dissolution time , we compare the properties of the binaries or the escapers . ) is on the ordinate and the converged value given by brutus on the abscissa .about 50 of the data points lie on the diagonal which represents the cases for which hermite and brutus gave very similar results .the scatter around the diagonal is symmetric . for very short dissolution times ( dynamical times ), there is insufficient time to grow errors and the results are in agreement .once the divergence becomes important the hermite integrator can return any value allowed in the experiment irrespective of the converged dissolution time .[ fig : timescatter ] ] in fig.[fig : cumulativeerror ] we present the cumulative distribution function of the difference between the time to dissolution of the hermite and brutus calculations : for three different values of , and .the differences for are symmetric around the origin with a dispersion of -body time units , but for it is not symmetric .the distributions in the differences in semi - major axis , eccentricity and the direction of the escaper ( polar and azimuthal angle with respect to the binary plane ) at the time we stop the experiment for down to are symmetric with respect to the origin .the global distributions are statistically indistinguishable via a kolmogorov - smirnov test .we empirically determine that for a value of the time - step parameter the majority of the ensemble conserves energy to better than 1/10 .( solid ) , ( dashes ) and ( dotted curve ) .the black curves give the distribution where the interaction calculated with hermite lasted longer , and the red curves give the absolute value for the cases where converged solutions gave the longest lifetime for the interaction . each ( black - red ) pair of curves for is statistically indistinguishable , and the mean difference is centered around the origin .[ fig : cumulativeerror ] ] in fig.[fig : fdissolved ] we present the fraction of undissolved systems in time .the colored symbols give the converged solutions , whereas the curves give the results obtained using the hermite integrator .the two solutions for each ensemble of initial realizations for ( as well as those obtained with the leapfrog integrator , not shown ) are statistically indistinguishable after comparing realizations of the initial conditions .the distributions obtained using are not symmetric .the duration of stability was studied as a function of accuracy by using the sitnikov problem .they found that the remaining time for the system to stay bound depends on the integration accuracy .our simulations did not reveal this effect , because we study systems that dissolve on a much shorter time - scale . .the virialized plummer sphere with identical masses is represented by the black bullets , and with the range in masses as red triangles .the blue squares and green stars give the results for the cold plummer distribution without and with different masses , respectively .the results of the runs with hermite are statistically indistinguishable from those with brutus . 
up - to of the four curves can be approximated with exponential decay with a half - life time of 95 , 63 , 42 and 26 crossing times ( from top to bottom ) .[ fig : fdissolved ] ]the properties of the binary and the escaper of a three - body system can be described in a statistical way .this is consistent with the findings in previous analytic and numerical studies .this behavior was named quasi - ergodicity by .we confirm that this behavior remains valid also for converged 3-body solutions . based on the symmetry of the distribution in dissolution times ( see fig.[fig : cumulativeerror ] ), the final parameters of the binary and escaper , as well as the consistency of the mean and median values of the inaccurate simulations when compared to the converged solution ( see fig.[fig : cumulativeerror ] and fig.[fig : fdissolved ] ) we argue that global statistical distributions are preserved irrespective of the precision of the calculation as long as energy is preserved to better than 1/10th of the initial energy of the system .although we have tested only three algorithms for solving the equations of motion we conjecture that the statistical consistency may be preserved also for some other direct methods , and these may also require that energy and angular momentum are preserved to . if such direct -body methods comply to the same statistical behavior for collision - less ( ) systems , it will be interesting to investigate how also other non- algorithms , like the hierarchical tree - method or particle - mesh methods behave in this respect . in studies of self - gravitating systems which adopt the 4th order hermite integrator , energy and angular momentumare generally conserved up to per dynamical time . only those simulations in which this requirement is metare often considered reliable and suitable for scientific interpretation .proof for this seemingly conservative choice has never been provided , and it is unknown whether or not the numerical error and the exponential divergence are not preventing certain parts in parameter space to be accessed , or new physically inaccessible parts in parameters space being explored .we argued that for the resonant 3-body problem the error made during the integration of the equations of motion poses no problem for obtaining scientifically meaningful results so long as energy is conserved to better than about one - tenth of the initial total energy of the system . in that caseresonant 3-body interactions should be treated as an ensemble average , and individual results only contribute statistically . 
by means of numerical integration until a converged solution is obtained we find that the statistical properties of the binary and the escaper resulting from a 3-body resonant encounter are deterministic .this behavior is not guaranteed to propagate to larger ( see also ) ; requires independent testing , because these introduce more complex solutions in the form of , for example binary - binary outcomes and hierarchical triples .the more extended parameter space for increasing from 3 to is quite dramatic , in particular for solving the system until a converged solution is reached .* acknowledgements * we would like to thank douglas heggie for many in - depth discussions , but also alice quillen , piet hut , jun makino , steve mcmillan , vincent icke and inti pelupessy for discussions and comments on the manuscript , as well as the anonymous referee for careful reading and detailed comments .this work was supported by the netherlands research council nwo ( grants # 643.200.503 , # 639.073.803 and # 614.061.608 ) and by the netherlands research school for astronomy ( nova ) .part of the numerical computations were carried out on the little green machine at leiden university and on the lisa cluster at surfsara in amsterdam ., h. e. 1996 , in r. t. jantzen , g. mac keiser , r. ruffini ( eds . ) , proceedings of the seventh marcel grossman meeting on recent developments in theoretical and experimental general relativity , gravitation , and relativistic field theories , p. 167 , s. , portegies zwart , s. , van elteren , a. , whitehead , a. 2012 , in r. capuzzo - dolcetta , m. limongi , a. tornamb ( eds . ) , advances in computational astrophysics : methods , tools , and outcome , vol .453 of _ astronomical society of the pacific conference series _ , 129 , m. , myllri , a. , orlov , v. , rubinov , a. 2004 , in g. g. byrd , k. v. kholshevnikov , a. a. myllri , i. i. nikiforov , v. v. orlov ( eds . ) , order and chaos in stellar and planetary systems , vol .316 of _ astronomical society of the pacific conference series _
|
the conservation of energy , linear momentum and angular momentum are important drivers for our physical understanding of the evolution of the universe . these quantities are also conserved in newton s laws of motion under gravity . numerical integration of the associated equations of motion is extremely challenging , in particular due to the steady growth of numerical errors ( by round - off and discrete time - stepping , ) and the exponential divergence between two nearby solution . as a result , numerical solutions to the general n - body problem are intrinsically questionable . using brute force integrations to arbitrary numerical precision we demonstrate empirically that ensembles of different realizations of resonant 3-body interactions produce statistically indistinguishable results . although individual solutions using common integration methods are notoriously unreliable , we conjecture that an ensemble of approximate 3-body solutions accurately represents an ensemble of true solutions , so long as the energy during integration is conserved to better than 1/10 . we therefore provide an independent confirmation that previous work on self - gravitating systems can actually be trusted , irrespective of the intrinsic chaotic nature of the n - body problem .
|
connections between partial differential equations ( pde ) and stochastic processes are always heated topics in modern mathematics and provide powerful tools for both probability theory and analysis , especially for pde of elliptic and parabolic type . in the past few decades , their numerical applications have also burgeoned with a lot of developments , such as the ensemble monte carlo method for the boltzmann transport equation , the random walk method for the laplace equation and the diffusion monte carlo method for the schrdinger equation . in particular , the diffusion monte carlo method allows us to go beyond the mean - field approximation and offer a reliable ground state solution to quantum many - body systems . in this work ,we focus on the probabilistic approach to the equivalent phase space formalism of quantum mechanics , namely , the wigner function approach , which bears a close analogy to classical mechanics . in recent years, the wigner equation has been drawing growing attention and widely used in nanoelectronics , non - equilibrium statistical mechanics , quantum optics , and many - body quantum systems .actually , a branch of experiment physics in the community of quantum tomography are devoting to reconstructing the wigner function from measurements .moreover , the intriguing mathematical structure of the weyl - wigner correspondence has also been employed in the deformation quantization .in contrast to its great theoretical advantages , the wigner equation is extremely difficult to be solved because of the high dimensionality of the phase space as well as the highly oscillating structure of the wigner function due to the spatial coherence .although several efficient deterministic solvers , e.g. , the conservative spectral element method ( sem) and the third - order advective - spectral - mixed scheme ( asm) , have enabled an accurate transient simulation in 2d and 4d phase space , they are still restricted by the limitation of data storage and increasing computational complexity .one possible approach to solving the higher dimensional problems is the wigner monte carlo ( wmc ) method , which displays convergence ( is the number of samples ) , regardless of the dimensionality , and scales much better on the parallel computing platform .the proposed work is motivated by a recently developed stochastic method , termed the signed particle wigner monte carlo method ( spwmc) .this method utilizes the branching of signed particles to capture the quantum coherence , and the numerical accuracy has been validated in 2d situations .very recently , it has been also validated theoretically by exploiting the connection between a piecewise - deterministic markov process and the weak formulation of the wigner equation . in this work, we use an alternative approach to constructing the mathematical framework for spwmc from the viewpoint of computational mathematics , say , we focus on the probabilistic interpretation of the mild solution of the ( truncated ) wigner equation and its adjoint correspondence .in particular , we would like to stress that the resulting stochastic model , the importance sampling and the resampling are three cornerstones of a computable scheme for simulating the many - body wigner quantum dynamics . 
our first purpose is to explore the inherent relation between the wigner equation and a stochastic branching random walk model , as sketched by the diagram below .{\textup{integral form } } \boxed{\small \text{renewal - type equation } } \xleftarrow{\textup{moment } } \boxed{\small \text{branching random walk}}\ ] ] with an auxiliary function , we can cast the wigner equation ( as well as its adjoint equation ) into a renewal - type integral equation and prove that its solution is equivalent to the first moment of a stochastic branching random walk . in this manner, we arrive at the stochastic interpretation of the wigner quantum dynamics , termed the _ wigner branching random walk _( wbrw ) in this paper .in particular , the -truncated wbrw method recovers the popular spwmc method which needs a discretization of the momentum space beforehand .although the probabilistic interpretation of the wigner equation naturally gives rises to a statistical method , in practice we have encountered two major problems .first , such numerical method is point - wise in nature and not very efficient in general unless we are only interested in the solution at specified points .second , the number of particles in a branching system will grow exponentially in time , indicating that the complexity increases dramatically for a long - time simulations .thus , our second purpose is to discuss how to overcome these two obstacles . as for the first, we introduce the dual system of the wigner equation and derive an equivalent form of the inner product problem , which allows us to draw weighted samples according to the initial wigner distribution . besides , by exploiting the principle of importance sampling , we can give a sound interpretation to several fundamental concepts in spwmc , such as particle sign and particle weight .for the second problem , we firstly derive the exact growth rate of branched particles , which reads in time , with pairs of potentials and a constant auxiliary function and then illustrate the basic idea of resampling to control the particle number within a reasonable size . roughly speaking, we make a histogram through the weighted particles and resample from it at the next step .such a self - consistent scheme allows us to evolve the wigner quantum dynamics in a time - marching manner and choose appropriate resampling frequencies to control the computational complexity .the rest of this paper is organized as follows .section [ sec : wigner ] reviews briefly the wigner formalism of quantum mechanics . from both theoretical and numerical aspects , it is more convenient to discuss the truncated wigner equation , instead of the wigner equation itself .thus in section [ sec : twigner ] , we illustrate two typical ways to truncate the wigner equation , termed the -truncated and the -truncated models . section [ sec : integral_form ] manifests the equivalence between the -truncated wigner model and a renewal - type integral equation , where an auxiliary function is used to introduce a probability measure .besides , the set of adjoint equation renders an equivalent representation of the inner product problem .we will show that such representation , as well as the importance sampling , plays a vital role in wmc and serves as the motivation of wbrw . 
in section [ sec :branching ] , we will prove that the first moment of a branching random walk is exactly the solution of the adjoint equation .this probabilistic approach not only validates the branching process treatment , but also allows us to study the mass conservation and exponential growth of particle number rigorously .after theoretical analysis , we turn to discuss the main idea of the resampling procedure and present the numerical challenges in high dimensional problems .section [ sec : num_res ] investigates the performance of wbrw by employing sem or asm as the reference .the paper is concluded in section [ sec : con ] .in this section , we briefly review the wigner representation of quantum mechanics .the wigner function living in the phase space for position and wavevector , is defined by the weyl - wigner transform of the density matrix where gives the probability of occupying the -th state , denotes the degree of freedom ( 2 number ) .although _ it possibly has negative values _ , the wigner function serves the role as a density function due to the following properties * is a real function .* . * the average of a quantum operator can be written in a form with the corresponding classical function in phase space .in particular , we can define the _ wigner ( quasi- ) probability _ on a domain by taking to derive the dynamics of the wigner function , we evaluate its first derivative through the schrdinger equation ( or the quantum liouville equation ) combine with the fourier completeness relation and then obtain the wigner equation (\bm{x } , \bm{k } , t ) , \label{eq.wigner}\ ] ] where (\bm{x } , \bm{k } , t ) & = \int_{\mathbb{r}^{d } } \textup{d } \bm{\bm{k}^{\prime } } f(\bm{x},\bm{k}^{\prime},t)v_{w}(\bm{x},\bm{k}-\bm{k}^{\prime},t ) , \label{eq : pd_operator}\\ v_{w}(\bm{x},\bm{k},t)&=\frac{1}{\mi\hbar ( 2\pi)^{d}}\int_{\mathbb{r}^{d } } \text{d}\bm{y } \me^{-\mi\bm{k}\cdot \bm{y}}d_{v}(\bm{x } , \bm{y } , t ) , \label{wigner_kernel } \\ d_{v}(\bm{x } , \bm{y } , t)&=v(\bm{x}+\frac{\bm{y}}{2 } , t)-v(\bm{x}-\frac{\bm{y}}{2 } , t ) .\label{dv}\end{aligned}\ ] ] here the nonlocal pseudo - differential term (\bm{x } , \bm{k } , t) ] ( ) and a simple nullification can be adopted outside , that yields the -truncated wigner equation with the truncated wigner kernel where the rectangular function is given by the truncated wigner kernel in eq .is used only in the case that the close form of is not available . according to eq ., it deserves to be mentioned that only a restriction of the wigner kernel on a finite bandwidth \times [ -2l_{2 } , 2l_{2}]\cdots \times [ -2l_{d } , 2l_{d}] ] , and define the truncated pseudo - differential operator as (\bm{x } , \bm{k } , t)=\frac{1}{\mi \hbar}\int_{\mathcal{y } } { \textup{d}}\bm{y } d_{v}(\bm{x},\bm{y } , t)\widehat{f}(\bm{x } , \bm{y } , t ) \me^{-\mi \bm{k}\cdot \bm{y}}.\ ] ] with the assumption that it decays at , we can evaluate at a finite bandwidth through the poisson summation formula , ~~ \bm{y}\in \mathcal{y},\ ] ] where with being the spacing , , .substituting eq . into eq. 
leads to (\bm{x } , \bm{k } , t ) & \approx \sum_{\bm{m } \in \mathbb{z}^{d } } f(\bm{x } , \bm{m } \delta \bm{k } , t)\tilde{v}_{w}(\bm{x } , \bm{k}-\bm{m}\delta \bm{k } , t ) , \label{eq : pd_operator_sum}\\ \tilde{v}_{w}(\bm{x } , \bm{k } , t)&=\frac{1}{\mi \hbar}\frac{1}{\left|\mathcal{y}\right| } \int_{\mathcal{y}}{\textup{d}}\bm{y } ~d_{v}(\bm{x } , \bm{y } , t ) \me^{-\mi \bm{y } \cdot \bm{k}}.\label{eq : vwtilde}\end{aligned}\ ] ] here we have let and used the constraint which serves as the sufficient and necessary condition to establish the semi - discrete mass conversation suppose the wigner function at discrete samples are wanted , then we immediately arrive at the -truncated ( or semi - discrete ) wigner equation which indeed provides a straightforward way for stochastic simulations as used in the spwmc method , and possesses the following properties .* the modified wigner kernel in eq . can be treated as the fourier coefficients of ( with a periodic extension ) , and can be recovered by the inverse fourier transform ( possibly by the inverse fast fourier transform ) . *the continuous convolution is now replaced by a discrete convolution ( see eqs . and ) , so that the sampling in discrete -space can be simply realized in virtue of the cumulative distribution function .* the set of equidistant sampling in -space facilitates the data storage and the code implementation .although both truncated models approximate the original problem in some extent , their range of applicability is different .in fact , the modified wigner potential in -truncated model is not a trivial approximation to the original wigner potential ( one can refer to the difference between the fourier coefficients and continuous fourier transformation ) .the convergence is only valid when , or the potential decays rapidly at the boundary of the finite domain ( but this condition is not satisfied for , e.g. , the coulomb - like potential , especially for the coulomb interaction between two particles ) .by contrast , the -truncated model is based on relatively milder assumption , and it is not necessary to change the definition of the wigner kernel unless the poisson summation formula is used .thus we would like to stress that _ the -truncated wigner equation is more appropriate for simulating many - body quantum systems _ and thus adopted hereafter .in order to establish the connection between the deterministic partial integro - differential equation and a stochastic process , we need to cast the deterministic equation into a renewal - type integral equation . for this purpose ,the first crucial step is to introduce an exponential distribution in its integral formulation via _ an auxiliary function . the second one is to split the wigner kernel into several positive parts , such that each part can be endowed with a probabilistic interpretation .more importantly , to make the resulting branching random walk computable , we derive the adjoint equation of the wigner equation and obtain an equivalent representation of the inner product , which explicitly depends on the initial wigner distribution .therefore , it provides a much more efficient way to draw samples on the phase space , and naturally gives rise to several important features of spwmc , such as the particle sign and particle weight .the first step is to cast eq . into a renewal - type equation . to this end, we can introduce an auxiliary function and add the term in both sides of eq . , yielding . 
\label{eq.wigner_equiv } \end{split}\ ] ] at this stage , we only consider a nonnegative bounded , though a time - dependent can be also introduced if necessary and analyzed in a similar way .in particular , _ we strongly recommend the readers to choose a constant _ in real applications , for the convenience of both theoretical analysis and numerical computation ( vide post ) .formally , we can write down its integral formulation through the variation - of - constant formula f(\bm{x } , \bm{k } , t^{\prime } ) \textup{d}t^{\prime } , \end{split}\ ] ] where denotes the semigroup generated by the operator and is the convolution operator which is assumed to be a bounded operator throughout this work .when is bounded , it only imposes a lyapunov perturbation on a hyperbolic system , so that the operator is also a -semigroup . to further determine how the operator acts on a given function ) ] .for instance , to evaluate the average value at a given final time , we should take then the main goal of this section is to give the explicit formulation of the adjoint equation , starting from eq . and eq . . for brevity , we will assume that the potential is time - independent , and thus the kernels becomes \delta(\bm{x}-\bm{x}^{\prime}).\end{aligned}\ ] ] suppose the kernel is bounded , then it is easy to verify that is a bounded linear operator .accordingly , we can define the adjoint operator by applying theorem 4.6 in directly into the fredholm integral equation of the second kind yields formally , it suffices to define since we have namely furthermore , we perform the coordinate conversion with which the jacobian determinant implying the volume unit keeps unchanged ( i.e. , ) , and thus eq . becomes where we have introduced a _forward - in - time trajectory _ ( in contrast to the backward - in - time trajectory given in eq . ) as follows with being the time increment .actually , eq . motivates us to combine the exponential factor with and define a new function as please keep in mind that , it is required for convenience in the definition , before which is always assumed , for example , see eq . .consequently , from eq . , the inner product can be determined _ only by the ` initial ' data _ , as stated in the following theorem .the average value of a macroscopic quantity at a given final time can be evaluated by where is defined in eq . .according to eq ., in order to evaluate , the remaining task is to calculate . to this end, we need first to obtain the expression of from the dual system . replacing by and performing the coordinate conversion , , in eq .yield where the trajectory reads which is _ not _ the same as given in eq .since the underlying wavevectors and are different ! substituting eq . into eq. leads to where from the dual system , we can also obtain a similar expression to eq . for , and then corresponding time integration with respect to in eq .becomes where combining eq . andthe definition directly gives the adjoint equation for as stated in theorem [ th : adjoint ] .[ th : adjoint ] the function defined in eq .satisfies the following integral equation we call eq . the adjoint equation of eq . is mainly because and constitute a dual system in the bilinear form ( denoted by ) for determining . combining the formal solution of the wigner equation as well as eqs . and directly yields in consequence ,we formally obtain with being the adjoint operator of , indicating that eq . , in some sense , can be treated as an inverse problem of eq . 
, which produces a quantity from the observation at the ending time .moreover , for given on , we can similarly introduce a probability measure with respect to like because of under the assumption that the auxiliary function satisfies substituting the measure into eq . also yields a renewal - type equation a(\bm{r}(t - t ) , \bm{k})\\ & + \int_{t}^{t } { \textup{d}}\mathcal{g}(t^{\prime } ; \bm{r } , t ) \int_{\mathbb{r}^d } { \textup{d}}\bm{r}^{\prime } \int_{\mathcal{k } } { \textup{d}}\bm{k}^{\prime } \frac{\gamma(\bm{r}^{\prime } , \bm{k}^{\prime } ; \bm{r}(t^{\prime}-t ) , \bm{k})}{\gamma(\bm{r}(t^{\prime}-t ) ) } \varphi(\bm{r}^{\prime } , \bm{k}^{\prime } , t^{\prime } ) .\end{split}\ ] ] before launching into the details of probabilistic interpretation , we would like to emphasize the central role of eq . in computation .hereto we have shown in eq . that the average can be evaluated by sampling from the initial wigner function and solving the adjoint equation for , instead of the direct calculation based on as shown in eq . .actually , the bilinear form for determining serves as the foundation of wbrw in which the importance sampling plays a key role . regarding of the fact that the wigner function may take negative value , we have to introduce an _ instrumental probability distribution _ follows where is the normalizing factor ( we assume ) now the inner product problem can be evaluated through the importance sampling .owing to the markovian property of the linear evolution system , it suffices to divide the time interval ] according to the following rules . without loss of generality , the particle ,starting at time at state , having a random life - length and carrying a weight , is marked .the chosen initial data corresponds to those adopted in the renewal - type equation .rule 1 : : the motion of each particle is described by a right continuous markov process .rule 2 : : the particle at dies in the age time interval with probability , which depends on its position and the time ( see eq . ) . in particular , when using the constant auxiliary function , the particle dies during time interval with probability for small , which is totally independent of both its position and age . rule 3 : : if , the particle dies at age at state , and produces new particles at states , , , , endowed with updated weights , , , , respectively .all these parameters can be determined by the kernel function in eq .: and thus for where the function is the normalizing factor for both and , i.e. , because of the mass conservation , and rule 4 : : if , say , the life - length of the particle exceeds , so it will immigrate to the state and be frozen .this rule corresponds to the first right - hand - side term of eq . , andthe related probability is rule 5 : : the only interaction between the particles is that the birth time and state of offsprings coincide with the death time and state of their parent . in eqs . and, we require that and can be normalized , which is not necessarily true for . 
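with a constant auxiliary function, rules 2 and 4 above reduce to an exponential death clock of rate gamma0 together with freezing at the final time, while the offspring states of rule 3 depend on the problem-specific kernels. the sketch below is therefore only the skeleton of one particle life: `sample_offspring` is a hypothetical placeholder, the three-offspring count follows the rule quoted later in the text, and the free flight at velocity hbar*k/m reflects the standard wigner transport term.

```python
import numpy as np

rng = np.random.default_rng(2)
hbar, mass, gamma0 = 1.0, 1.0, 0.5

def sample_offspring(x, k):
    """Placeholder for Rule 3: in the real scheme the new wavevectors are drawn
    from the normalized positive parts of the Wigner kernel at position x."""
    return [(x, k + rng.normal()), (x, k - rng.normal()), (x, k)]

def advance(x, k, t_now, t_final):
    """One life of a particle: free flight plus an Exp(gamma0) death clock
    (Rule 2); freeze at t_final (Rule 4) or die and branch (Rule 3)."""
    life = rng.exponential(1.0 / gamma0)
    if t_now + life >= t_final:                       # frozen at the final time
        return [(x + hbar*k/mass*(t_final - t_now), k)], []
    x_death = x + hbar*k/mass*life                    # dies and produces offspring
    return [], [(xo, ko, t_now + life) for xo, ko in sample_offspring(x_death, k)]
```

the first returned list holds frozen states, the second the offspring records (state plus birth time, which coincides with the parent's death time as in rule 5).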
nevertheless , when the convolution operator is bounded and decays in -space sufficiently rapidly , we can approximate the solution of the wigner equation by the -truncated correspondence because of the boundedness of the semigroup and thus it suffices to restrict our discussion on the -truncated wigner equation and its adjoint counterpart .now we present the main result .let be the index set of all frozen particles with the same ancestor initially at time at state carrying the weight , denote the collection of corresponding frozen states , and the updated weight of the -th particle .accordingly , the adjoint equation is solved by where means the expectation with respect to the probability law defined by the above five rules ( its definition is left in section [ sec : branching : analysis ] ) .furthermore , from eq . , the quantity is solved by ,\ ] ] where means the expectation with respect to .the proof of eqs . andis left for section [ sec : branching : analysis ] . in the theory of branching process ,all the moments of an age - dependent branching processes satisfy renewal - type integral equations , where the term `` age - dependent '' means the probability that a particle , living at , dies at might not be a constant function of . in this regard, it suffices to define a stochastic branching markov process ( continuous in time parameter ) , corresponding to the branching particle system as described earlier .the random variable of a branching particle system is the family history , a denumerable random sequence corresponding to a unique family tree .firstly , we need a sequence to identify the objects in a family . beginning with an ancestor , denoted by , and we can denote its -th children by . similarly , we can denote the -th child of -the child by , and thus means -th child of -th child of of the -child of the -th child , with .the ancestor is omitted here and hereafter for brevity .our branching particle system involves three basic elements : the position ( or ) , the wavevector and the life - length , and each particle will either immigrate to , then be killed and produce three offsprings , or be frozen when hitting the first exit time .now we can give the definition of a family history , starting from one particle at age at state .in the subsequent discussion we let . a family history stands for a random sequence where the tuple appears in a definite order of enumeration . , , denote the life - length , starting position and wavevector of the -th particle , respectively .the exact order of is immaterial but is supposed to be fixed .the collection of all family histories is denoted by . at this stage , the initial time and the initial state of the ancestor particle assumed to be non - stochastic . for each ,the subfamily is the family history of and its descendants , defined by .the collection of is denoted by .equivalently , we can also use the time parameter to identity the path of a particle and all its ancestors in the family history , and denote its state ( i.e. , starting position and wavevector ) at time . taking the particle as an example , we have with , with and , with and , . to characterize the freezing behavior of the particle, we denote the first exit time by , as boundary conditions are not specified .[ def : frozen ] suppose the family history starts at time . then a particle is said to be frozen at if the following conditions hold in particular , when , the ancestor particle is frozen . 
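the family-history bookkeeping (each particle identified by its descent sequence, with life-length, starting position and wavevector) maps naturally onto a small record type. the sketch below only illustrates the data layout; the field names are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Particle:
    label: tuple        # descent sequence: () for the ancestor, (2, 1) for the
                        # first child of the ancestor's second child, and so on
    x: float            # starting position
    k: float            # starting wavevector
    birth: float        # birth time (= parent's death time)
    life: float         # sampled life-length
    weight: float = 1.0 # signed weight carried by the particle
    frozen: bool = False

@dataclass
class FamilyHistory:
    members: list = field(default_factory=list)

    def subfamily(self, prefix):
        """The particle labelled `prefix` together with all its descendants."""
        return [p for p in self.members if p.label[:len(prefix)] == prefix]

    def frozen_particles(self):
        return [p for p in self.members if p.frozen]
```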
sometimes the particle is also called alive in the time interval ] and any function in }) ] , and should be finite ( see theorem [ th : finite ] ) .this further means that is finite almost surely . as the easier case, the finiteness of for the constant auxiliary function is directly implied from theorem 13.1 and its corollary of chapter vi in .[ th : finite ] suppose the family history starts at time at state , and ends at . then and as a consequence . .we define a random function when the particle appears in the family history , otherwise . from eq . anddefinition [ def : frozen ] , we have let that corresponds to the number of particles born up to the final time .it is obvious that for constant , we introduce an exponential distribution and define its -th convolution by it can be readily verified that , ~ \forall \,\bm{r}\in\mathbb{r}^d , ~ \forall \ , u\in [ 0,t],\ ] ] holds for a sufficiently large integer , e.g. , .we first show by the mathematical induction that there exists a sufficient large integer and a sufficient large constant such that .\ ] ] for , we only need and then have assume eq .is true for .direct calculation shows which implies that eq. holds for , where is short for . finally , using eqs .and yields and implies is bounded for the infinite series is convergent ( see lemma 1 of the appendix to chapter vi in ) . hence the proof is completed according to eq . .moreover , according to definition [ eq : exit ] and theorem [ th : finite ] , we can directly show that is integrable , say , provided that is essentially bounded .that is , both in eq . and in eq .are well defined . with the above preparations , we begin to prove eqs . and .[ th : psi ] the first moment defined in eq .equals to the solution of the adjoint equation .let correspond to the case in which the particle travels to and then is frozen .the probability of such event is by rule 4. then the remaining case is denoted by .accordingly , from eq . , we have and direct calculation gives which recovers the first right - hand - side term of eq . .when event occurs , it indicates that offsprings are generated . notice that and thus we have where we have applied rule 3 .substitute eq . into the second right - hand - side term of eq. leads to where we have used the markov property as well as the mutual independence among the subfamilies inherited in rule 5 . finally , by the definition, we have then the first right - hand - side term of eq . becomes where we have let and used eqs . and .the remaining right - hand - side term of eq .terms can be treated in a similar way , and putting them together recovers the second right - hand - side term of eq . .we complete the proof .so far we have proven the existence of the solution of the adjoint equation , while its uniqueness can be deduced by the fredholm alternative .it remains to validate eq . .to this end , we let be a probability measure on the borel sets of with the density , then it yields a unique product measure . consequently , from eq . 
, we obtain \cdot \frac{f(\bm{r } , \bm{k } , 0)}{f_i(\bm{r } , \bm{k } , 0 ) } ~ \nu({\textup{d}}\bm{r}\times { \textup{d}}\bm{k } ) \\ = & \iint_{\mathbb{r}^d \times \mathcal{k } \times \omega } \mu_a ( \omega ) \cdot \frac{f(\bm{r } , \bm{k } , 0)}{f_i(\bm{r } , \bm{k } , 0 ) } ~ \nu\otimes\pi_{0 , \bm{r } , \bm{k}}({\textup{d}}\bm{r } \times { \textup{d}}\bm{k } \times { \textup{d}}\omega ) \\ = & \mathbb{e}_{f_i}\left [ \pi_{t_l , \bm{r}_\alpha , \bm{k}_\alpha } \left ( s_\alpha(0 ) \cdot h(0 ) \cdot \sum_{i\in\mathcal{e}(\omega_\alpha ) } \phi_{i,\alpha } \cdot a(\bm{r}_{i,\alpha},\bm{k}_{i,\alpha } ) \right ) \right ] , \end{split}\ ] ] and thus fully recover eq .( noting that and denote the same set ) .in particular , we set , , , and then it s easy to verify that is the unique solution of eq .using the mass conservation . by theorem [ th : psi ] , the first moment of also equals to 1 , i.e. , and then it further implies due to eqs . and , which is nothing but the mass conservation law .furthermore , for such special case , we can show in theorem [ thm_3 ] that is almost sure for any family history , not just the first moment as shown in eq . .it implies that any estimator using finite number of super - particles is still able to preserve the mass conservation with probability . [ thm_3 ] suppose that the ancestor particle starts at and carries a weight .then we have where the event is given by since only takes odd values , it suffices to take and it is easy to see . in the remaining part of the proof , we will omit for brevity .assume that the statement that is true for , ] is according to theorem 15.1 of chapter vi in , the expectation satisfies the following renewal integral equation we substitute eq . into eq . and then can easily verify that is the solution .the proof is finished .considering the fact that randomly generated may be rejected in numerical application according to the indicator function in eq ., we can modify eq . by replacing with , with the average acceptance ratio of . in consequence ,the modified expectation of total particle number is .theorem [ th : exp ] also provides an upper bound of when the variable auxiliary function satisfies .whatever the auxiliary function is , it is clear that the particle number will grow exponentially . _ to suppress the particle number , we can either decrease the parameter or choose a smaller final time . finally , suppose we would like to evolve the branching particle system until the final time .usually , there are two ways .one is to evolve the system until each particle is frozen at the final time in a single step .the other is to divide into with , and then we evolve the system successively in steps .however , the following theorem tells us that both produce the same .[ th : g(t ) ] suppose .then is a markov branching process .in addition , is a galton - watson process .we recall that for a galton - watson model , if , then . from eq . , we know that , so that .therefore , we can not expect to reduce the particle number by simply dividing into several steps and evolve the particle system successively , which also manifests the indispensability of resampling . as illustrated in section [ sec : importance_sample ] , it suffices to set the initial and final time to be and , respectively .thus from the integrability of in eq . and the strong law of large number in eq ., we can use the following estimator to calculate eq . . with ancestor particles drawn from , which converges almost surely when . 
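the exponential growth of the expected particle number can be reproduced with a bare-bones pure-branching simulation. in the simplest setting described by the rules above, each branching replaces one particle by three after an Exp(gamma0) life-length, so the expected population at time T is e^{2*gamma0*T}; the exact rate quoted in the text (lost in extraction) generalizes this to several potential pairs. a minimal sketch with illustrative parameters:

```python
import numpy as np

rng = np.random.default_rng(3)
gamma0, T, runs = 0.6, 2.0, 400

def population_at(T):
    # Each stack entry is the birth time of a live particle; every branching
    # replaces one particle by three (net gain of two).
    clock, n = [0.0], 1
    while clock:
        t_birth = clock.pop()
        t_death = t_birth + rng.exponential(1.0 / gamma0)
        if t_death < T:
            n += 2
            clock.extend([t_death] * 3)
    return n

mean_n = np.mean([population_at(T) for _ in range(runs)])
print(mean_n, np.exp(2 * gamma0 * T))   # empirical mean vs. e^{2*gamma0*T}
```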
according to eq ., we would like to point out three important features below .( 1 ) : : it is unnecessary to know the normalizing factor in eq . , since it has been absorbed in the sign function in eq . .( 2 ) : : it is unnecessary to take multiple replicas of branching particle system starting from the same ancestor because we only need to evaluate the expectation of with respect to the product measure . ( 3 ) : : theorem [ thm_3 ] ensures the mass conservation property. it must be mentioned that the conserved quantity is the summation of particle sign function , instead of total particle number .in fact , the total particle number may increase in order to capture the negative values of wigner function .unfortunately , theorem [ th : exp ] has presented an unpleasant property of such estimator , namely , the exponentially increasing complexity . in thisregard , a resampling procedure , which is based on the statistical properties of the wigner function and density estimation method , must be introduced to save the efficiency , say , to reduce the particle number from to .the first step is to use the non - parameter density estimation method ( the histogram ) to evaluate through the branched particles on a given suitable partition of the phase space .the instrumental density can be simply estimated by eq . .the successive step is to draw new samples according to the resulting piecewise constant density .the main problem is how to determine the phase space partition .the simplest way , as suggested in , is using the uniformly distributed cells in phase space : , then is estimated by a piecewise constant function with .then the number of particles allocated in each cell is determined by and the sign by .the position and wave vector are assumed to be uniformly distributed in each cell .this approach , usually termed _ annihilation _ in previous work , e.g. , can reduce the particle number effectively for and still works fairly for ( as shown in section [ sec : num_res ] ) .unfortunately , it can not work for higher dimensional systems because of the following problems , as also manifested in the statistical community .( 1 ) : : in high dimensional situations , the dimension of the feature space ( phase space cells ) is too much higher than the sample number , leading to a non - sparse structure and severe over - fitting .( 2 ) : : the uniform distributed hypercube in high - dimensional space is not very useful to characterize the edges of samples . ( 3 ) : : the piecewise constant function is discontinuous in nature , so that sampling from a locally uniform distribution may cause additional bias . to resolve these problems , one can utilize many advanced techniques in the statistical learning and density estimation .the key point is to choose an appropriate ( or feature in statistical terminology ) of the partition . 
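the uniform-cell annihilation described above amounts to: histogram the signed particles on a phase-space grid, keep only the net signed count in each cell, and redraw that many particles uniformly inside the cell. a minimal 2d (x, k) sketch; the helper name `annihilate`, the grid bounds and the heavily cancelling test cloud are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)

def annihilate(xs, ks, signs, xedges, kedges):
    """Cell-based resampling: particles of opposite sign in the same cell
    cancel; the survivors are redrawn uniformly within their cell."""
    net, _, _ = np.histogram2d(xs, ks, bins=[xedges, kedges], weights=signs)
    new_x, new_k, new_s = [], [], []
    for i in range(net.shape[0]):
        for j in range(net.shape[1]):
            m = int(round(abs(net[i, j])))
            if m == 0:
                continue
            new_x.append(rng.uniform(xedges[i], xedges[i+1], m))
            new_k.append(rng.uniform(kedges[j], kedges[j+1], m))
            new_s.append(np.full(m, np.sign(net[i, j])))
    return map(np.concatenate, (new_x, new_k, new_s))

# Example: a cloud of +/- particles with heavy cancellation.
xs, ks = rng.normal(size=200_000), rng.normal(size=200_000)
signs = rng.choice([-1.0, 1.0], size=200_000)
edges = np.linspace(-6, 6, 97)
x2, k2, s2 = annihilate(xs, ks, signs, edges, edges)
# The particle count drops sharply, while the summed sign is preserved
# (up to the negligible fraction of particles outside the grid).
print(len(xs), "->", len(x2), " net sign:", signs.sum(), s2.sum())
```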
in principle, must be chosen to strike the balance between accuracy and efficiency .too small is unable to capture the fine structure of the wigner function , whereas too large may increase the complexity and overfit .for instance , a possible approach is to resort to tree - based methods to partition the phase space .considering that all the statistical techniques are devised for estimating a positive semidefinite density , instead of the quasi - distribution , we need to separate the positive and negative signed particles into two groups and make individual histograms , then merge them into a piecewise constant function .such pattern is based on the decomposition of the signed measure . in a word , the story in the higher dimensional phase space is totally different because how to efficiently implement the so - called annihilation exploiting the cancelation of weights of opposite sign is still in progress .the resampling in high dimensional phase space is a complicated issue and beyond the scope of this paper , so we would like to discuss it in our future work . in section [sec : num_res ] , we mainly focus on several typical tests for and and show the accuracy of wbrw as well as of the piecewise constant approximation by comparing with two accurate deterministic solvers , i.e. , sem and asm . in summary , the outline of wbrw is illustrated below from to with the time step , .it suffices to take as the initial state , and for the offsprings in eq . .step 1 : sample from : : the first step is to sample ancestor particles according to the instrumental distribution ( see eq . ) .each particle has a state and carries an initial weight and a sign . in general, we can simply take .+ for the gaussian barrier with and nm is utilized in the -truncated wigner simulations.,scaledwidth=49.0%,scaledwidth=38.0% ] step 2 : evolve the particles : : the second step is to evolve super - particles according to the rules of branching particle systems .suppose a particle is born at ] into subintervals with the partition being the annihilations occur exactly at the time instant for , at which the particle number decreases significantly from to , where ( resp . ) represents the particle number before ( resp .after ) the annihilation . for convenience ,we denote the particle number at and by and , respectively . in each time period ] with , we record the starting particle number , the ending particle number , and the related growth rate in eq . .let table [ tab : eg1 - 1 ] gives numerical values of above three quantities , see the fifth , sixth and seventh columns . according to table [ tab : eg1 - 1 ], we can figure out the following facts on the efficiency .ccccccccc & & & & & & & & time + + & 2.6210e-01 & 7.1421e-02 & 7.6208e-02 & 48.63 & 3.95 & 12.45 & & 11.33 + & 2.5998e-01 & 6.7069e-02 & 8.4044e-02 & 49.58 & 3.94 & 12.71 & 13.04 & 12.48 + & 1.3034e-01 & 3.1041e-02 & 5.5122e-02 & 479.96 & 2.87 & 167.36 & 170.12 & 113.62+ + & 3.3899e-01 & 1.0171e-01 & 1.9413e-01 & 3.22 & 2.50 & 1.29 & & 25.33 + & 3.4201e-01 & 1.0674e-01 & 1.9785e-01 & 3.23 & 2.50 & 1.29 & 1.29 & 25.87 + & 3.3697e-01 & 1.0959e-01 & 1.9413e-01 & 4.05 & 2.43 & 1.67 & 1.67 & 28.98 + & 3.4011e-01 & 1.1157e-01 & 1.9734e-01 & 5.18 & 2.40 & 2.16 & 2.16 & 29.80 + ( 1 ) : : agreement between the mean value and the theoretical prediction is readily seen in all situations . 
in this case , the average acceptance ratio almost equals to one due to the localized structure of the wigner kernel .actually , the growth rates in the first five annihilation periods , e.g. for and , are 5.92 , 5.87 , 5.88 , 5.89 , and 5.88 , all of which are almost identical to the mean value of 5.89 .when , the former is a little less than the latter , because the particles moving outside the computational domain are not taken into account . within each annihilation period , the maximum travel distance of particles can be calculated by implying that , the larger value is , the more particles move outside the domain .this explains the slight deviation between and increases from almost zero to at most 1.33 as increases from to .moreover , when , the increasing multiples for are identical to those for , i.e. , is independent of , which has been also already predicted by the theoretical analysis .more details about the agreement of the growth rates of particle number with the theoretical prediction for and can be found in fig .[ fig : ex1-np ] .( 2 ) : : not like using the constant auxiliary function , we do not have a simple calculation formula so far for the growth rate of particle number when using the variable auxiliary function ( i.e. , depending both on time and trajectories ) .however , we can still utilize the growth rate for the case of to provide a close upper bound for the case of . as shown in the seventh column of table [ tab : eg1 - 1 ] , the variation of the mean growth rate between them is about 0 , 0.03 , 0.12 and 0.74 for 0.1fs , 1fs , 2fs and 4fs , respectively .[ fig : ex1-np ] further compares the curves of growth rate for and within four typical annihilation periods . by comparing with the wigner functions shown in fig .[ fig : ex1-wf ] , we find that the closer to the center the gaussian wavepacket lives , the larger the deviation between the curves for the constant and variable auxiliary functions becomes .such deviation in accordance with the analysis of shown in fig .[ fig : xi ] validates the proposed mathematical theory again .( 3 ) : : during the resampling ( annihilation ) procedure , the main objective is to reconstruct the wigner distribution using less particles , which explores the cancelation of the weights with opposite signs , see eq . . the sixth column of table [ tab : eg1 - 1 ] tells us that the maximum particle numbers after resampling for are all around 3.00 million , implying that there should be a minimal requirement of particle number to achieve a comparable accuracyotherwise , the accuracy will decrease , for example , the values of for are around 2.55 million .[ fig : ex1-npaa ] shows more clearly the typical history of .we can find there that , no matter how huge the particle number before the annihilation ( which depends on both and ) is , the particle number after the annihilation for the simulations with comparable accuracy exhibits almost the same behavior , which recovers and extends the so - called bottom line " structure described in .such behavior may depend only on the oscillating structure of the wigner function . on the other hand , highly frequent annihilations like this bottom line structure and thus the accuracy , see fig .[ fig : ex1-npaa - b ] , implying that there are no enough particles to capture the oscillating nature . 
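the time-marching structure used in these experiments (sample ancestors, evolve over one annihilation period, resample, repeat) fits a short driver. the sketch below is control flow only: `sample_initial`, `evolve`, `annihilate` and `estimate` are stand-ins for the routines sketched earlier, not a complete solver.

```python
def wbrw_driver(T_final, T_A, N0, sample_initial, evolve, annihilate, estimate):
    """Time-marching branching random walk with resampling every T_A."""
    particles = sample_initial(N0)                   # ancestors drawn from |f0| / H
    t = 0.0
    while t < T_final:
        t_next = min(t + T_A, T_final)
        particles = evolve(particles, t, t_next)     # free flight + branching
        particles = annihilate(particles)            # cell-based resampling
        t = t_next
    return estimate(particles)                       # weighted average of A(x, k)
```

shorter annihilation periods keep the in-flight population smaller at the price of more frequent histogramming, which is exactly the trade-off discussed in the tables above.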
* experiment 2 * in this example , we increase the barrier height to so that the gaussian wavepacket will be totally reflected , see fig .[ fig : ex2-wf ] .such augment of the barrier height implies that the growth rate of particle number now is about times larger than that for experiment 1 , and thus it is more difficult to simulate accurately .based on the observations in experiment 1 , we only test two groups of annihilation periods , . table [ tab : eg2 - 1 ] summarizes the running data and confirms again that , the larger constant auxiliary function improves the accuracy , whereas the higher annihilation frequency destroys the accuracy . in order to get a more clear picture on this accuracy issue , we plot both spatial and momental probability distributions at different time instants in fig . [ fig : ex2-sm ] against the reference solutions by sem .we can easily see there that , the loss of accuracy when using is mainly due to that there are no enough generated particles to capture the peaks reflecting off the barrier ; while the increase of accuracy when using a larger constant auxiliary function , e.g. , , comes from the smaller variation .actually , similar phenomena also occur in experiment 1 . as a typical two - body system ,the two - body helium - like system has been considered in testing deterministic wigner solvers in 4d phase space . herewe utilize again a helium - like system in which the electron - nucleus and electron - electron interaction is given by where the parameter expresses the screening strength , denotes the position of the nucleus , and is the position of the -th electron .in fact , is the green s function of the 1d screened poisson equation .the wigner kernel of the electron - nucleus interaction reads and that of the electron - electron interaction therefore we can still use a simple rejection method to draw samples from the target distribution . herewe use the atomic unit , set and and adopt the same initial data as used in .the computational domain ^ 2\times [ -4 , 4]^2 ] .+ + to monitor the accuracy , we record the relative errors of the reduced single - body wigner function as given in , and of corresponding marginal probability distributions .[ fig : ex3-error ] shows the history of those relative errors .we can see there that , although the reduced wigner function is comparatively less accurate , it can still yield a more accurate estimation of macroscopically measurable quantities , such as the spatial and momental marginal probability distributions .this also explains why we see more noise in fig .[ fig : ex3-wf ] for the reduced wigner function than in fig .[ fig : ex3-sm ] for the marginal distributions .the possible reason may lie on the fact that if we wish to be able to estimate a function with the same accuracy as a function in low dimensions , then we need the size of samples to grow exponentially as well .however , it can be readily observed in figs .[ fig : ex3-wf ] and [ fig : ex3-sm ] that the main features captured by wbrw are almost identical to those by asm .finally , we would like to mention that the growth of particle number is closely related to the number of cells ( dimensionality of feature space ) . in this example, we use a uniformly distributed cells for the resampling and set the initial particle number to be about with the total weighted summation being . 
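the "simple rejection method" mentioned above for drawing wavevectors is plain acceptance-rejection sampling. a minimal sketch; the lorentzian-type target and the screening parameter `kappa` below are illustrative stand-ins for the (normalized positive part of the) kernel of the screened interaction, whose exact form is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(5)

def rejection_sample(target, n, k_lo, k_hi, bound):
    """Draw n samples on [k_lo, k_hi] from an unnormalized density `target`
    bounded above by `bound`, via plain acceptance-rejection."""
    out = []
    while len(out) < n:
        k = rng.uniform(k_lo, k_hi, size=2 * n)
        u = rng.uniform(0.0, bound, size=2 * n)
        out.extend(k[u < target(k)])
    return np.array(out[:n])

kappa = 0.8                                   # illustrative screening strength
target = lambda k: 1.0 / (k**2 + kappa**2)    # Lorentzian-type stand-in density
samples = rejection_sample(target, 50_000, -4.0, 4.0, bound=1.0 / kappa**2)
print(samples.mean(), samples.std())          # symmetric target: mean close to 0
```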
itis shown in fig .[ fig : ex3-pn ] that the particle number increases soon to , which is comparable to the cell number , and then approaches a stable value around .so if we refine those cells for the resampling , then the particle number will increase to a higher level .actually , for higher - dimensional problems like , the number of cells is much higher than that of samples and such a simple cell based resampling strategy can not achieve an efficient annihilation .hence we have to resort to other advanced techniques to control the sample size in higher - dimensional phase space .this paper is devoted to the mathematical foundation of the branching random walk algorithm for the many - body wigner quantum dynamics .although several concepts , such as the signed particle , the adjoint equation and the annihilation procedure , have already been mentioned in previous work , unfortunately related mathematical results are somewhat fragmented or lack of systemic elaboration , and the crucial issues , such as the annihilation of particles and the computational complexity , fall outside the scope of any current available theory .thus , our original motivation is to provide a framework from the viewpoint of computational mathematics within which all these problems can be fully addressed , and interested readers may get a complete view of the wigner branching random walk algorithm accompanied with both derivation and implementation details in a single reference . only by this way can we analyze its accuracy , point out the numerical challenge and make further improvements .in fact , we have shown that the signed particle is naturally introduced according to the principle of importance sampling , the motion of particles is described by a probabilistic model , and the annihilation is nothing but the resampling from the instrumental distribution .although the theoretical part of this work is closely related to that shown recently in , we adopt a different approach to interpreting the entire story . actually , both approaches succeed in validating the basis of the spwmc , namely , eq . .the reason we prefer to the branching random walk model , a mixture of the branching process and the random walk , is that the theory of branching process not only provides a natural interpretation of growth of particles , but also allows us to calculate the particle growth rate exactly and discuss the conservation property .these results are extremely important in real simulations since it gives us a reasonable criterion to control the computational complexity and allocate computational resources efficiently .we must admit that the numerical challenges in higher dimensional phase space are very potent , though the numerical accuracy in a 4d helium - like system has also been validated in this work .the often - used simple cell based resampling technique can not work even for 6d problems .therefore it is urgent for us to seek an efficient way to reduce the sample size , and some advanced statistical density estimation methods might be taken into account .this research was supported by grants from the national natural science foundation of china ( nos . 11471025 , 91330110 , 11421101 ) .the authors are grateful to the useful discussions with zhenzhu chen , paul ellinghaus , mihail nedjalkov and jean michel sellier on the signed particle monte carlo method for the -truncated wigner equation .
|
a branching random walk algorithm for the many - body wigner equation and its numerical applications to quantum dynamics in phase space are proposed and analyzed . after introducing an auxiliary function , the ( truncated ) wigner equation is cast into an integral formulation as well as its adjoint correspondence , both of which can be reformulated into renewal - type equations and have a transparent probabilistic interpretation . we prove that the first moment of a branching random walk is exactly the solution of the adjoint equation . more importantly , we show that such a stochastic model , combined with importance sampling and resampling , paves the way for a numerically tractable scheme , within which the wigner quantum dynamics is simulated in a time - marching manner and the complexity can be controlled with the help of an ( exact ) estimator of the growth rate of the particle number . typical numerical experiments on gaussian barrier scattering and a helium - like system validate our theoretical findings , and demonstrate the accuracy , the efficiency and thus the computability of the wigner branching random walk algorithm . * ams subject classifications : * 60j85 ; 81s30 ; 45k05 ; 65m75 ; 82c10 ; 81v70 ; 81q05 * keywords : * wigner equation ; branching random walk ; quantum dynamics ; adjoint equation ; renewal - type equations ; importance sampling ; resampling ; signed particle monte carlo method
|
to understand and analyze well the complex structure as well as the evolutionary process of genes , researchers have long been searching for syntactical models .one such model was a grammatical model provided by formal language theory .yet , the grammar types in the chomsky hierarchy was inadequate in describing the biological systems . in his pioneering work , tom head has proposed an operation called ` splicing ' for describing the recombinant behavior of double - stranded dna molecules which established a new relationship between formal language theory and the study of informational macromolecules . splicing operation is a formal model of the recombinant behavior of dna molecules under the influence of restriction enzymes and ligases .informally , splicing two strings means to cut them at points specified by the given substrings ( corresponding to patterns recognized by restriction enzymes ) and to concatenate the obtained fragments crosswise ( this corresponds to the ligation reaction ) .since then , the theory of splicing has become an interesting area of formal language theory , where results of splicing systems on string languages ( splicing systems were later renamed as h - systems to indicate the originator ) gave new insights in some ( closure ) properties of families of string languages .the mathematical study of the splicing operation on the strings has been investigated exhaustively , which lead to a language generating device viz ., extended h - systems ( eh - systems ) using the ` splicing operation ' as the basic ingredient .several control mechanisms were suggested in increasing the computing power of eh systems with finite components , equivalent to the power of turing machines .thus , splicing operation on strings has lead to universal computing device ( programmable dna computers based on splicing ) .a splicing operation contains splicing rules of the form , where are strings over some alphabet .we apply the splicing rule to two strings , , ( are strings over ) . as a result, the new strings and are obtained .we use the modified definition of splicing as it appears in dna sequences are three dimensional objects in a three - dimensional space .some problems arise when they are described by one - dimensional strings .so , the other models of splicing were explored . in ,array splicing systems were studied . in , graph splicing systems were discussed .but these systems can not be applied to the graphs that can not be interpreted as linear or circular graphs .hence , we take a different approach to splicing two graphs and introduce a splicing system for graphs which can be applied to all graphs . 
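before moving to graphs, the string splicing operation itself is easy to state in code: for a rule (u1 , u2 ; u3 , u4), cut x around an occurrence of u1 u2, cut y around an occurrence of u3 u4, and recombine crosswise. the sketch below uses the textbook form of the operation (the "modified definition" referred to above may keep only one of the two products); the example strings are arbitrary.

```python
def splice(x, y, rule):
    """All pairs (x1 u1 u4 y2, y1 u3 u2 x2) obtained by splicing x = x1 u1 u2 x2
    and y = y1 u3 u4 y2 with the rule (u1, u2, u3, u4)."""
    u1, u2, u3, u4 = rule
    products = []
    for i in range(len(x) + 1):
        if not (x[:i].endswith(u1) and x[i:].startswith(u2)):
            continue
        for j in range(len(y) + 1):
            if y[:j].endswith(u3) and y[j:].startswith(u4):
                products.append((x[:i] + y[j:], y[:j] + x[i:]))
    return products

print(splice("aacgtt", "ggcgaa", ("a", "cg", "g", "cg")))
# x = aa|cgtt and y = gg|cgaa recombine to [('aacgaa', 'ggcgtt')]
```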
splicing two graphscan be thought of as a new operation among the graphs , that generates new graphs from the given two graphs .+ hence , in this article , the following section discusses the cutting rules , which is the basic component for the proposed graph splicing system .section 3 deals with the graph splicing system with illustrations .the section 4 studies some graph theoretical properties of this system .the last section concludes with the directions for the future research in this graph splicing system .we follow the terminologies and the basic notions of graph theory as in and the terminologies of formal language theory as in .+ for any finite alphabet , a labeled graph over is a triple where is the finite set of vertices(or nodes ) , e is finite set of edges of the form , , where each edge is an unordered pair of vertices and is a function from to .an edge means that one end - point of the edge is the vertex and the other end - point is the vertex .edge set of is written as and the vertex set of g by .the number of vertices of a graph is called the order of the graph and the number of edges of the graph is called the size of the graph .we consider only simple graphs where repeated edges ( multiple edges ) with same end - points and edges with both end - points same ( loops ) are not allowed .the graph refers to an unlabeled graph .we denote an unlabeled graph just as , instead of .whenever a graph is considered , we mean only a simple unlabeled graph .we mention accordingly , when we consider the graphs other than the above one .a graph is said to be in pseudo - linear form ( plf ) if the ordered vertices are positioned as per the order , as if they lie along a line and the edges of the graph drawn accordingly .ordering of the vertices can be done in any way . for a particular ordering ,the adjacency matrix of and the adjacency matrix of in plf , remain the same . in a graph, the vertices could be positioned at any place and the edges of the graph drawn accordingly . for the graph in plf ,vertices are first ordered and positioned as if they lie on a line .this line may be a horizontal line or a vertical line or any inclined line . in case ,the line is horizontal , we can position the ordered vertices either from left to right or from right to left .so , with out loosing any generality , we position the vertices from left to right as if the vertices lie on a horizontal line .once a graph is in plf , we name the vertices with a positive integer that represent their order in the ordering .if a vertex is second in an ordering , we name that vertex as .so , the vertex set of in plf is . given an ordering of the vertices , any graph can be redrawn in the pl form .for example , if the vertices of the graph + + are ordered as , the corresponding graph in plf is + + a graph in plf will look like a path graph with edges going above or below the linear path .the graph with vertices written horizontally , is a graph in plf . from nowonwards , unless otherwise mentioned , we mean a graph as the one in plf for some ordering of the elements of .a cutting rule for a graph in plf is a pair ] , we use the square braces and for the edges ( i , j ) , we use the parenthesis ] where and are positive integers . by the condition , we mean that the the vertices and may be successive vertices ( ordered successively ) or both vertices and are the same . 
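putting a graph into pseudo - linear form is just a relabeling of the vertices by their position in a chosen ordering, so the adjacency relation (and hence the adjacency matrix for that ordering) is unchanged. a minimal sketch; `to_plf` is a hypothetical helper and the edge list is an arbitrary small example, since the original example figures were lost in extraction:

```python
def to_plf(edges, ordering):
    """Relabel vertices by their 1-based position in `ordering`; the returned
    edge list is the same graph drawn in pseudo-linear form."""
    pos = {v: i + 1 for i, v in enumerate(ordering)}
    return sorted((min(pos[u], pos[v]), max(pos[u], pos[v])) for u, v in edges)

edges = [("a", "b"), ("b", "c"), ("c", "d"), ("a", "d"), ("a", "c")]
print(to_plf(edges, ["a", "b", "c", "d"]))
# [(1, 2), (1, 3), (1, 4), (2, 3), (3, 4)]  -- same graph, vertices renamed 1..4
```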
a cutting rule ] cuts a graph between the vertex and the vertex ( if for some reasons , the vertices are named with symbols other than the positive integers , the cutting rule cuts between the vertex that comes in the position in the ordering and with the vertex in the position ) .the work of the cutting rule ] cuts the following edges ( if they exist in the graph ) . 1 .the edge 2 . the edges 3 . the edges 4 .the edges the reflexive cutting rule cuts the vertex and all the edges that go above as well as below the vertex .i.e. , the reflexive cutting rule cuts the following . 1 .the vertex 2 .the edges when an edge is cut into two parts , we call the the two parts of the edge as hanging - edges or free - edges .similarly , when a vertex is cut , we call that vertex as a hanging - vertex or a free - vertex .if an edge is cut , we write the left part of the edge as ] and are drawn as illustrated with a at their free ends .if a vertex is cut , the left part of the vertex is written as ] indicates just that the vertex is cut .+ the set represents the set of all edges of that got cut by the cutting rule and the ( the cardinality of the set ) is the power of the cutting rule with respect to the graph .power of a cutting rule with respect to indicates the number of edges that got cut by that cutting rule in .the set represents the set of all vertices that got cut by the vertex . only for the reflexive cutting rules ,the set will exist and for all the other cutting rules , this set is .since any reflexive cutting rule can cut only one vertex , the set is always singleton . for a reflexive cutting rule ,the set can be ( means that no edge is going above or below the vertex in the graph ) .+ when a graph is cut into two by a cutting rule , the left part of the graph is called as and the right part is called as . obviously , ) = ecut_{g}([i , i ] \cup ecut_{g}([j , j ] \cup \{(i , j)\}. ] .+ + + the and are also graphs with the vertex set and with the edge set ,(1,4],(1,5],(2,3],(2,4],(2,5]\}.\ ] ] similarly , and . ) = \{(1,3),(1,4),(1,5),(2,3), ] .a splicing rule , is a pair of cutting rules . given two graphs , and a splicing rule , the first graph is cut as specified by and the second graph is cut as specified by . as a result we get the four cut - graphs viz . ,,, and . + + * mode of recombination * + ( or ) recombines with the ( or ) if and only if = and = .in other words , for a recombination , the number of hanging - edges in ( or ) should be the same as that of the number of hanging - edges in (or ) and the the number of hanging - vertices in ( or ) should be the same as that of the number of hanging - vertices in (or ) .the above definition tells that for a splicing process to end up in a recombination , the power of both the cutting rules present in the splicing rule should be the same . we assign a positive integer , called the power of the splicing rule , to every splicing rule if and only if the powers of and are the same and the common value is the power of the splicing rule .further , if one cutting rule in is reflexive , the other should also be reflexive .every hanging - edge of the ( or ) recombines ( or joins ) with only one hanging - edge of the ( or ) , and every hanging - edge of ( or ) has the recombination with only one hanging - edge of ( or ) .the hanging - vertex ( if available ) of the ( or ) recombines with the hanging - vertex of (or ) .thus , recombines with to generate new graphs . 
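the case list for the edges cut by a cutting rule was garbled in extraction, so the sketch below is a reconstruction for successive rules only, under the reading suggested by the example above: the rule [i, i+1] cuts exactly the edges with one end among the first i vertices and the other end among the rest, and leaves the two cut - graphs on either side. `cut` is a hypothetical helper.

```python
def cut(n, edges, i):
    """Apply the successive cutting rule [i, i+1] to a graph in PLF with
    vertices 1..n; edges are stored as pairs (a, b) with a < b."""
    hanging = sorted((a, b) for a, b in edges if a <= i < b)   # edges crossing the cut
    left    = [(a, b) for a, b in edges if b <= i]             # edges of the Lcut graph
    right   = [(a, b) for a, b in edges if a > i]              # edges of the Rcut graph
    return left, right, hanging

edges = [(1, 2), (1, 3), (2, 3), (2, 4), (3, 4)]
L, R, H = cut(4, edges, 2)
print("Lcut edges:", L, " Rcut edges:", R, " ecut:", H, " power:", len(H))
```

under this reading, the power of the rule is simply the number of crossing edges, which matches the counting arguments given below for the general case.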
after the recombination , we order the vertices of the new graph ( this will be in pl form ) in the same sequence as it appears and name them accordingly .new graphs are generated because of the recombination of the edges that are cut .if there are more than one hanging - edges in both and , the hanging - edges of the can recombine with the hanging - edges of in more than one way .if there are hanging - edges in both and , the hanging - edges can recombine in ways , generating new graphs .in other words , the number of such recombinations is just the number of bijective mappings from the set to the set . when the recombines with the , the same number of will be generated .thus , the splicing of two graphs and using a splicing rule of order , generates new graphs .+ thus , splicing process comprises of cutting as well as the recombination .if the splicing of and using generates a new graph by the recombination of the with the , we denote that by ( indicating that is the first splicing product ) .similarly , indicates that is generated by the recombination of the with the ( indicating that this f is the second product of splicing ) .just indicates that may be either the first splicing product or the second splicing product .the splicing scheme(process ) is denoted by . for a splicing process ,one requires two graphs and a splicing rule .the set of all graphs generated by splicing and using the splicing rule is denoted by .similarly , , are meant accordingly .the graph splicing system , where 1 . a finite set of simple , unlabeled graphs , called the set of axioms .a finite set of splicing rules .the underlying splicing scheme is , .+ the set of all graphs generated by splicing all pairs of the graphs of with all splicing rules of ( the graph language of the splicing system ) , in the dna recombination , when some restriction enzymes and a ligase are present in a test tube , they do not stop after one cut and paste operation , but they act iteratively .the products of a splicing again take part in the splicing process . for an iterative splicing among the graphs, the axiom set should contain many copies of the same element .ordinary sets are composed of pairwise different elements , i.e. , no two elements are the same .if we relax this condition , i.e. , if we allow multiple but finite occurrences of any element , we get a generalization of the notion of a set which is called a _ multiset_.we assume that our axiom set is a multiset .that means infinitely many copies of the elements of the axiom set will be present in the set , which facilitates the elements to take part in the splicing process iteratively .even the product of a splicing process will also be available infinite number of times . to make a graph splicing system into an iterated graph splicing system ,the only requirement is to make the axiom set into a multiset such that infinitely many copies of the elements of are in .the graph language of an iterative graph splicing system , where is a multiset such that infinitely many copies of the elements of are in , is defined as where consider the graph splicing system ,[2,3])\}) ] .the power of the splicing rule is 2 . 
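the count of recombinations (n! for n hanging edges on each side) is just the number of bijections between the two sets of hanging edges, which itertools makes explicit. a minimal sketch; the bracketed half - edge labels are schematic stand-ins for the left and right parts of cut edges:

```python
from itertools import permutations

def recombinations(left_hanging, right_hanging):
    """All ways to join the hanging edges of one cut-graph with those of another:
    one joined pair per hanging edge, i.e. n! matchings for n hanging edges."""
    assert len(left_hanging) == len(right_hanging)
    return [list(zip(left_hanging, perm)) for perm in permutations(right_hanging)]

lh = ["(1,3]", "(2,3]", "(2,4]"]        # left halves of three cut edges
rh = ["[3,5)", "[3,6)", "[4,6)"]        # right halves of three cut edges
matchings = recombinations(lh, rh)
print(len(matchings))                    # 3! = 6 new graphs per (Lcut, Rcut) pair
```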
in each splicing process, new graphs will be generated .so , will have a total of 16 new graphs .of these , some of the graphs are isomorphic to each other .it is found that the non - isomorphic graphs in are and a graph , where .the above graph is not a simple graph ( it has a multiple edge between the vertices and ) .this makes us to conclude that the splicing of two simple graphs need not be simple .given a graph , the power of the cutting rule ] * case(ii ) : * + we proceed similarly as the case(i ) .the number of edges that come under ( 1),(2 ) and ( 3 ) is number of edges that come under ( 4 ) is hence , the total number of edges cut by ] , we counted the number of edges whose one end is in and the other end is in by deleting some edges from the set which is the set of edges whose left end is in .instead , we can have the set to be the set of edges whose right end is in and proceed in an analogous way , as in the proof of proposition 1 .we get the power of the cutting rule ] and $ ] are termed successive cutting rules ] with power such that \in a}ecut_g ( [ i , i+1 ] ) \neq \phi\ ] ] .let and be any two isomorphic graphs .let = , for any splicing rule .then f is isomorphic to ( or h ) if and only if the order of the graph and the order of ( or the order of ) are the same .if for a graph , there exists only one cutting rule whose power is equal to the size of the graph , then is bipartite .as graphs are better suited for representing complex structures , a model for splicing the graphs , graph splicing system is introduced , which can be applied to all types of graphs .though the graph splicing is introduced as a new operation among the graphs , studying the computational effectiveness of this graph splicing system is an important area to explore .one can introduce various parameters like the number of graphs in the axiom , the number of splicing rules , power of the splicing rule etc . , and finding the minimum value of the parameters for which the graph splicing system is still computationally complete .besides , as a new line of thinking , a nice investigation to bring out the utility of the splicing in graph theory is worth .9 searls , d.b . , 1992 , the linguistics of dna , _ american scientist _ , * 80 * , 579 - 591 .colaldo - vides , j . , 1991, the search for a grammatical theory of gene regulation is formally justified by showing the inadequacy of context - free grammars , _ cabios _ , * 7 * , 321 - 336 ., 1987 , formal language theory and dna : an analysis of the generative capacity of specific recombinant behaviours , _bull.math.biology_ , * 49 * , 737 - 759 .paun , gh . , 1996 , on the splicing operation , _discrete applied mathematics_,*70 * , 57 - 79 .paun , gh . ,rozenberg , g and salomaa , a . , 1996 , computing by splicing , _ theoretical computer science _ , * 168*(2 ) , 321 - 336 .freund , r ., 1995 , splicing systems on graphs , _ ieee conf . on intelligence in neural biological systems , herndon - washington_ , 189 - 195 .culik ii , k . , and harju , t . , 1991 , splicing semigroups of dominoes and dna , _ discrete appl.math_ , * 31 * , 261 - 277 .rama , r. and umaraghavan , splicing array systems , _ intern .j. computer . math . , _ , * 73*,167 - 182 .rama , r. and krishna , s.n ., 1999 , contextual array splicing systems , _ proceedings spire99 criwg99 _ , 168 - 175 .krithivasan , k. , chakravarthy , v.t . and rama , r. 
, 1997 , array splicing systems , lncs 1218 , 346 - 365 . , 2001 , dna computing in vitro and in vivo , _ future generation computer systems _ , * 17 * , 823 - 834 . salomaa , a . , 1997 , computability paradigms based on dna complementarity , _ in v. keranen ( ed . ) , innovation in mathematics , proc . second international mathematics symposium , computational mechanics publications , southampton and boston _ , 15 - 28 . bondy , j.a . and murty , u.s.r . , 1976 , graph theory with applications , _ north - holland , new york _ . hopcroft , j.e . and ullman , j.d . , 1979 , introduction to automata theory , languages and computation , _ addison - wesley , reading , mass . _
|
string splicing was introduced by tom head as an abstract model for dna recombination under the influence of restriction enzymes . the complex chemistry of three - dimensional molecules in three - dimensional space can be modeled using graphs . the graph splicing systems studied so far apply only to a particular class of graphs , namely those that can be interpreted as linear or circular graphs . in this paper we take a different , novel approach to splicing two graphs and introduce a splicing system that can be applied to graphs of all types . splicing two graphs can be viewed as a new operation on graphs that generates many new graphs from the given pair . in addition , some graph - theoretic consequences of splicing are studied .
|
it is already more than 10 years ago that the symbolic manipulation program form was introduced . at the time it was constructed more or less as a successor to schoonschip which , due to the limitations of its language of implementation , became harder and harder to use .in addition the continuous development of methods of computation in many fields of science required a rapid expansion of the possibilities of the symbolic programs .admittedly programs like reduce , maple and mathematica offer a host of possibilities combined with extensive libraries , but the speed of schoonschip was always something that set it apart from the other programs .hence the task was to create a program that would be at least as fast as schoonschip and be fully portable . in particular it should be usable on the fastest computers that are suitable for this kind of work ( eg .vector processors will rarely give great benefits to symbolic manipulation in general . as we will see this is completely different for parallel processing ) .this resulted in version 1 of form . due to more modern algorithms and the benefits of a second generation approachit is actually somewhat faster than schoonschip when the problems remain limited in size and much faster when the expressions become large .compared to the more popular computer algebra systems the difference in speed and use of memory depends largely on the type of the problem .this is due to a completely different philosopy of design and use .typical factors are between 10 and 100 , but there have been rare cases for which the factor was considerably less and cases for which the factor was considerably larger .such a factor translates into a timeshift with respect to when a given calculation can be done in practise with other programs . over the past yearsthe computer industry has given us on average every three years a factor 4 in increase in capabilities .hence for form this translates into a shift of more than 5 years . as a consequencemost of the calculations in quantum field theory for which the available computer resources are a consideration have been using form over the past years .of course the development of form has continued .version 2.0 was introduced in 1991 .it contained a number of improvements that were inspired by the use of the program in a number of actual research projects .since then regular small improvements were made , dictated by more and more complicated projects .after a number of years it became clear that a number of fundamental internal changes would enhance the capabilities very much and hence work on version 3 was started .most of the code of version 2 was replaced , introducing better structured ways of dealing with internal objects .even though the language of implementation is still regular c , the methods used are in many cases those of object oriented programming .it was of course important to do this in such a way that the speed and compactness of the expressions would not suffer from this .this has been achieved .after this revamping the new facilities were added .some of them are completely new in the field of symbolic manipulation .this does not set the claim that such things are impossible with other programs .it just means that the form approach to some things is far more direct and often easier to program , even if some people may not like the language . in this paperthe most important new features are described shortly . 
in most casessome short examples of code are given with the corresponding output .it should be understood that this paper is not a manual . because a number of packages comes with the form distribution ,the examples to illustrate the features come mainly from these packages .in addition there are examples from some incomplete future packages . to improve the understanding of the above ,first a very short description of form is given .there is also a short section on a parallel version of form .it looks like this will be a major new feature that is currently still under development but results have been obtained already with one of the packages . the actual target is to run nearly unmodified form programs ( the mincer package that has been used already was not modified at all ) on a number of processors in such a way that to the user it looks like a much faster sequential machine .this is very useful for among others the development cycle of big programs .the final section discusses the availability of form . in the appendixwe give a short description of the packages with some examples of their use .here a very brief overview of form is given .for a complete introduction one can consult the manual which contains a complete tutorial and a reference section .form is not an interactive system .it runs in batch . in unix termsone would say that form is a filter . for large programsthis is the natural mode anyway .it also allows the user to prepare programs with a familiar editor .the editor used by the author is stedi which has a folding capability that is recognized by form .the sources of this editor are available in the form distribution as well .if everan interactive version will be constructed , it will be based on this editor .the language of form uses strong typing .this means that all objects that will be used in a program will have to be declared .this allows form to use a number of properties of such objcets and this adds to the speed of the program .algebraic objects in form are expressions , symbols , functions , vectors and indices . in additionthere are -variables with a value that is the match of a wildcard variable .let us have a look at an example : .... # max ) max ' ... # enddo .... here we give ` max ` , provided it is at least -100 .at this point we can use this value as a parameter in a do loop .note that the .sort is essential , because if we were to omit it , the do loop would have the bounds 1,-100 because the do loop instruction would be translated before any algebraic manipulation would have taken place ( first the complete module is translated , then execution takes place ) .it is also possible to manipulate the contents of the -variables are kept in memory .this means that the user should not try to see them as a substitute for regular expressions and have some of them grow to a large size .it would work , but performance might suffer when the physical memory is exhausted in the same way that mathematica and maple become rather disagreeable when they need more memory than is available . 
when there are several expressions active at the same time , sometimes there is the need to temporarily put some of them ` out of the way ' .form has several mechanisms for this .the skip statement would just skip the indicated expression(s ) by copying them directly from the input to the output of the module .this is not always efficient .in addition one has to specify this in each module seperately .a more basic solution is to copy the expression(s ) to a special location where they remain suspended until called back .this is done with the hide and unhide statements .the hidden expressions can still be used in the right hand side of expressions in the same way as fully active expressions can be used .this is a faster mechanism than the one that is used with stored expressions . .....sort hide f1,f2 ; local f4 = f1+f3 ; .sort more modules .sort unhide f1,f2 ; i d x = y+1 ; .... here f1 and f2 are not operated upon for a few modules .then they are called back and they are immediately active again .this means that the substitution of x also holds for the terms of f1 and f2 .there is another mechanism when one or a few statements should be applied only to a limited number of expressions .this concerns a new option of the if statement : .... if ( expression(g ) = = 0 ) ; statements endif ; .... in this case the statements are applied to the terms of all active expressions except for the terms of the expression g. this is a preprocessor controled instruction that can write either to regular output , to the log file or to any named file .it has a format string as in the printf function of the c language .it specifies what should be printed .there are some examples in which complete procedure files are constructed this way for later use in the same program .the new feature of the print statement is a mode like printf in the c language that can print information during execution .it can handle the printing of messages and even individual terms or min1 = 1000 ; min1 ) min1 ; i d x1 = x+1 ; i d x = x2 - 2 ; sort ; if ( count(x2,1 ) < min2 = count_(x2,1 ) ; sort ; multiply ( [ x+2]/x2)^ -variables could cause problems because their eventual contents depend on the order in which terms passed .if each processor keeps its own version of a max = -100 ; if ( count(x,1 ) > max = count_(x,1 ) ; .... which tries to find the maximum power of x. the various processors may have a different value in the end .hence form should not parallelize modules in which -variable is either a sum , a maximum , a minimum or completely unimportant .in such cases form knows what to do with the results and the module can be run in parallel after all .it should be realized that this is all very new .hence it could well be that many new facilities will be developed here and that there may be more ways in which future versions of form can decide about parallelization .any statement that concerns the parallelization will be completely ignored by a sequential version of form .this means that when a program is being prepared one could or should include already some of the indications for the parallel version .this way the upgrading to a parallel machine would be rather smooth .whereas version 2 of form has been commercial , it was judged that the scientific community as well as the reputation of the author would benefit more from a free distribution .hence , starting version 3 , form will be freely available again . 
for the momentthis will involve a number of binary executables of the program .amoung these will be at least executables for linux on pc architectures and for alpha processors running unix .more binaries will become available in the near future. the manual will be available both as a postscript file and as a .pdf file .this will allow the average user to obtain the system and print out its manual as well as view it on the computerscreen .of course it is hoped that this development will not only improve the visibility and popularity of form , but will also lead to more possibilities to improve form itself .the potential for this depends of course on people in charge of jobs knowing about the importance of form in the science community .hence it is expected that people who use form for scientific publications will refer to the current paper . this way the use of form can be ` measured ' .the author hopes that the free distribution in combination with a proven popularity will lead to more people being involved in the development of form itself .there is still a whole list of potential improvements waiting for implementation .undoubtedly many users will have their own suggestions .also more packages would be appreciated .the form distribution contains : * executables for a variety of computers . * the conv2to3 program with its sourcesthis program converts old form code into new form code .* a postscript and a pdf version of the manual and a tutorial by andr heck . * a number of packages .currently these are ` summer ' , ` harmpol ' , ` meltran ' , ` color ' and ` mincer ' .* the sources of the stedi editor .this editor works on bsd 4.0 systems and systems that still have an old compatibility mode .it also works under linux .there is a manual with it .originally it was an editor for the atari st computers , made to work as well under ms - dos .then it was ported to bsd unix where it did run inside terminal windows . under linux and x- windows it has recovered most of its old ease of use . * the sources of the minos database program for organizing large calculations .there is a small manual with it .the program should work on nearly any system .the database files should be system independent . * the axodraw system for including simple drawings and figures inside latexfiles .this includes the manual .the files can be obtained from the www pages in http://www.nikhef.nl/ .disclaimer : the files are provided without guarantee whatsoever . if some commands do not work as mentioned in the manual or as the user thinks they should work , the user should submit a bug report .this may or may not be reacted to depending on circumstances .acknowledgements : the author is thankful for many comments during the development of form .many people contributed in the form of bug reports and useful suggestions .some people however stand out .denny fliegner and albert rtey for developing the parallel version of form ( with support of dfg contract ku 502/8 - 1 ( _ dfg - forschergruppe ` quantenfeldtheorie , computeralgebra und monte carlo simulationen ' _ ) , geert jan van oldenborgh for being the main guinea pig during the early stages of the project , andr heck for writing a good quality tutorial , walter hoogland for allowing me to spend so much time on the project in its early stages when success had not been proven yet .a few packages are provided with the form distribution . 
these packages are useful not only for their field of applicability .they also serve as examples of form packages and programming techniques . in this paperwe took most of the examples from them . herewe give a very short description of what these packages are good for .more information is in a number of papers and in the future there may be some manuals .this packages deals with harmonic sums .a full description of these sums is given in ref .we give a short overview .the harmonic series is defined by in which .one can define higher harmonic series by with the same conditions on .the and the are referred to as the indices of the harmonic series . in the program we will use ` s(r(j1, ... ,jp),n ) ` .there is an alternative notation in which the indices are one of the values 0,1,-1 .a zero indicates that actually one should be added to the absolute value of the nonzero index to the right of the zero as in : the weight of a sum is sum of the absolute value of its indices . in the notation withthe 0,1,-1 it is equal to the number of indices .the harmonic sums with the same argument form a ( weight preserving ) algebra : the product of two sums with weights and respectively is a sum of terms , each with a single sum of weight .the routine ` basis ' in the package implements this property .example : .... f = - s(r(-3,2,3),n ) - s(r(-3,3,2),n ) + s(r(-3,5),n ) + 2*s(r(-1,2,2,3),n ) + s(r(-1,2,3,2),n ) - s(r(-1,2,5),n ) - s(r(-1,4,3),n ) - s(r(2,-4,2),n ) + s(r(2,-1,2,3),n ) + s(r(2,-1,3,2),n ) - s(r(2,-1,5),n ) + s(r(2,3,-1,2),n ) ; .... f = - s(r(-1,-3,2,3),n ) + s(r(-1,-2,-2,-1,3),n ) + s(r(-1,-2,-1,-2,3),n ) - s(r(-1,-2,2,-4),n ) + s(r(-1,-2,2,-3,-1),n ) + s(r(-1,-2,2,-2,-2),n ) .... + s(r(1,2,-1,-3,-2),n ) + s(r(1,2,-1,2,-3),n ) + 2*s(r(1,2,-1,3,-2),n ) + 3*s(r(1,2,-1,4,-1),n ) - 4*s(r(1,2,-1,5),n ) + s(r(1,2,3,-1,2),n ) ; .... when there are relations in addition to the regular algebraic relations .they allow all the sums in infinity to be expressed in terms of a rather small number of trancendental numbers of the euler zagier type . in order to obtain these expressionsone has to solve for each weight a number of equations .this number grows exponentially with the weight .the system has been solved exactly up to weight 9 and the file with the solutions is part of the distribution .it is about 20 mbytes . in the file` summer6.h ' only the solutions up to weight 6 have been included .the programs that solve these systems of equations are also part of the distribution . 
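The definitions at the start of this subsection did not survive the text extraction. Assuming the standard conventions of the harmonic-sums literature (which, to my understanding, the summer package follows), they read

\[ S_m(n) = \sum_{i=1}^{n} \frac{(\operatorname{sign} m)^{i}}{i^{|m|}}, \qquad S_{m_1,m_2,\ldots,m_p}(n) = \sum_{i=1}^{n} \frac{(\operatorname{sign} m_1)^{i}}{i^{|m_1|}}\, S_{m_2,\ldots,m_p}(i), \]

with \( S_{\emptyset}(n)=1 \) and weight \( \sum_j |m_j| \). The weight-preserving product property implemented by the ` basis ' routine is the quasi-shuffle ("stuffle") algebra of these sums. The following sketch (Python, not FORM code, and entirely independent of the package) builds the product recursively and checks one case numerically with exact rationals; it illustrates the algebra rather than the package's implementation:

....
from fractions import Fraction

def S(idx, n):
    """Evaluate a nested harmonic sum S_{m1,...,mp}(n) with nonzero integer
    indices; a negative index gives an alternating sign."""
    if not idx:
        return Fraction(1)
    a, rest = idx[0], idx[1:]
    sgn = -1 if a < 0 else 1
    return sum(Fraction(sgn**i, i**abs(a)) * S(rest, i) for i in range(1, n + 1))

def stuffle(X, Y):
    """Quasi-shuffle ('stuffle') product: returns (coefficient, index-tuple)
    terms such that S_X(n) * S_Y(n) = sum of c * S_Z(n)."""
    if not X:
        return [(1, tuple(Y))]
    if not Y:
        return [(1, tuple(X))]
    a, A = X[0], X[1:]
    b, B = Y[0], Y[1:]
    ab = (1 if a * b > 0 else -1) * (abs(a) + abs(b))   # signs multiply, magnitudes add
    out  = [( c, (a,)  + Z) for c, Z in stuffle(A, Y)]
    out += [( c, (b,)  + Z) for c, Z in stuffle(X, B)]
    out += [(-c, (ab,) + Z) for c, Z in stuffle(A, B)]
    return out

# check at n = 7 the familiar identity  S_1 * S_2 = S_{1,2} + S_{2,1} - S_3
n = 7
lhs = S((1,), n) * S((2,), n)
rhs = sum(c * S(Z, n) for c, Z in stuffle((1,), (2,)))
assert lhs == rhs
....

Note that every term produced by stuffle has weight equal to the sum of the weights of the two factors, which is the weight-preservation property quoted in the text.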
in the case of weight 9there were more than 40000 equations in 13122 objects .the divergent sums were part of this because they can all be expressed properly into combinations of the basic divergence .this database of sums is rather important for whole categories of integrals as we will see in the harmpol and meltran packages .it should be noted that at the moment of writing , no constructive algerithms are known to determine these constants one by one .the only ` algorithm ' that comes close is the numerical evualuation of each individual sum , making an ansatz about which constants form a minimal set and then fitting all sums with a special numerical program .this has been done for and in a restricted way for some higher values of .the weight one harmonic polylogarithms are given by : for the higher weights we first define then with an array with w zeroes while if the sequence of w elements 0,1,-1 is not equal to the weight of the harmonic polylogarithms is the number of indices .the argument can in principle take any complex value , but depending on the indices there may be a cut for real values or .these cuts are due to powers of or .such powers can be extracted , resulting in combinations of these logarithms and -functions without these cuts .this will be needed for the mellin transforms .it should be noted that we can use also the notation of the harmonic sums for the indices ( with values beyond the set 1,0,-1 ) .also the harmonic polylogarithms form a weight preserving algebra .in addition there are extra relations for .one has to be rather careful with the divergent elements at .the powerseries expansion of contains harmonic sums .the algebraic relations for general are related to the relations of the harmonic sums and the extra relations in are related to the general algebraic relations for the harmonic sums .the transformations , and give some interesting relations that allow for a reasonably fast numerical evaluation of the function over most of the complex plane ( excepting the regions around ) .the values in can be expressed in terms of the harmonic sums in . because a weight w -function is an integral over a denominator and a weight -function this gives us whole categories of integrals that were till now extremely hard to solve .the results will be in terms of a limited number of trancendental numbers , but the relations for at will allow them to be expressed in terms of relatively fast converging series .hence these numbers can be evaluated to any precision when the need arises ..... # - # include harmpol.h off statistics ; .global local f = h(r(1,0,1),x)*h(r(-1,1,-1),x ) ; # call hbasis(h , x ) repeat i d h(r(?a , n?!{1,0,-1},?b),x ? ) = h(r(?a,0,n - sig_(n),?b),x ) ; .sort on statistics ; print + f ; .end .... f = h(r(-1,1,-1,1,0,1),x ) + h(r(-1,1,0,1,-1,1),x ) + 2*h(r(-1,1,0,1,1,-1),x ) + 2*h(r(-1,1,1,-1,0,1),x ) + 2*h(r(-1,1,1,0,-1,1),x ) + 2*h(r(-1,1,1,0,1 , -1),x ) + h(r(1,-1,0,1,-1,1),x ) + 2*h(r(1,-1,0,1,1,-1),x ) + h(r(1,-1,1,-1 , 0,1),x ) + h(r(1,-1,1,0,-1,1),x ) + h(r(1,-1,1,0,1,-1),x ) + h(r(1,0,-1,1 , -1,1),x ) + 2*h(r(1,0,-1,1,1,-1),x ) + h(r(1,0,1,-1,1,-1),x ) ; .... the conversion of combinations of denominators , logarithms and polylogarithms in terms of -functions is usually a straightforward exercise in integration by parts , provided that it can be done at all .we give one example of an integral .... 
# - # include harmpol.h # call htables(6 ) cf int ; off statistics ; .global local f = ln_(1+x)^2*ln_(1-x)^2*ln_(x)/x*int(x,0,1 ) ; i d ln_(1+x ) = h(r(-1),x ) ; i d ln_(1-x ) = -h(r(1),x ) ; i d ln_(x ) = h(r(0),x ) ; # call hbasis(h , x ) i d h(r(?a),x)/x*int(x,0,1 ) = h(r(0,?a),1)-h(r(0,?a),0 ) ; i d h(r(?a , n?{1,-1},?b),0 ) = 0 ; * provided there is a nonzero index # do i = 1,6 i d h(r(n1?, ...,n`i'?),1 ) = htab`i'(n1, ... ,n`i ' ) ; # enddo .sort on statistics ; print + f ; .end .... this package does the mellin transforms and inverse mellin transforms between the harmonic polylogarithms and the harmonic sums .it allows calculations to be done in either space and then converted to the other , whatever is easiest .the definition of a mellin transform is : in which the function is supposed to be finite for when the factor is present .hence we see that the first step in the mellin transformation of a -function is the extraction of potential powers of .the mellin transform of a -function , divided by either or can be solved recursively and the result is a combination of harmonic sums in the mellin parameter .there is a one to one correspondence between the mellin transform of -functions of weight , divided by either or and harmonic sums of weight ( the mellin transform can also contain sums of a lower weight , but of the highest weight there is only a single term ) .hence the inverse mellin transform can also be obtained rather easily .the only complicating factor is that there will be terms in or .currently these can only be combined into a ` minimal ' representation if the weights are at most 9 .of course it should be clear that with these techniques one can do many integrals involving combinations of powers of , denominators with , , and polylogarithms with a large variety of arguments .this is usually a soft spot in most computer algebra systems .this package handles group invariants .the theory behind this is given in ref .all internal variables in this package have a name that starts with the characters col in order to avoid potential name conflicts . with this packageone can , for instance , compute the group invariants that are needed in complicated feynman diagram calculations in such a way that the values for a specific representation of a specific group can be substituted at a later stage .this makes calculations more general .a simple example of the use of this package is the calculation of a diagram with 14 vertices each having three legs , connected in such a way that the smallest loop contains 6 vertices .this diagram is unique .if the vertices all belong to the adjoint representation the program would be : .... # include color.h off statistics ; .global g g14 = colf(coli1,coli2,coli3)*colf(coli1,coli4,coli5 ) * colf(coli2,coli6,coli7)*colf(coli3,coli8,coli9 ) * colf(coli4,coli10,coli11)*colf(coli5,coli12,coli13 ) * colf(coli6,coli14,coli15)*colf(coli7,coli16,coli17 ) * colf(coli8,coli18,coli19)*colf(coli9,coli20,coli21 ) * colf(coli10,coli21,coli15)*colf(coli13,coli19,coli14 ) * colf(coli17,coli11,coli18)*colf(coli12,coli16,coli20 ) ; sum coli1, ... ,coli21 ; .sort # call docolor .sort on statistics ; print + f + s ; .end .... this was run on a pentium 300 .note that the customary f has to be given as colf etc .similarly the regular is given as colca .the values for the invariants for a given group can be found in the paper . as an examplewe give here the value of e8 : .... 
* * numbers for exceptional algebras * s [ na+2],eta ; i d cold644(colpa1,colpa2,colpa3 ) = 175/48*colca^7*colna/[na+2]^2 ; i d cold444(colpa1,colpa2,colpa3 ) = colca^6*colna/[na+2]^2 * ( 125/27 + 125/216*colna ) ; * * and for e8 * i d colna = 248 ; i d 1/[na+2 ] = 1/250 ; i d colca = 30*eta ; print + f + s ; .end .... this is a pure particle physics package .it concerns the evaluation of massless propagator type integrals with up to three loops .in addition there are routines for the expansion in ( fixed ) mellin moments .computation time for higher mellin moments can be rather large .in addition the disk requirements increase exponentially with the number of the moment .one needs to specify the topology of the diagram and a few other options .a simple example is given by .... # - # define scheme " 0 " # define topo " la " v p1, ... ,p8,q ; # include mincer.h off statistics ; .globall dia = 4*p7.p8 ^ 2/p1.p1/p2.p2/p3.p3/p4.p4 ^ 2/p5.p5/p6.p6/p7.p7/p8.p8*q.q ; .sort # call integral(`topo ' ) .sort on statistics ; multiply ep^3 ; i d ep = 0 ; print ; .end .... for more details one should have a look at the paper that comes with the package .this mincer package has been in use by the author for 9 years now . over the yearsit has steadily been improved and optimized .the code is basically code for version 2 of form .it is this package that was used to test the parallel version .l. euler , novi comm .petropol . 20( 1775 ) 140 .d. zagier , first european congress of mathematics , volume ii , birkhuser , boston , 1994 , pp .497 - 512 .borwein , d.m .bradley , d.j .broadhurst , hep - th/9611004 , the electronic journal of combinatorics , vol .2 ( wilf fetschrift ) , 1997 , # r5 ( http://www.combinatorics.org/volume_4/wilftoc.html ) .
|
version 3 of form is introduced . it contains many new features that are inspired by current developments in the methodology of computations in quantum field theory . a number of these features is discussed in combination with examples . in addition the distribution contains a number of general purpose packages . these are described shortly .
|
dispersal strategies occur over both short and long spacial scales . at all scales, it has been suggested that dispersal is a bet - hedging or risk - spreading strategy used by organisms to deal with heterogeneous , stochastic environments .however , dispersal and movement by individuals also have more concrete consequences for populations .it allows them to utilize new resources and areas , it connects separate populations within a metapopulation , and it may help maintain population and metapopulation stability and decrease extinction risk .dispersal into novel environments can also result in local adaptations and speciation .linyphiid , or money , spiders are one example of an animal that employs both short and long distance dispersal strategies . for money spiders ,long distance dispersal occurs as a mostly passive process known as ballooning . during ballooning ,the spider is able to float within air currents , suspended by a single strand of silk .nearly all linyphiid species have been observed ballooning , although ballooning propensity varies between species .although it is unknown how far a spider can travel by this method , observations of spiders ballooning over the ocean , far from land , place some bounds on what is possible .linyphiid spiders prefer to live in agricultural areas , such as field or pasture land , where they predominantly feed on aphids , although some species are generalist predators .since they are able to balloon into areas that have been disturbed by agricultural processes , it has also been suggested that linyphiid spiders may be important for controlling outbreaks of pests in these areas .however , the spiders are themselves sensitive to agricultural activities , such as harvesting or pesticide applications .additionally , since the early 1970s , observations indicate that populations of many linyphiid species have been decreasing , possibly due to climate change .since this decline seems to correlate most strongly with a reduction in days where the weather is appropriate for ballooning , the difference in population outcomes between species may be related to differing dispersal propensities .more specifically , particular dispersal strategies may allow some species to better cope with the agricultural landscape , which is characterized by a heterogeneous environment and fairly frequent high - mortality `` catastrophes '' .however , it is difficult to observe the details of both dispersal and life histories of the spiders directly , so another approach is needed .various models of spider ballooning have been developed . first developed a simple one dimensional fluid dynamics model of a single spider , and more recently proposed a stochastic model of the process in a turbulent flow . proposed a statistical model for the distances travelled by money spiders in different weather conditions parameterized with data from observations of spiders collected during ballooning .this model indicates that these spiders may be able to travel more than 30 km within a single day , which is within observed bounds .however , none of these models address the population consequences for this kind of very long distance dispersal .there have been previous models that have been constructed to address how dispersal strategies interact with life history strategies and field disturbances to influence linyphiid population levels . 
developed a very detailed individual based model ( ibm ) for one linyphiid species , erigone atra , within a two dimensional landscape .their model includes details of landscape dynamics ( including crop growth and weather , as well as different types of disturbance ) , stage structured life histories , and environmentally cued dispersal .they primarily focus on how variation in specific landscape activities ( such as crop rotation ) and landscape compositions effect population sizes .however , the detail and specificity of this model has drawbacks .many of the conclusions may not be generalizable to other species , and the shear complexity and computational power needed for this type of model can make exploration of the possible behaviors of this system much more difficult . developed a simpler one dimensional , deterministic model of a spider population composed of `` dispersers '' and `` non - dispersers '' in an agricultural landscape .however , like the model , the specificity of this model , particularly the use of very specific deterministic disturbances , makes it difficult to draw general conclusions about how metapopulation persistence is impacted by factors such as dispersal and life history strategies . in this paperi examine a simple stochastic model of a metapopulation of ballooning spiders within a heterogeneous environment .the primary goal of the study is to understand how dispersal strategies impact long term population persistence in the face of high levels of habitat disturbance and mortality .i approach the problem in the spirit of a population viability analysis , determining how populations characterized by different life history parameters and dispersal propensities may be more or less likely to go extinct within 110 years when faced with varying levels of catastrophic events .this time horizon is used as it would be a reasonable time frame for conservation targets .i begin by introducing the model in section [ sec : model ] , followed in section [ sec : sims ] by the simulation methods used to explore the model . in section [ sec : cart ] , i introduce classification and regression trees ( cart ) , which are used to analyze the simulation output .results for a baseline case and three variations are presented in section [ sec : results ] .section [ sec : disc ] concludes the paper with a short discussion .the model presented here is comprised of three portions : a population model with demographic stochasticity within an agricultural field ; a data driven dispersal model ; and a stochastic environment , incorporating field level catastrophes . the model is formulated as a quasi individual based model ( ibm ) . whereas a full ibm would follow each particular individual over their whole lifetime , here i employ a novel approach whereby individuals are only followed during the dispersal process population dynamics and catastrophes are not individual - based .this approach has computational benefits , especially when large numbers of individuals are being modeled .the landscape considered in this model is comprised of a one dimensional ribbon consisting of patches or fields , each of size , with periodic boundary conditions .each simulated landscape consists of 50 virtual fields , numbered sequentially from 1 to 50 .all fields are the same fixed size , km , and have the same fixed carrying capacity ( i.e. 
, i assume that carrying capacity is constrained by space availability within a field ) .in discrete time the number of individuals in the patch changes as where are the number of deaths in the patch , the number of births , the number of emigrants leaving the patch , and the number of spiders successfully immigrating to the patch .figure [ f : model ] shows a diagram of the model flow at each time step , which is explained in more detail below .each patch or field is one of types .field types are characterized by their `` quality '' , i.e. , by the population birth and death rates within the field .more specifically , spiders in high quality fields could reproduce more quickly or are less likely to die from intrinsic mortality than those in poor quality fields .births and deaths are modeled using simple stochastic logistic growth such that density dependence acts to regulate reproduction and recruitment into the adult population . in this case , at each time step a spider in field , of type , dies with a probability and produces a single ( adult ) offspring with probability where is related to the traditional carrying capacity by . in other words , i assume that only reproductive rates ( and not death rates ) are density dependent . thus the expected number of births ( here , recruited adults ) and deaths in a single field are given , respectively , by where are the number of spiders that do not disperse at time .two hypotheses of dispersal are considered in the model : density independent and density dependent .thus the number of spiders emigrating or dispersing from the field at time can either be a fixed proportion of the population in the patch at time , or can vary with population density as where is the carrying capacity in the field , and where we constrain .thus , as the proportion of individuals dispersing is the same for both the density dependent and density independent cases . when spiders are less likely to disperse when there is density dependent dispersal than density independent dispersal , and vice versa for the case when .during dispersal , emigrants avoid mortality in the patch but die with probability .the mortality during dispersal could include multiple factors such as predation or desiccation .however , for simplicity here i assume a constant daily mortality rate while dispersing .spiders also can not reproduce as they disperse .the number of spiders that immigrate into the patch is given by the sum of the spiders that leave all the fields ( ) , survive dispersal , and consequently arrive in the field .the dispersal kernel describes the probability that a spider starting in field lands in field on day given parameters , . the data - driven model used to generate the dispersal dynamicsis presented in section [ disp_mod ] .spiders will only attempt to disperse under favorable weather conditions .i assume that daily conditions are good for dispersal with some fixed probability , . in other words , out of days ,the number of days with conditions favorable for dispersal , , is binomial with success probability : . whenever conditions are favorable the numbers of spiders that attempt to disperse are given by equation ( [ eq : e1 ] ) or ( [ eq : e2 ] ) , and when conditions are not favorable in every field . in the fields , `` catastrophes '' , i.e. 
, mortality events that wipe out significant proportions of spiders in a particular field , can occur .a catastrophe with mortality rate occurs on a given day with probability .catastrophes occur after dispersal has begun ( so that dispersing spiders can escape catastrophes ) but before births or ( intrinsic ) deaths .in addition , all parameters that determine dispersal behaviors or population dynamics are fixed and constant through time .i am primarily interested in how variation of four parameters , given the other parameters as fixed ( see table [ tb : params ] and section [ disp_mod ] ) , influences the probability of extinction .specifically i look at : the probability of catastrophe , ; the probability of weather suitable for flying , ; the probability that a spider disperses in good weather , ; and the mortality rate experienced during flying , .in addition , the dispersal probability can be either density dependent or density independent .the catastrophe rate , , is regarded here as being primarily human induced mortality , for example due to application of pesticides in a field .two of the parameters , and , can be viewed as environmentally determined parameters .the parameter , which may be decreasing for these spiders due to climate change , constrains the opportunities for dispersal into new habitats , and the ability for spiders employing any dispersal strategy to escape local mortality events .the mortality rate during dispersal , , includes mortality from various factors , such as predation and desiccation .thus the spiders must weigh the risks of dispersing against the risks of catastrophes or benefits of reproduction if remaining in a field .the final parameter , , together with the options for density dependence or not , thus determine what i consider the evolved `` dispersal strategy '' ..parameters and their values or ranges in for the baseline simulations[tb : params ] [ cols="^,^,^",options="header " , ] i begin with the baseline case , as described in section [ sec : bl ] .the pruned tree for the full data set is shown in figure [ fig : bigtree ] .even with the appropriate pruning the tree is fairly complicated .first , we notice that the probability of catastrophe , , is the most important determinant of extinction probability , as the initial branching depends on , and there are more branchings in the tree that depend on than any other factor .the tree can be approximately viewed as having three regions with low ( ) , medium ( ) , and high ( ) probability of catastrophe , that loosely correspond to low , medium , and high probability of extinction within 110 years . ) .the splits from each node follow the rule left = true .density values of indicate density independent and dependent dispersal , respectively .leaves with extinction probability of are indicated with circles , and those with with squares . ] for most of the parameter space , the particular dispersal strategy employed , ( i.e. 
, the probability of dispersing given good weather , , and density dependent or independent dispersal ) is not particularly important .when the risk of catastrophe is low , as long as the inflight mortality is low enough ( ) , populations employing any dispersal strategy are predicted to have a low probability of extinction .however , when in - flight mortality is higher than this , populations are only likely to persist when catastrophe levels are are low ( ) and , simultaneously , either dispersal propensity is not too high ( ) or density dependent dispersal is used ( which reduces the effective dispersal propensity as long as the population is below the carrying capacity ) . in other words , when dispersal mortality is very high , frequent dispersal increases the probability of population extinction , as one might expect . at intermediate catastrophe levels ( here )populations only persist in a very narrow range of circumstances where the probability that a day has good weather for dispersal is greater than 44% ( more than 160 days per year ) and , simultaneously , the inflight mortality rate is not too high ( ) . in other words , for the population to persist under intermediate disturbance , there need to be adequate opportunities for at least some of the spiders to disperse and survive to reach a new field . outside of this area of the parameter space , the probability of a population persisting , particularly for catastrophe levels of over , is very low ( 5 - 20% ) .figure [ fig : set2 ] shows the results for the best / worst scenario , which is characterized by high variance in the birthrates .first we notice that , as in section [ sec : bl.results ] , the strategy employed by the spiders is not particularly important for determining the extinction probability ( i.e. , the tree only has a few splits , near the leaves , that depend on either or `` dense '' ) . againthe most important consideration is the level of catastrophe .however , notice that in this case the population is actually less sensitive to low levels of disturbance , where the population has a very good chance of persisting ( extinction probabilities of to 0.1 ) , as long as .this is likely due to the increased abundance of high quality fields . like the baseline case ,the exception to this is when both in - flight mortality and dispersal propensity are high ( and , respectively ) and individuals utilize density independent dispersal . in this case, populations have a greater than 70% chance of going extinct . ) the splits from each node follow the rule left = true .density values of indicate density independent and dependent dispersal , respectively .leaves with extinction probability of are indicated with circles , and those with with squares . ]figure [ fig : set3 ] shows the results for the many moderate scenario , characterized by lower variance in the birthrate .the resulting tree is nearly identical to the best / worst scenario .however , in this scenario , populations are slightly more likely to go extinct ( extinction probability of ) when both the probability of catastrophes and inflight mortality rates are low ( , ) compared to the best / worst case ; instead the extinction probabilities more similar to the baseline . )the splits from each node follow the rule left = true .density values of indicate density independent and dependent dispersal , respectively .leaves with extinction probability of are indicated with circles , and those with with squares . 
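To make the simulation procedure behind these trees concrete, here is a minimal per-day sketch of the landscape update described in the model section (a one-dimensional ribbon of fields with periodic boundaries). Several choices below are mine, made only for illustration: binomial draws for births, deaths, emigrants and catastrophe survivors (the paper specifies expected counts within a quasi individual-based scheme), a toy geometric-decay dispersal kernel standing in for the data-driven kernel, a single field type, the density-independent dispersal variant only, and arbitrary parameter values.

....
import numpy as np

rng = np.random.default_rng(0)

F, K = 50, 1000               # number of fields, per-field carrying capacity
b = np.full(F, 0.35)          # daily per-capita birth probability (one field type here)
d = np.full(F, 0.10)          # daily per-capita death probability
q, pf, mu_f = 0.30, 0.05, 0.5 # P(good weather), dispersal propensity, in-flight mortality
c, m = 0.10, 0.90             # P(catastrophe per field per day), catastrophe mortality

def kernel_row(src, decay=0.5):
    """Toy dispersal kernel on the ring: geometric decay with distance
    (a stand-in for the data-driven kernel of the paper)."""
    k = np.arange(F)
    dist = np.minimum(np.abs(k - src), F - np.abs(k - src))
    w = decay ** dist
    w[src] = 0.0
    return w / w.sum()

def step(N):
    N = N.copy()
    arrivals = np.zeros(F, dtype=int)
    # 1. emigration on good-weather days (density-independent variant)
    if rng.random() < q:
        E = rng.binomial(N, pf)
        N -= E
        survivors = rng.binomial(E, 1.0 - mu_f)   # in-flight mortality
        for src in np.nonzero(survivors)[0]:
            dest = rng.choice(F, size=survivors[src], p=kernel_row(src))
            np.add.at(arrivals, dest, 1)
    # 2. field-level catastrophes hit the residents; dispersers have already left
    hit = rng.random(F) < c
    N[hit] = rng.binomial(N[hit], 1.0 - m)
    N += arrivals
    # 3. density-dependent births, density-independent deaths
    births = rng.binomial(N, b * np.clip(1.0 - N / K, 0.0, 1.0))
    deaths = rng.binomial(N, d)
    return N + births - deaths

N = np.zeros(F, dtype=int); N[0] = 50     # small founding population in field 0
for day in range(365):
    N = step(N)
print(N.sum(), (N > 0).sum())             # total population and occupied fields
....

Whether surviving immigrants land before or after a catastrophe strikes their destination field is not fully pinned down by the text; the ordering above lets dispersers escape the event in their source field, which is the property the model section emphasizes.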
]( mean birthrate of , or 57% of the baseline mean ) and death rate parameter ( death rate 2.5 times higher than the baseline ) and with a uniform distribution of field types as shown in figure [ fig : fields ] ( a ) .( see section [ sec : p2 ] ) the splits from each node follow the rule left = true .density values of indicate density independent and dependent dispersal , respectively .leaves with extinction probability of are indicated with circles , and those with with squares . ]figure [ fig : set1 ] shows the results for the final scenario explored in this paper , the population ii case . as one would expect , since the intrinsic death rate is higher than the previous cases and the mean birth rate is lower , much more of the parameter space results in the extinction of the population .the threshold level of catastrophe that results in a low probability of extinction is , which is quite a bit lower than any of the previous cases .even with this low level of catastrophe , the population is still only likely to persist if the dispersal mortality is low enough ( ) , the probability of dispersing on a day with good weather is not too high ( ) , or density dependent dispersal is utilized .the results from all of the scenarios explored above show many similarities .for instance , much of the tree structures , such as primary splits depending on , correlations between and , and density dependence only being important in limited circumstances . in each case, the probability of catastrophe determines the probability of extinction within 110 years more than any other factor .the similarities between the left - most branch in all four of the scenarios also indicates that the product of in - flight mortality rate and dispersal propensity , which together determine the expected proportion of individuals within a field that will die on a day with good conditions for dispersal , may be an important threshold for determining extinction probability for a given catastrophe level .however the quantitative results ( especially locations of splits ) exhibit more variation .in particular , the results indicate that the values of population parameters ( birth and death rates ) are considerably more important for determining population persistence at a given catastrophe level than the relative abundance of the different types of fields , which in turn has a greater impact than changes in the dispersal strategy ( dispersal propensity and density dependence ) .the results of the model presented in this study suggest that , although a general dispersal ability is important for the persistence and growth of linyphiid spider populations , the exact details of this dispersal strategy , i.e. 
, whether dispersal is density dependent and the particular probability of dispersing on a day with appropriate weather , are less important than other factors in determining persistence in the face of field level catastrophes .instead , actual catastrophe probability seems to be the most important factor in determining the extinction probability , given landscape and life history parameters .as demonstrated , one may observe thresholds in the catastrophe level where the population switches from being very unlikely to being very likely to go extinct .for instance , results for populations with life histories and landscape distributions described by the parameters in the baseline simulation suggest that if the daily probability of a catastrophe is greater than 22% , then there is greater than 80% chance of extinction within 100 years . otherwise , there is less than 10% chance that the population would go extinct. the baseline results also make it apparent that reducing the catastrophe level further can help to mitigate the effects of mortality during dispersal .although the model presented here fairly simple , it is able to capture patterns that have been observed in more complex models .for instance , the model developed by exhibited similar thresholding behavior in population size / persistence with catastrophic events , specifically landscape wide pesticide application ( all fields affected ) .they found that if all the fields were of the same type , the population could persist ( i.e. , the population was ) if the field was sprayed no more than once per year with a pesticide that caused 90% mortality . by including a second field type that is less ideal for habitat , but is not sprayed , the population remains large even with higher frequency of pesticide application in other fields . in the current studywe find a similar increase in persistence by limiting the average catastrophe rate across fields , instead of explicitly including refuge habitats .this indicates that for highly dispersive species , undisturbed land for refuges may not be as necessary for population persistence as lower mean disturbance rates , although providing refuges may be an efficient method for reducing the mean disturbance rate .this is a similar result to one reported by who found that some habitat needed to be available for spiders at all times , although the habitat did not need to be permanent . on the other hand , by using a more simple model for some aspects , such as the life history , i have been able to focus more on the more general question of the relative importance of dispersal strategy compared to other population and landscape factors for population persistence in the face of catastrophes .although many of the qualitative results of this model did not depend upon the life history and landscape parameters , the quantitative predictions and , more importantly , the threshold catastrophe levels do depend upon the assumptions about the distribution of field quality in the landscape , reproductive rates , and baseline mortality . 
on the other hand ,the particulars of the dispersal strategy adopted by the spider ( such as density dependence or dispersal propensity ) were not particularly important under most circumstances .this is in contrast to , who found a fairly strong dependence between population size / extinction and the proportion of individuals dispersing .this difference is could be due to a number of different factors .one possibility is that this the effect of dispersal is less apparent in the current model due to significant stochasticity in all of the model processes .another is that the difference could be an effect of the stage structured population dynamics , which may result in the particular amount of dispersal being more important in recovery from a catastrophe . a third possibility is related to the fact that the optimal proportion of dispersers in the study was also strongly related to the proportion of non - habitat patches within the landscape .this factor changes the risk of mortality while dispersing , while simultaneously altering the population reproduction parameters , and seems to be more important in determining the maximum population level than the other factors they explored .a final possibility is that the difference could also be due to the fact that assume that each individual spider is either a `` disperser '' or `` non - disperser '' for its entire life - cycle .this factor may also be part of why , and draw conflicting conclusions about the effect of field rotations on the population .if a portion of the population are `` non - dispersers '' , then rotating a field would effectively increase the catastrophe level for a large portion of the population , since these individuals can not escape a dramatic change in mortality due to the rotation by dispersing .although i do not deal with rotation explicitly in this model , i expect it would have a similar , though mild , effect here , as long as the rotations do not change the overall distribution of fields in the landscape dramatically .since there is such a strong interaction between the effect of population parameters and catastrophe level , the current study suggests that the current patterns of decline are likely to be due to a combination of both changing life histories and agricultural practices ( field composition and catastrophe level ) . in order to preserve or increase spider populations in the future, we may want to suggest conservation measures that seek to curb the levels of human induced catastrophes in the environment .the observed thresholding behavior in the model indicates that the development of a simple guideline may be possible . for the parameters explored here the thresholds were in the 20% range . 
in other words , 20% of fields experience catastrophic mortality on a given day , and in a single field we expect nearly 10 weeks worth of high mortality days each year .although pesticides applied to fields can remain toxic to spiders for more than two weeks after application , and other types of disturbances also cause significant mortality , the predicted threshold seems to be fairly high .however , this value depends fairly strongly on model parameters , especially the population birth and death rates .thus , more observational data on the reproductive capabilities of target species within various types of agricultural fields , and how these may be affected by climate change , would be most useful for estimating this threshold .data gathered to estimate different dispersal behaviors / propensities or changes in the proportion of days that are suitable for ballooning would be less useful . in the current simulations ,the effect of a reduction in the number of habitat " fields has not been explored .this is partly because the effect of reducing carrying capacity , , on metapopulations is fairly well understood .the addition of `` non - habitat '' fields at random into the landscape , without reducing the total , would be equivalent to raising the level of mortality during dispersal .the current model focuses on the case where there are no spatiotemporal correlations in either catastrophes or reproductive schedules .it may be that these kinds of correlations could reduce the tolerance of a population to disturbance , or make other dispersal strategies , such as ones signalled by external factors , more important . including these factorsthis would be an important aspect of future work .l. r. j. was funded by bbsrc grant d20476 as part of the national centre for statistical ecology .thanks to : george thomas for unpublished data and biological expertise on linyphiid life histories ; bobby gramacy for advice on statistical methods ; ian carroll for comments on an earlier draft ; and two very helpful and thorough reviewers .charles darwin ._ journal of reseraches into the natural history and geology of the countries visited during the voyage of hms beagle around the world _ , chapter viii .everyman s library , dent , london , 1906 .l. r. johnson , c. f. g. thomas , p. brain , and p. c. jepson .correction to : aerial activity of linyphiid spiders : modelling dispersal distances from meteorology and behavior ._ journal of applied ecology _ , 2007 .j. michael reed , l. scott mills , john b. dunning jr ., eric s. menges , kevin s. mckelvey , robert frye , steven r. beissinger , marie - charlotte anstett , and philip miller . emerging issues in population viability analysis . _ conservation biology _ , 160 ( 1):0 719 , februrary 2002 .a. m. reynolds , d. a. bohan , and j. r. bell . ballooning dispersal in arthropod taxa with convergent behaviours : dynamic properties of ballooning silk in turbulent flows . _ biology letters _ , 20 ( 3):0371373 , 2006 .url doi:10.1098/rsbl.2006.0486 . c. f.g. thomas , e. h. a hol , and j. w. everts . modelling the diffusion component of dispersal during recovery of a population of linyphiid spiders from exposure to an insecticide . _ functional ecology _ , 40 ( 3):0 357368 , 1990 . c. f. g. thomas , s. brooks , s. goodacre , g. hewitt , l. hutchings , and c. woolley .aerial dispersal by linyphiid spiders in relation to meteorological parameters and climate change .technical report available online , 2006 .url www.statslab.cam.ac.uk/steve/mypapers/thobghhw06.pdf .
|
linyphiid spiders have evolved the ability to disperse long distances by a process known as ballooning . it has been hypothesized that ballooning may allow populations to persist in the highly disturbed agricultural areas that the spiders prefer . in this study , i develop a stochastic population model to explore how the propensity for this type of long distance dispersal influences long term population persistence in a heterogeneous landscape where catastrophic mortality events are common . analysis of this model indicates that although some dispersal does indeed decrease the probability of extinction of the population , the frequency of dispersal is only important in certain extremes . instead , both the mean population birth and death rates , and the landscape composition , are much more important in determining the probability of extinction than the dispersal process . thus , in order to develop effective conservation strategies for these spiders , better understanding of life history processes should be prioritized over an understanding of dispersal strategies .
|
the naming game was first proposed by stells to describe the emergence of conventions and shared lexicons in a population of individuals interacting through successive conversations , but a number of variants of the model have also been introduced and studied numerically by statistical physicists , and we refer to section v.b of for a review of these different variants .the reason for the popularity of the naming game in the physics literature is that it is similar mathematically to traditional models in the field of statistical mechanics .the model studied in this paper is a biased version of the spatial naming game considered by baronchelli et altheir system consists of a population of individuals located on the vertex set of a finite connected graph that has to be thought of as an interaction network .each individual is characterized by an internal inventory of words that are synonyms describing the same object .all inventories are initially empty and evolve through successive conversations : at each time step , an edge of the network is chosen uniformly at random , which causes the two individuals connected by the edge to interact .one individual is chosen at random to be the `` speaker '' making the other individual the `` hearer '' .if the speaker does not have any word to describe the object then she invents one , whereas if she already has some words then she chooses one at random to be passed to the hearer .the conversation results in the following alternative : if the hearer already has the word pronounced in her internal inventory then this word is selected as the norm by both individuals all the other words are removed from both inventories otherwise the hearer adds the word pronounced to her inventory . based on numerical simulations , baronchelli et al . studied the maximum number of words present in the system as well as the time to global consensus , i.e. , the time until all inventories consist of the same single word .in contrast , we use the naming game to study whether a new word can spread into a population that is already using another word as a convention , i.e. , we assume that initially all the inventories reduce to the same single word , say word , except for one individual who also has another word in her inventory , say word . under the symmetric rules of the naming game ,the probability that becomes eventually the new convention tends to zero as the population size goes up to infinity so we look at biased versions of the naming game in which each word is attributed a fitness . in our model , the fitness of each word measures the fitness of each individual , that is how likely they are selected as a speaker rather than hearer , and also how likely each word is selected to be pronounced by bilingual individuals , i.e. , individuals who possess both words in their internal inventory .another significant difference between this article and previous works about the naming game is that it provides a rigorous analysis of the model on both finite and infinite graphs rather than results based on numerical simulations which are unavoidably restricted to finite graphs .also , we describe the dynamics in continuous time rather than discrete time , i.e , we assume that conversations occur at rate one along each edge of the graph , in order to have a model well defined on finite and infinite graphs . to describe our biased version of the naming game more formally , we let and denote the fitness of word and word , respectively , and set for all . 
note in particular that the average fitness represents the fitness of bilingual individuals .in each interaction , the individual playing the role of the speaker is chosen at random with probability her fitness divided by the overall fitness of the pair : when the neighbors are in state and , the individual chosen to be the speaker is the individual in state with probability . similarly , given that a bilingual individual is chosen as the speaker , the conditional probability that word is pronounced is equal to the relative fitness . in particular , each edge becomes active at rate one , which results in the following possible transitions for the states at the vertices connected by the edge : note that , when the fitnesses are equal , one recovers the transition probabilities of the unbiased naming game described above .we formulate the dynamics using two parameters to have natural notations that preserve the symmetry between both words , but we point out that the long - term behavior of the process only depends on the ratio .* mean - field model . * before stating our results for the spatial stochastic model , we look at its nonspatial deterministic mean - field approximation , i.e , the model obtained by assuming that the population is well - mixing .this results in the following system of differential equations : where denotes the frequency of type individuals for .the mean - field model has two trivial equilibria , namely which correspond to the configuration in which all individuals are of type and the configuration in which all individuals are of type , respectively .we say that word can invade word in the mean - field model whenever the system starting from any initial state different from converges to the trivial equilibrium .regardless of the ratio , the frequency of type individuals might decrease because the boundary is repelling , but looking instead at the difference between the frequency of individuals using word and word gives which is positive for all when and .this implies that there is no equilibrium other than the two trivial equilibria and that word can invade word for all .this condition is sharp in the sense that is locally stable when .indeed , the jacobian matrix of the system of differential equations at point reduces to where is a key quantity that will appear again later .the eigenspace associated with the eigenvalue zero is generated by the vector which is not oriented in the direction of the two - simplex containing the solution curves .the other two eigenvalues are which are both negative when .in particular , for all , the equilibrium is locally stable , therefore word can not invade .note that the obvious symmetry of the model also implies that both trivial equilibria are locally stable when .numerical simulations of the mean - field model suggest that , in this case , there is an additional nontrivial fixed point which is a saddle point , therefore the system is bistable : for almost all initial conditions , the system converges to one of the two trivial equilibria ( see figure [ fig : mean - field ] for pictures of the solution curves ) . * spatial stochastic model .* we now look at the spatial stochastic naming game . 
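before turning to the results for the spatial stochastic model , the conversation rule just described can be made concrete in code . the sketch below is a minimal python illustration , not the authors' implementation : the word labels "a" and "b" , the dictionary of fitnesses and the set encoding of inventories are our own choices , and inventories are assumed non - empty ( as in the invasion setting ) , so the invention step of the original game is omitted .

```python
import random

# hypothetical encoding: an inventory is a non-empty subset of {"a", "b"};
# `fitness` maps each word to a strictly positive fitness value.
def individual_fitness(inventory, fitness):
    # a bilingual individual carries the average fitness of her two words
    return sum(fitness[w] for w in inventory) / len(inventory)

def interact(inv_x, inv_y, fitness, rng=random):
    """One biased conversation across an edge, updating the two inventories in place.

    The speaker is chosen with probability proportional to her fitness, and a
    bilingual speaker pronounces each of her words with probability proportional
    to that word's fitness; the update then follows the usual naming-game rule
    (success collapses both inventories, failure teaches the hearer the word).
    """
    fx = individual_fitness(inv_x, fitness)
    fy = individual_fitness(inv_y, fitness)
    if rng.random() < fx / (fx + fy):
        speaker, hearer = inv_x, inv_y
    else:
        speaker, hearer = inv_y, inv_x
    words = sorted(speaker)
    word = rng.choices(words, weights=[fitness[w] for w in words])[0]
    if word in hearer:
        speaker.clear(); speaker.add(word)
        hearer.clear(); hearer.add(word)
    else:
        hearer.add(word)
```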
for the stochastic process, the main objective is to study the probability that word invades the population and is selected as a new linguistic convention when starting with a single bilingual individual and all the other individuals of type .note that , for non - homogeneous graphs , this probability depends on the location of the initial bilingual individual .also , letting be the state of the individual at vertex at time , and letting denote the law of the process starting with we define the probability of invasion as interestingly , our results indicate that the probability of invasion strongly depends on the topology of the network of interactions , suggesting that , on regular graphs , it is decreasing with respect to the degree of the network , a property that can not be captured by the mean - field model since it excludes any spatial structure . to begin with , we look at finite graphs .our first theorem extends the first result found for the mean - field model : word can invade word for all . [ th : finite - graphs ] assume that is finite and .then , .note that on finite graphs is always positive but might vanish to zero as the population size increases .in contrast , theorem [ th : finite - graphs ] shows more particularly that the probability of invasion is bounded from below by a constant that depends on the ratio but not on the number of vertices .the idea of the proof is to show first that a certain function of the number of type individuals and the number of type individuals is a supermartingale with respect to the -algebra generated by the process and then apply the optimal stopping theorem .our next result indicates that the invadability condition in theorem [ th : finite - graphs ] is sharp for complete graphs in the sense that the probability of invasion vanishes to zero as the population increases when . [ th : complete - graphs ] assume that is the complete graph with vertices .then , in the proof of theorem [ th : finite - graphs ] , the dynamics of the number of type and type is expressed as a function of the number of edges of different types .the complete graph is the only graph for which the number of edges of different types can be expressed as a function of the number of individuals of different types . also , one of the keys to proving the theorem is to use the fact that , on complete graphs , the number of individuals in different states becomes a markov chain .the combination of both theorems indicates that the dynamics of the naming game on complete graphs is well captured by the mean - field approximation .our next result shows more interestingly that this is not true for the process on the infinite one - dimensional lattice , suggesting that the critical value for the ratio of the fitnesses decreases as the degree of the graph decreases . [ th:1d ] in one dimension , whenever where the proof of theorem [ th:1d ] is based on the analysis of the interface between individuals in different states , which is only possible in one dimension .the bound is not sharp but our approach to prove the theorem together with the obvious symmetry of the model implies that the critical ratio is between and , which suggests that the critical ratio is equal to one : the probability of a successful invasion is positive if and only if .finally , we look at the naming game on regular lattices in higher dimensions . 
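the invasion probabilities appearing in the theorems above can also be explored by simulation . the sketch below reuses the `interact` function from the previous snippet , plants a single bilingual individual in an all - "a" population and runs the game to consensus ; choosing an edge uniformly at random at each step is exactly the jump chain of the rate - one poisson clocks , so the absorption probabilities are unaffected by the change of time scale . graph size , fitness ratio and number of trials are arbitrary illustration values .

```python
import random

def run_to_consensus(edges, n, fitness, bilingual=0, rng=random):
    """Start from all-"a" plus one bilingual vertex and run until consensus."""
    inv = [{"a"} for _ in range(n)]
    inv[bilingual] = {"a", "b"}
    while True:
        x, y = rng.choice(edges)
        interact(inv[x], inv[y], fitness, rng)
        if all(len(s) == 1 for s in inv) and len(set.union(*inv)) == 1:
            return next(iter(inv[0]))          # the word that became the convention

def invasion_probability(edges, n, fitness, trials=1000, rng=random):
    wins = sum(run_to_consensus(edges, n, fitness, rng=rng) == "b" for _ in range(trials))
    return wins / trials

n = 20
cycle = [(i, (i + 1) % n) for i in range(n)]
complete = [(i, j) for i in range(n) for j in range(i + 1, n)]
fitness = {"a": 1.0, "b": 1.3}                 # fitness ratio 1.3 in favour of word "b"
print("cycle    :", invasion_probability(cycle, n, fitness))
print("complete :", invasion_probability(complete, n, fitness))
```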
in this case , using a block construction to compare the process properly rescaled in space and time with oriented site percolation , it can be proved that the probability of invasion is positive for sufficiently large . [ th : lattice ] in any dimension , whenever is large enough . our approach can be improved to get an explicit bound for the critical value for but this bound is far from being optimal .we conjecture as in one dimension that the critical ratio is equal to one , which is supported by numerical simulations of the process .more generally , we conjecture that , on connected graphs in which the degree is uniformly bounded by a fixed constant , the critical value is equal to one in the sense that the probability of invasion is bounded from below by a positive constant that only depends on , in disagreement with the mean - field model .[ cols= " > , < , < , < " , ] in this section , we state some basic properties about the naming game that will be useful in the subsequent sections .a common aspect of all our proofs is to think of the process as being constructed graphically from independent poisson processes that indicate the time of the interactions , a popular idea in the field of interacting particle systems due to harris . in the case of the naming game, additional collections of uniform random variables must be introduced to also indicate the outcome of each interaction .more precisely , for every edge , we let * be the arrival times of a rate one poisson process , and * be independent uniform random variables over .collections of random variables attached to different edges are also independent .the process is then constructed as follows : at time , the states at and are simultaneously updated according to the transitions in the left column of table [ tab : coupling ] .since interactions involving both words can each result in two different outcomes depending on whether word or word is pronounced , the random variable is used to account for the probability of each outcome as indicated by the conditions in the middle column of the table where note that is the probability that word is pronounced in a conversation involving a bilingual individual and a type individual .one can easily check that the conditions in the table indeed produce the desired transition probabilities in . based on this graphical representation ,processes with different parameters or starting from different initial configurations can be coupled to prove important monotonicity results .the next lemma shows for instance a certain monotonicity of the naming game with respect to its initial configuration , which can be viewed as the analog of attractiveness for spin systems .this result will be useful in the proof of theorem [ th:1d ] . [ lem : attractive ] let and be two copies of the naming game . 
then , for all provided this holds for all .the result follows from a coupling of the two processes that we construct conjointly from the same graphical representation .that is , we assume that and that both processes are constructed from the same poisson processes and the same collections of uniform random variables .the construction given by harris , which relies on arguments from percolation theory , implies that , for any small enough time interval , there exists a partition of the vertex set into almost surely finite connected components such that any two vertices in two different components do not influence each other in the time interval .since the number of interactions in each component in the time interval is almost surely finite , the result can be proved for each of these finite space - time regions by induction .assume that for some arrival time . to prove that the previous relationship between both processes is preserved at time , we observe that the interaction between the individuals at and can result in ten different transitions depending on the state of both individuals .these transitions are listed in the left column of table [ tab : coupling ] and can be divided into two types : * the transitions that create an or remove a , which are labeled 2a5a , * the transitions that create a or remove an , which are labeled 2b5b . as previously mentioned , except for transitions 1a and 6b , every other pair of states for the neighbors can result in two possible transitions depending on whether word or word is pronounced during the conversation .the last column of the table indicates that for all possible simultaneous updates of both processes , the ordering between both processes is preserved at time , i.e. , to prove , as indicated in the last column , that a transition 2b in the first process indeed excludes the transitions 3a and 4a in the second process , we observe that which gives the implication and proves the exclusion of type 3a and 4a transitions .similarly , showing that the transitions 3b and 4b in the first process exclude transition 5a in the second process .as previously mentioned , the lemma follows from the fact that all possible simultaneous updates of both processes given in the last column preserve the desired ordering .this section is devoted to the proofs of theorem [ th : finite - graphs ] and theorem [ th : complete - graphs ] about the naming game on finite connected graphs .the key to proving the first theorem is to show that a certain process that depends on the difference between the number of individuals using word and the number of individuals using word is a supermartingale with respect to the natural filtration of the naming game , which allows to directly deduce the theorem from the optimal stopping theorem . to prove the second theorem which specializes in the process on complete graphs ,the idea is to observe that , as long as bilingual individuals do not interact with each other , there is no type individual in the population and the number of bilingual individuals evolves like a subcritical birth and death process that goes extinct quickly . throughout this section , and respectively the number of individuals of type and type at time , and we let for all . 
to motivate our proof of the first theorem and explain the assumption, we observe that the transitions labeled 2a5a in table [ tab : coupling ] , which are the transitions that increase the number of individuals using or decrease the number of individuals using , all occur with probability at least one half if and only if .as shown in the next lemma , this property can be used to construct a certain supermartingale with respect to the natural filtration of the process : the -algebra generated by the realization of the naming game until time . [ lem : supermartingale ] assume that .then , for all , using the transition probabilities in table [ tab : coupling ] , we get re - arranging the terms with respect to the type of edges , this becomes first , we observe that and , for all , similarly , and , for all , we have finally , and , for all , we have from which we also deduce that , for all , plugging into , we conclude that showing that is a supermartingale for .+ + applying the optimal stopping theorem to gives the following result . [ lem : stopping ] for all , we have first , we introduce the stopping time where denotes the number of vertices . since the naming game on any finite graph converges almost surely to the configuration in which all individuals are monolingual of the same type , the stopping time is almost surely finite .using in addition that the process is a supermartingale according to lemma [ lem : supermartingale ] , we deduce from the optimal stopping theorem that for all .in particular , which completes the proof of the lemma .+ + theorem [ th : finite - graphs ] directly follows from lemma [ lem : stopping ] by observing that the probability in the statement of the lemma is precisely the probability in the statement of the theorem .we now focus on the naming game on the complete graph .note that in this case the number of edges of each type can be expressed as a function of the number of individuals of each type , therefore is now a continuous - time markov chain .as previously mentioned , to prove that tends to zero as the number of vertices goes to infinity , the idea is to observe that , as long as bilingual individuals do not interact with each other , there is no type individual in the population and the number of bilingual individuals evolves like a subcritical birth and death process . to make the argument precise , we introduce the birth and death process starting with a single individual and with birth rate and death rate , i.e. , we start with the following preliminary result about the number of jumps before extinction of subcritical birth and death processes . [ lem : birth - death ] fix and , and let . then , , we note that , since , from which it follows that moreover , using that the number of paths of length not crossing 0 is bounded by the total number of paths of length together with stirling s formula , we get for all large . in particular , for all large since .+ + the reason for introducing the birth and death process above is that the number of bilingual individuals evolves precisely according to this process until two bilingual individuals interact with each other , an event that we call a collision .in particular , it can be deduced from the previous lemma that the probability that a collision ever happens is small when is large , which is also a bound for the probability that word outcompetes word . to prove this result , we let be the time of the first collision . 
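the birth and death comparison process introduced above is easy to simulate , which gives a quick sanity check on the behaviour used in the proof : started from a single individual with death rate exceeding the birth rate , it dies out quickly and long excursions are rare . since the actual rates are tied to transition probabilities omitted above , the numerical rates in this sketch are placeholders only .

```python
import random

def jumps_before_extinction(birth, death, rng=random, max_jumps=10**6):
    """Number of jumps of a birth-and-death process started from one individual.

    Each individual independently gives birth at rate `birth` and dies at rate
    `death`, so the embedded jump chain steps +1 with probability
    birth / (birth + death) and -1 otherwise, whatever the current size.
    """
    size, jumps = 1, 0
    p_up = birth / (birth + death)
    while size > 0 and jumps < max_jumps:
        size += 1 if rng.random() < p_up else -1
        jumps += 1
    return jumps if size == 0 else None        # None only if the cap was hit

# with death > birth the process is subcritical: extinction is certain and long
# excursions (many jumps before dying out) are exponentially unlikely.
samples = [j for j in (jumps_before_extinction(0.4, 1.0) for _ in range(10_000)) if j is not None]
print(max(samples), sum(s >= 50 for s in samples) / len(samples))
```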
[ lem : collision ] fix and .then , to begin with , we observe that , before the time of the first collision , there is no monolingual individual of type in the population . in particular , using the expression of the transition probabilities in the second column of table [ tab : coupling ] , and introducing we obtain that , before the first collision , whereas for all other values of and .this implies that , before the first collision , the number of bilingual individuals has evolved according to the birth and death process in which individuals independently give birth at rate and die at rate .in particular , the naming game can be coupled with the birth and death process in such a way that where denotes the number of bilingual individuals at time .the rest of the proof relies on the fact that the probability that the number of jumps in the birth and death process is large and the probability that there is a collision in the naming game coupled with the birth and death process when the number of jumps is small are both small when the graph is large .indeed , lemma [ lem : birth - death ] gives the existence of fixed from now on such that moreover , when , the maximum number of individuals can not exceed in the birth and death process therefore , thinking again of the number of bilingual individuals as being coupled with the birth and death process before the first collision , at each jump , the probability of a collision is bounded by .the integer being fixed , this implies that for all sufficiently large .the lemma simply follows by observing that the probability to be estimated is bounded by the sum of the probabilities in and .+ + theorem [ th : complete - graphs ] directly follows from the next lemma . [ lem : extinction ] fix and .then , since there is no type individual before the first collision , for all sufficiently large according to lemma [ lem : collision ] .this section is devoted to the proof of theorem [ th:1d ] .the first and main step of the proof is to show almost sure invasion of word for the naming game with the main difficulty to prove this result is that , even in the presence of nearest neighbor interactions , the evolution rules in can create infinitely many interfaces , i.e. , the state space of the process seen from the rightmost type individual that has only type to her left is infinite .motivated by numerical simulations of the process that suggest that the size of the interface is somewhat small most of the time , we prove the result for the process that has where but otherwise evolves according to the evolution rules .that is , the process starts from the configuration described in and evolves according to the evolution rules of the naming game except that , each time the configuration violates condition , the state at vertex instantaneously flips to a type . 
in view of this new rule , lemma [ lem : attractive ] implies that for all and , from which it follows that whenever moreover , one easily checks that the modified process only has three possible interfaces corresponding to the following three types of configurations : indeed , only the transitions and and for the configuration types are allowed starting from a type 0 or a type 1 configuration .moreover , from a type 2 configuration , either a monolingual and a bilingual individuals interact , which results in a type 1 configuration or a configuration with three bilingual individuals which instantaneously flips to a type 2 configuration , or both bilingual individuals interact , which results in a type 0 configuration .the main reason for introducing this modified process is its mathematical tractability due to the small size of the state space of the process seen from the interface .as previously mentioned , this is further motivated by the fact that numerical simulations suggest that the naming game itself , when starting from configuration , is most of the time in type 0 , 1 or 2 configurations , so the analysis of the modified process allows to obtain a bound somewhat close to one . to establish theorem [ th:1d ], we now prove that , under the conditions of the theorem , holds .this is done by first computing the occupation time of the modified process in each configuration type and then computing the value of the drift for the process in each configuration type . to shorten the notations as in the proof of lemma [ lem : supermartingale ] , we will again use the probabilities and defined in . [ lem : config ] the limits exist and satisfy let if the configuration at time is of type .looking at all the possible updates of the modified naming game and the corresponding transition rates in figure [ fig : drift ] , one easily checks that the configuration type evolves according to the markov chain with transitions note that the rates on the two dashed arrows of figure [ fig : drift ] are irrelevant .these transition rates imply that is irreducible therefore the limits exist and satisfy the following two equations : using also that and according to gives which is precisely .+ + to prove , the next step is to compute the value of the conditional drift of the process given the configuration type , i.e. , looking again at all the possible updates , one easily finds the last step is to combine and to prove that from which the almost sure convergence of the interface to infinity follows .the most natural approach is to express and for as a function of , from which it can be deduced that the first inequality in holds for larger than the largest real root of a certain polynomial with degree six .this root is not obvious to compute .instead , we observe that , when both fitnesses are close to each other , is close to one and the rate close to .the next two lemmas show that the left - hand side of is larger than its counterpart obtained by computing under the assumption , which allows to express more simply as the largest root of a polynomial with degree two .interestingly , a series of evaluations of the polynomial with degree six around indicates that the largest real root of this polynomial only differ from by less than , which shows _ a posteriori _ the advantage of our approach . [ lem : rate ] for all , we have . recalling and using , we get noticing that and differentiating with respect to , we deduce that which completes the proof . 
[ lem : sign - drift ] for all , we have using the relationship among and given in , we obtain to find a lower bound for the sign above , we introduce the function and observe that , for all , using in addition that according to lemma [ lem : rate ] gives this completes the proof . [ lem : positive - drift ] the right - hand side of is positive whenever first of all , note that using in addition gives since , we deduce that which is positive whenever as defined in .+ + from lemma [ lem : positive - drift ] , it directly follows that the process converges almost surely to infinity , which also implies convergence of the naming game starting from configuration to the configuration in which all individuals are type monolingual .moreover , we have to deduce that word can invade word , we let and be two independent copies of the process and use a standard coupling argument to conclude that , under the assumptions of the theorem , the probability that the naming game starting with the origin in state and all the other vertices in state converges to the `` all '' configuration is given by since there is a positive probability for the process starting with a single bilingual individual at the origin that the origin is of type at time one , this completes the proof of theorem [ th:1d ] .this section is devoted to proving theorem [ th : lattice ] , which relies on a block construction . to spare the reader complicated notations , we only prove the result in but our approach easily extends to higher dimensionsthe idea of the block construction is to couple a certain collection of good events related to the process properly rescaled in space and time with the set of open sites of oriented site percolation on the oriented graph with vertex set and in which there is an oriented edge see the left - hand side of figure [ fig : graphs ] for a picture in . to rescale the process and define the collection of good eventslater in the proof of lemma [ lem : block ] , we let and introduce the collection of space - time blocks in words , space is partitioned into squares and time into intervals of length , while the collection of space - time blocks in defines a partition of the space - time universe .the key to proving invasion of word is to show that the set of sites that we call -sites for short , dominates stochastically the set of wet sites in an oriented site percolation process whose parameter can be made arbitrarily close to one by choosing the parameter sufficiently large .more precisely , we have the following lemma . 1 . between time and time , there are at least two good interactions along each of the eight edges labeled 1 on the left - hand side , 2 . between time and time , there is no bad interaction along any of the sixteen edges labeled 2 on the left - hand side , 3 . between time and time , there is at least one good and no bad interaction along each of the eight edges labeled 3 on the right - hand side , 4 . between time and time , there is no bad interaction along any of the sixteen edges labeled 4 on the right - hand side .from and the probabilities in table [ tab : coupling ] , it follows that an interaction involving at least one individual using word can only result in one of the transitions 1a5a in the table .in particular , whenever site is an -site and our good event 14 occurs , the following holds : * at time , all twelve vertices marked with a black dot on the right - hand side of the figure are of type and * between and , all sixteen vertices in the figure are of type . 
in particular , letting be the event that is an -site , we deduce that now , let and be the number of good and bad interactions that occur along one given edge in a given time interval of length . since interactions occur along each edge of the lattice at rate one and are independently good with probability in particular , for all , the probability of the good event 14 is for all large enough .finally , we observe that the good event is measurable with respect to the graphical representation in the space - time region \times [ 0 , 4 t ) \ } \ \subset \ { \mathbb{z}}^2 \times [ 0 , \infty).\ ] ] this , together with the inclusion and the lower bound are exactly the comparison assumptions of theorem 4.3 in , from which the lemma directly follows .+ + it is known from standard results based on the so - called contour argument that , for small enough , there exists with positive probability an infinite cluster of wet sites in the two dependent oriented site percolation process on starting with one open site at level 0 and in which sites at the other levels are open with probability .this , together with lemma [ lem : block ] , implies that , for the naming game starting with a single bilingual individual , this proves survival of word but not extinction of word with positive probability .in fact , a weaker form of survival can be proved in the more general case when by simply using techniques similar to the ones in the proof of lemma [ lem : supermartingale ] to show that the number of individuals using word is a submartingale .however , extinction of word with positive probability can not be deduced from this approach .in contrast , our coupling with oriented site percolation combined with an idea of the author that extends a result of durrett can be used to complete the proof of the theorem .this is done in the next lemma . throughout the proof ,we think of the naming game as being coupled with oriented site percolation as in the statement of lemma [ lem : block ] . to begin with ,we follow by introducing the new oriented graph with the same vertex set as but in which there is an oriented edge see the right - hand side of figure [ fig : graphs ] for a picture in .we say that a site in the percolation process is dry if it is not wet .also , for , we write and say that there is a dry path connecting both sites if there is a sequence such that the following two conditions hold : note that a dry path in is also a dry path in but the reciprocal is false . now, the proofs of lemmas 411 in durrett imply the following : there exists small such that , for the percolation process on with parameter starting with open and all the other sites closed at level zero , conditioned on the event that percolation occurs , we have for some . in words , if the density of open sites is close enough to one , there is a linearly expanding region in which ( even closed ) sites can not be reached from a path of dry sites starting at level zero .this applies to dry paths in the graph but as pointed out in , the proofs of lemmas 411 in durrett easily extend to give for dry paths in . to conclude the proof , the last step is to show the connection between dry paths and -sites .assume that since word can not appear spontaneously , this implies the existence of which in turn implies that note however that this does not imply the existence of a dry path in which is the reason why we introduced a new graph with additional edges . 
taking the probability of the event in and the probability of the sub - event in directly gives where is the unique site such that .since can be made arbitrarily small by choosing large , the analog of for oriented dry paths in the graph together with the inequality implies that , conditioned on the event that percolation occurs , for the naming game conditioned on the event that is an -site . since the probability that percolation occurs is positive for small and since there is a positive probability for the process starting with a single bilingual individual at the origin that all sites in the spatial box are of type at time one , the lemma and theorem [ th : lattice ] follow .
|
this article studies a biased version of the naming game in which players located on a connected graph interact through successive conversations to bootstrap a common name for a given object . initially , all the players use the same word except for one bilingual individual who also uses word . both words are attributed a fitness , which measures how often players speak depending on the words they use and how often each word is pronounced by bilingual individuals . the limiting behavior depends on a single parameter : = the ratio of the fitness of word to the fitness of word . the main objective is to determine whether word can invade the system and become the new linguistic convention . in the mean - field approximation , invasion of word is successful if and only if , a result that we also prove for the process on complete graphs relying on the optimal stopping theorem for supermartingales and random walk estimates . in contrast , for the process on the one - dimensional lattice , word can invade the system whenever indicating that the probability of invasion and the critical value for strongly depend on the degree of the graph . the system on regular lattices in higher dimensions is also studied by comparing the process with percolation models .
|
although the euclidean distance is a simple and convenient metric , it is often not an accurate representation of the underlying shape of the data .such a representation is crucial in many real - world applications , such as object classification , text document retrieval and face verification , and methods that learn a distance metric from training data have hence been widely studied in recent years .we present a new angle on the metric learning problem based on random forests as the underlying distance representation .the emphasis of our work is the capability to incorporate the absolute position of point pairs in the input space without requiring a separate metric per instance or exemplar . in doing so , our method , called random forest distance ( rfd ) ,is able to adapt to the underlying shape of the data by varying the metric based on the _ position _ of sample pairs in the feature space while maintaining the efficiency of a single metric . in some sense, our method achieves a middle - ground between the two main classes of existing methods single , global distance functions and multi - metric sets of distance functions overcoming the limitations of both ( see figure [ fig : introcomparison ] for an illustrative example ) .we next elaborate upon these comparisons .[ tb ] the metric learning literature has been dominated by methods that learn a global mahalanobis metric , with representative methods . in brief , given a set of pairwise constraints ( either by sampling from label data , or collecting side information in the semi - supervised case ) , indicating pairs of points that should or should not be grouped ( i.e. , have small or large distance , respectively ) , the goal is to find the appropriate linear transformation of the data to best satisfy these constraints .one such method minimizes the distance between positively - linked points subject to the constraint that negatively - linked points are separated , but requires solving a computationaly expensive semidefinite programming problem .relevant component analysis ( rca ) learns a linear mahalanobis transformation to satisfy a set of positive constraints .discriminant component analysis ( dca ) extends rca by exploring negative constraints .itml minimizes the logdet divergence under positive and negative linear constraints , and lmnn learns a distance metric through the maximum margin framework . formulate metric learning as a quadratic semidefinite programming problem with local neighborhood constraints and linear time complexity in the original feature space .more recently , researchers have begun developing fast algorithms that can work in an online manner , such as pola , mlcl and lego .these global methods learn a single mahalanobis metric using the relative position of point pairs : + .although the resulting single metric is efficient , it is limited in its capacity to capture the shape of complex data .in contrast , a second class , called multi - metric methods , distributes distance metrics throughout the input space ; in the limit , they estimate a distance metric per instance or exemplar , e.g. , for the case of mahalanobis metrics . extend by propagating metrics learned on training exemplars to learn a matrix for each unlabeled point as well .however , these point - based multi - metric methods all suffer from high time and space complexity due to the need to learn and store by metric matrices . 
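for concreteness , the object that all of the global methods above learn is a single positive semidefinite matrix , equivalently a global linear transformation of the feature space ; the short sketch below spells this out . it is generic textbook material rather than the recipe of any particular cited method .

```python
import numpy as np

def mahalanobis_distance(xi, xj, M):
    """Squared Mahalanobis distance (xi - xj)^T M (xi - xj) for a PSD matrix M."""
    d = np.asarray(xi, dtype=float) - np.asarray(xj, dtype=float)
    return float(d @ M @ d)

def as_linear_transform(M):
    """Factor M = L^T L, so the learned metric is the plain Euclidean distance
    after the global linear map x -> L x that these methods search for."""
    eigval, eigvec = np.linalg.eigh(M)
    eigval = np.clip(eigval, 0.0, None)            # guard against round-off negatives
    return (eigvec * np.sqrt(eigval)) @ eigvec.T   # symmetric square root of M

# one global d x d matrix is cheap to store; instance-specific approaches keep
# one such matrix per training point, i.e. n * d * d numbers in total.
```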
a more efficient approach to this second class is to divide the data into subsets and learn a metric for each subset .however , these methods have strong assumptions in generating these subsets ; for example , learns at most one metric per category , forfeiting the possibility that different samples within a category may require different metrics .we propose a metric learning method that is able to achieve both the efficiency of the global methods and specificity of the multi - metric methods .our method , the random forest distance ( rfd ) , transforms the metric learning problem into a binary classification problem and uses random forests as the underlying representation . in this general form, we are able to incorporate the position of samples implicitly into the metric and yet maintain a single and efficient global metric . to that end , we use a novel point - pair mapping function that encodes both the position of the points relative to each other and their absolute position within the feature space .our experimental analyses demonstrate the importance of incorporating position information into the metric ( section [ sec : experiments ] ) .we use the random forest as the underlying representation for several reasons .first , the output of the random forest algorithm is a simple `` yes '' or `` no '' vote from each tree in the forest . in our case , `` no '' votes correspond to positively constrained training data , and `` yes '' votes correspond to negatively constrained training data .the number of yes votes , then , is effectively a distance function , representing the relative resemblance of a point pair to pairs that are known to be dissimilar versus pairs that are known to be similar .second , random forests are efficient and scale well , and have been shown to be one of the most powerful and scalable supervised methods for handling high - dimensional data .in contrast to instance - specific multi - metric methods , the storage requirement of our method is independent of the size of the input data set .our experimental results indicate rfd is at least 16 times faster than the state of the art multi - metric method .third , because random forests are non - parametric , they make minimal assumptions about the shape and patterning of the data , affording a flexible model that is inherently nonlinear . in the next section ,we describe the new rfd method in more detail , followed by a thorough comparison to the state of the art in section [ sec : experiments ] .our random forest - based approach is inspired by several other recent advances in metric learning that reformulate the metric learning problem into a classification problem . however ,where these approaches restricted the form of the learned distance function to a mahalanobis matrix , thus precluding the use of position information , we adopt a more general formulation of the classification problem that removes this restriction .given the instance set , each is a vector of features .taking a geometric interpretation of each , we consider the _ position _ of sample in the space .the value of this interpretation will become clear throughout the paper as the learned metric will implicitly vary over , which allows it to adapt the learned metric based on local structure in a manner similar to the instance - specific multi - metric methods , e.g. , .denote two pairwise constraint sets : a must - link constraint set and are similar and a do - not - link constraint set and are dissimilar
for any constraint ,denote as the ideal distance between and . if , then the distance , otherwise .therefore , we seek a function from an appropriate function space : where is some loss function that will be specified by the specific classifier chosen . in our random forests case , we minimize expected loss , as in many classification problems .so consider to be a binary classifier for the classes and .for flexibility , we redefine the problem as , where is some classification model , and is a mapping function that maps the pair to a feature vector that will serve as input for the classifier function . to train ,we transform each constraint pair using the mapping function and submit the resulting set of vectors and labels as training data .we next describe the feature mapping function .in actuality , all metric learning methods implicitly employ a mapping function .however , mahalanobis based methods are restricted in terms of what features their metric solution can encode .these methods all learn a ( positive semidefinite ) metric matrix $ {\mathbf{m}} $ , and a distance function of the form $ d({\mathbf{x}}_i,{\mathbf{x}}_j ) = ({\mathbf{x}}_i-{\mathbf{x}}_j)^{\sf t}{\mathbf{m}}({\mathbf{x}}_i-{\mathbf{x}}_j ) $ , which can be reformulated as $ d({\mathbf{x}}_i,{\mathbf{x}}_j ) = \vec({\mathbf{m}})^{\sf t}\vec[({\mathbf{x}}_i-{\mathbf{x}}_j)({\mathbf{x}}_i-{\mathbf{x}}_j)^{\sf t}] $ , where $ \vec[\cdot] $ denotes vectorization or _ flattening _ of a matrix .mahalanobis - based methods can thus be viewed as using the mapping function $ \phi({\mathbf{x}}_i,{\mathbf{x}}_j ) = \vec[({\mathbf{x}}_i-{\mathbf{x}}_j)({\mathbf{x}}_i-{\mathbf{x}}_j)^{\sf t}] $ .this function encodes only relative position information , and the mahalanobis formulation allows the use of no other features .however , our formulation affords a more general mapping function : which considers both the relative location of the samples as well as their absolute position .the output feature vector is the concatenation of these two and in .the relative location represents the same information as the mahalanobis mapping function .note , we take the absolute value in to enforce symmetry in the learned metric . the primary difference between our mapping function and that of previous methods is thus the information contained in the mean of the two point vectors .it localizes each mapped pair to a region of the space , which allows our method to adapt to heterogeneous distributions of data .it is for this reason that we consider our learned metric to be implicitly position - dependent .note the earlier methods that learn position - based metrics , i.e. the methods that learn a metric per instance such as , incorporate absolute position of each instance only , whereas we incorporate the absolute position of each instance pair , which adds additional modeling versatility .we note that alternate encodings of the position information are possible but have shortcomings .for example , we could choose to simply concatenate the position of the two points rather than average them , but this approach raises the issue of ordering the points . using would again yield a nonsymmetric feature , and an arbitrary ordering rule would not guarantee meaningful feature comparisons .the usefulness of position information varies depending on the data set . for data that is largely linear and homogeneous , including will only add noise to the features , and could worsen the accuracy . in our experiments , we found that for many real data sets ( and particularly for more difficult data sets ) the inclusion of significantly improves the performance of the metric ( see section [ sec : experiments ] ) .[ data sets ]
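a minimal sketch of the mapping function and of the construction of the training set from the two constraint sets . the element - wise absolute difference and the pair mid - point follow the description above ; the label convention ( 0 for must - link , 1 for do - not - link ) matches the earlier remark that `` yes '' ( dissimilar ) votes should count towards distance . all function and variable names are ours .

```python
import numpy as np

def pair_features(xi, xj):
    """Map a point pair to the RFD feature vector [ |xi - xj| , (xi + xj) / 2 ].

    The element-wise absolute difference is the relative (symmetric) half;
    the mid-point records where in the feature space the pair lives.
    """
    xi = np.asarray(xi, dtype=float)
    xj = np.asarray(xj, dtype=float)
    return np.concatenate([np.abs(xi - xj), 0.5 * (xi + xj)])

def constraints_to_training_set(X, must_link, do_not_link):
    """Turn index-pair constraints into (features, labels): 0 = similar, 1 = dissimilar."""
    pairs = [(i, j, 0) for i, j in must_link] + [(i, j, 1) for i, j in do_not_link]
    features = np.vstack([pair_features(X[i], X[j]) for i, j, _ in pairs])
    labels = np.array([label for _, _, label in pairs])
    return features, labels
```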
in brief , a random forest is a set of decision trees operating on a common feature space , in our case . to evaluate a point - pair , each tree independently classifies the sample ( based on the leaf node at which the point - pair arrives ) as similar or dissimilar ( 0 or 1 , respectively ) and the forest averages them , essentially regressing a distance measure on the point - pair : where is the classification output of tree .it has been found empirically that random forests scale well with increasing dimensionality , compared with other classification methods , and , as a decision tree - based method , they are inherently nonlinear . hence , our use of them in rfd as a regression algorithm allows for a more scalable and more flexible metric than is possible using mahalanobis methods .moreover , the incorporation of position information into this classification function ( as described in section [ mappingfun ] ) allows the metric to implicitly adapt to different regions over the feature space . in other words , when a decision tree in the random forest selects a node split based on a value of the absolute position sub - vector ( see eq .[ eq : map ] ) , then all evaluation in the sub - tree is _ localized _ to a specific half - space of .subsequent splits on elements of further refine the sub - space of emphasis . indeed , each path through a decision tree in the random forest is localized to a particular ( possibly overlapping ) sub - space .the rfd is not technically a _metric _ but rather a _pseudosemimetric_. although rfd can easily be shown to be non - negative and symmetric , it does not satisfy the triangle inequality ( i.e. , ) or the implication that , sometimes called identity of indiscernibles .it is straightforward to construct examples for both of these cases .although this point may appear problematic , it is not uncommon in the metric learning literature .for example , by necessity , no metric whose distance function varies across the feature space can guarantee the triangle inequality is satisfied . similarly can not satisfy the triangle inequality .our method _ must _ violate the triangle inequality in order to fulfill our original objective of producing a metric that incorporates position data .moreover , our extensive experimental results demonstrate the capability of rfd as a distance ( section [ sec : experiments ] ) .in this section , we present a set of experiments comparing our method to state of the art metric learning techniques on both a range of uci data sets ( table [ data sets ] ) and an image data set taken from the corel database .to substantiate our claim of computational efficiency , we also provide an analysis of running time efficiency relative to an existing position - dependent metric learning method . for the uci data sets , we compare performance at the -nearest neighbor classification task against both standard mahalanobis methods and point - based position - dependent methods . for the former ,we test -nn classification accuracy at a range of k - values ( as in figure [ fig : introcomparison ] ) , while the latter relies on results published by other methods authors , and thus uses a fixed . for the image data set , we measure accuracy at -nn retrieval , rather than -nn classification .we compare our results to several mahalanobis methods .the following is an overview of the primary experimental findings to be covered in the following sections . 
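continuing the sketch , any off - the - shelf random forest over the mapped pairs can serve as the underlying classifier ; scikit - learn's implementation is used here purely as a stand - in for the authors' forest . for fully grown trees , the averaged class-1 probability returned by `predict_proba` is just the fraction of trees voting `` dissimilar '' , i.e. the distance described above .

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def fit_rfd_forest(features, labels, n_trees=400, seed=0):
    # 400 trees mirrors the setting used in the UCI experiments below
    forest = RandomForestClassifier(n_estimators=n_trees, random_state=seed, n_jobs=-1)
    forest.fit(features, labels)
    return forest

def rfd_distance(forest, xi, xj):
    """Fraction of trees voting "dissimilar" for the pair (xi, xj)."""
    f = pair_features(xi, xj).reshape(1, -1)
    return float(forest.predict_proba(f)[0, 1])    # column 1 = class label 1

def rfd_distance_matrix(forest, X_query, X_ref):
    """Pairwise RFD distances, one row per query point."""
    D = np.empty((len(X_query), len(X_ref)))
    for a, xq in enumerate(X_query):
        feats = np.vstack([pair_features(xq, xr) for xr in X_ref])
        D[a] = forest.predict_proba(feats)[:, 1]
    return D
```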
1 .rfd has the best overall performance on ten uci data sets ranging from 4 to 649 dimensions against four state of the art and two baseline global mahalanobis - based methods ( figure [ fig : uci1 ] and table [ tbl : ucirank ] ) .rfd has comparable or superior accuracy to state of the art position - specific methods ( table [ tbl : isd ] ) .rfd is 16 to 85 times faster than the state of the art position - specific method ( table [ tbl : time ] ) .4 . rfd outperforms the state of the art in nine out of ten categories in the benchmark corel image retrieval problem ( figure [ fig : corel ] ) .we first compare our method to a set of state of the art mahalanobis metric learning methods : rca , dca , information - theoretic metric learning ( itml) and distance metric learning for large - margin nearest neighbor classification ( lmnn ) . for our method , we test using the full feature mapping including relative position data , , and absolute pairwise position data , , ( rfd ( ) ) as well as with only relative position data , , ( rfd ( ) ) . to provide a baseline , we also show results using both the euclidean distance and a heuristic mahalanobis metric , where the used is simply the covariance matrix for the data .all algorithm code was obtained from authors websites , for which we are indebted ( _ our code is available on http://www.cse.buffalo.edu/~jcorso_ ) .we test each algorithm on a number of standard small to medium scale uci data sets ( see table [ data sets ] ) .all algorithms are trained using 1000 positive and 1000 negative constraints per class , with the exceptions of rca , which used only the 1000 positive constraints and lmnn , which used the full label set to actively select a ( generally much larger ) set of constraints ; constraints are all selected randomly according to a uniform distribution . in each case , we set the number of trees used by our method to 400 ( _ see section [ forestsize ] for a discussion of the effect of varying forest sizes _ ) .testing is performed using 5-fold cross validation on the nearest - neighbor classification task . rather than selecting a single -value for this task , we test with varying , increasing in increments of 5 up to the maximum possible value for each data set ( i.e. the number of elements in the smallest class ) . by varying in this way , we are able to gain some insight into each method s ability to capture the global variation in a data set . when is small , most of the identified neighbors lie within a small local region surrounding the query point , enabling linear metrics to perform fairly well even on globally nonlinear data by taking advantage of local linearity .however , as increases , local linearity becomes less practical , and the quality of the metric s representation of the global structure of the data is exposed . though the accuracy results at higher values do not have strong implications for each method s efficacy for the specific task of -nn classification ( where an ideal value can just be selected by cross - validation ) , they do indicate overall metric performance , and are highly relevant to other tasks , such as retrieval .figure [ fig : uci1 ] show the accuracy plots for ten uci datasets .rfd is consistently near the top performers on these various data sets . 
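the evaluation protocol described above ( nearest - neighbor classification from the learned distance , with the neighborhood size swept upward in steps of 5 ) reduces to a few lines once pairwise distances are available ; the sketch below assumes `rfd_distance_matrix` from the previous snippet and leaves the 5 - fold splitting to the reader .

```python
import numpy as np

def knn_predict(distance_row, y_ref, k):
    """Majority vote among the k reference points closest to one query point."""
    nearest = np.argsort(distance_row)[:k]
    values, counts = np.unique(y_ref[nearest], return_counts=True)
    return values[np.argmax(counts)]

def knn_accuracy_curve(forest, X_train, y_train, X_test, y_test, ks):
    D = rfd_distance_matrix(forest, X_test, X_train)
    curve = {}
    for k in ks:
        predictions = np.array([knn_predict(row, y_train, k) for row in D])
        curve[k] = float(np.mean(predictions == y_test))
    return curve

# e.g. accuracies at k = 5, 10, 15, ... as in the plots:
# curve = knn_accuracy_curve(forest, X_train, y_train, X_test, y_test, range(5, 55, 5))
```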
in the lower dimension case ( iris ) , most methods perform well , and rfd without position information outperforms rfd with position information ( this is the sole data set in which this occurs ) , which we attribute to the limited data set size ( 150 samples ) and the position information acting as a distractor in this small and highly linear case . in all other cases ,the rfd with absolute position information significantly outperforms rfd without it .in many of the more difficult cases ( diabetes , segmentation , sonar ) , rfd with position information significantly outperforms the field .this result is suggestive that rfd can scale well with increasing dimensionality , which is consistent with the findings from the literature that random forests are one of the most robust classification methods for high - dimensional data .table [ tbl : ucirank ] provides a summary statistic of the methods by computing the mean - rank ( lower better ) over the ten data sets at varying -values . for all but one value of , rfd with absolute position information has the best mean rank of all the methods ( and for the off - case , it is ranked a close second ) .rfd without absolute position information performs comparatively poorer , underscoring the utility of the absolute position information . in summary ,the results in table [ tbl : ucirank ] show that rfd is consistently able to outperform the state of the art in global metric learning methods on various benchmark problems . [ tbl : time ] we compare our method to three multi - metric methods that incorporate absolute position ( via instance - specific metrics ) : fsm , fssm and isd .fsm learns an instance - specific distance for each labeled example .fssm is an extension of fsm that enforces global consistency and comparability among the different instance - specific metrics .isd first learns instance - specific distance metrics for each labeled data point , then uses metric propagation to generate instance - specific metrics for unlabeled points as well .we again use the ten uci data sets , but under the same conditions used by these methods ' authors .accuracy is measured on the -nn task ( =11 ) with three - fold cross validation .the parameters of the compared methods are set as suggested in .our rfd method chooses 1% of the available positive constraints and 1% of the available negative constraints , and constructs a random forest with 1000 trees .we report the average result of ten different runs on each data set , with random partitions of training / testing data generated each time ( see table [ tbl : isd ] ) .these results show that our rfd method yields performance better than or comparable to state of the art explicitly multi - metric learning methods .additionally , because we only learn one distance function and random forests are an inherently efficient technique , our method offers significantly better computational efficiency than these instance - specific approaches ( see table [ tbl : time ] ) , between 16 and 85 times faster than isd .
the comparable level of accuracy is not surprising .while our method is a single metric in form , in practice its implicit position - dependence allows it to act like a multi - metric system .notably , because our method learns using the position of each point - pair rather than each point , it can potentially encode up to implicit position - specific metrics , rather than the learned by existing position - dependent methods , which learn a single metric per instance / position .rfd is a stronger way to learn a position - dependent metric , because even explicit multi - metric methods will fail over global distances in cases where a single ( mahalanobis ) metric can not capture the relationship between its associated point and every other point in the data .we also evaluate our method s performance on the challenging image retrieval task because this task differs from -nn classification by emphasizing the accuracy of individual pairwise distances rather than broad patterns . for this task, we use an image data set taken from the corel image database .we select ten image categories of varying types ( cats , roses , mountains , etc.the classes and images are similar to those used by hoi et al . to validate dca ) , each with a clear semantic meaning .each class contains 100 images , for a total of 1000 images in the data set .for each image , we extract a 36-dimensional low - level feature vector comprising color , shape and texture . for color ,we extract mean , variance and skewness in each hsv color channel , and thus obtain 9 color features .for shape , we employ a canny edge detector and construct an 18-dimensional edge direction histogram for the image . for texture , we apply discrete wavelet transformation ( dwt ) to graylevel versions of original rgb images .a daubechies-4 wavelet filter is applied to perform 3-level decomposition , and mean , variance and mode of each of the 3 levels are extracted as a 9-dimensional texture feature .we compare three state of the art algorithms and a euclidean distance baseline : itml , dca , and our rfd method ( with absolute position information ) . for itml ,we vary the parameter from to and choose the best ( ) . for each method, we generate 1% of the available positive constraints and 1% of the available negative constraints ( as proposed in ) . for rfd , we construct a random forest with 1500 trees . 
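a sketch of the 36 - dimensional descriptor described above , written with opencv , pywavelets and scipy as stand - ins for whatever toolchain the authors used . the canny thresholds , the 18 equal orientation bins and the histogram - peak approximation of the `` mode '' of the wavelet coefficients are our own choices , since the text does not specify them .

```python
import numpy as np
import cv2
import pywt
from scipy.stats import skew

def corel_features(rgb):
    """36-d colour / shape / texture descriptor for an H x W x 3 uint8 RGB image."""
    # colour: mean, variance and skewness of each HSV channel (9 values)
    hsv = cv2.cvtColor(rgb, cv2.COLOR_RGB2HSV).reshape(-1, 3).astype(float)
    colour = np.concatenate([hsv.mean(axis=0), hsv.var(axis=0), skew(hsv, axis=0)])

    # shape: 18-bin edge-direction histogram over Canny edge pixels (18 values)
    gray = cv2.cvtColor(rgb, cv2.COLOR_RGB2GRAY)
    edges = cv2.Canny(gray, 100, 200) > 0
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1)
    directions = np.arctan2(gy, gx)[edges]
    edge_hist, _ = np.histogram(directions, bins=18, range=(-np.pi, np.pi))
    edge_hist = edge_hist / max(edge_hist.sum(), 1)   # guard against edge-free images

    # texture: 3-level db4 wavelet decomposition of the grey image; mean,
    # variance and (approximate) mode of the detail coefficients per level (9 values)
    coeffs = pywt.wavedec2(gray.astype(float), "db4", level=3)
    texture = []
    for detail in coeffs[1:]:                          # one (cH, cV, cD) triple per level
        c = np.concatenate([band.ravel() for band in detail])
        hist, bin_edges = np.histogram(c, bins=64)
        mode = 0.5 * (bin_edges[np.argmax(hist)] + bin_edges[np.argmax(hist) + 1])
        texture.extend([c.mean(), c.var(), mode])

    return np.concatenate([colour, edge_hist, np.array(texture)])
```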
using five - fold cross validation, we retrieve the 20 nearest neighbors of each image under each metric .accuracy is determined by counting the fraction of the retrieved images that are the same class as the image that retrieved them .we repeat this experiment 10 times with differing random folds and report the average results in figure [ fig : corel ] .rfd clearly outperforms the other methods tested , achieving the best accuracy on all but the cougar category .also note that itml performs roughly on par with or worse than the baseline on 7 classes , and dca on 5 , while rfd fails only on 1 , indicating again that rfd provides a better global distance measure than current state of the art approaches , and is less likely to sacrifice performance in one region in order to gain it in another .in this paper , we have proposed a new angle to the metric learning problem .our method , called random forest distance ( rfd ) , incorporates both conventional relative position of point pairs as well as absolute position of point pairs into the learned metric , and hence implicitly adapts the metric through the feature space .our evaluation has demonstrated the capability of rfd , which has best overall performance in terms of accuracy and speed on a variety of benchmarks .there are immediate directions of inquiry that have been paved with this paper .first , rfd further demonstrates the capability of classification methods underpinning metric learning .similar feature mapping functions and other underlying forms for the distance function need to be investigated .second , the utility of absolute pairwise position is clear from our work , which is a good indication of the need for multiple metrics .open questions remain about other representations of the position as well as the use of position in other metric forms , even the classic mahalanobis metric .third , there are connections between random forests and nearest - neighbor methods , which may explain the good performance we have observed .we have not explored them in any detail in this paper and plan to in the future .finally , we are also investigating the use of rfd on larger - scale , more diverse data sets like the new mit sun image classification data set .we are grateful for the support in part provided through the following grants : nsf career iis-0845282 , aro yip w911nf-11 - 1 - 0090 , darpa mind s eye w911nf-10 - 2 - 0062 , darpa cssg d11ap00245 , and nps n00244 - 11 - 1 - 0022 .findings are those of the authors and do not reflect the views of the funding agencies .
|
metric learning makes it plausible to learn distances for complex distributions of data from labeled data . however , to date , most metric learning methods are based on a single mahalanobis metric , which can not handle heterogeneous data well . those that learn multiple metrics throughout the space have demonstrated superior accuracy , but at the cost of computational efficiency . here , we take a new angle to the metric learning problem and learn a single metric that is able to implicitly adapt its distance function throughout the feature space . this metric adaptation is accomplished by using a random forest - based classifier to underpin the distance function and incorporate both absolute pairwise position and standard relative position into the representation . we have implemented and tested our method against state of the art global and multi - metric methods on a variety of data sets . overall , the proposed method outperforms both types of methods in terms of accuracy ( consistently ranked first ) and is an order of magnitude faster than state of the art multi - metric methods ( 16 faster in the worst case ) .
|
uninterrupted series of successes of quantum mechanics support a belief that quantum formalism applies to all of physical reality .thus , in particular , the objective classical world of everyday experience should emerge naturally from the formalism .this has been a long - standing problem , in fact already present from the very dawn of quantum mechanics ( see e.g. the writings of bohr and heisenberg for some of the earlier discussions and e.g. for some of the modern approaches , relevant to the present work ) .perhaps the most promising approach is decoherence theory ( see e.g. ) , based on a system - environment paradigm : a quantum system is considered not in an isolation , but rather interacting with its environment .it recovers , under certain conditions , a classical - like behavior of the system alone in some preferred frame , singled out by the interaction and called a _pointer basis _ , and explains it through information leakage from the system to the environment ( the system is monitored by its environment ). however , as zurek noticed recently , decoherence theory is silent on how comes that in the classical realm information is _redundant_same record can exist in a large number of copies and can be independently accessed by many observers and many times .to overcome the problem , he has introduced a more realistic model of environment , composed of a number of independent fractions , and argued using several models ( see e.g. refs . ) that after the decoherence has taken place , each of these fractions carries a nearly complete classical information about the system .then zurek argues that this huge information redundancy implies objective existence .this model , called quantum darwinism , although very attractive ( see ref . for some experimental evidence ) , has a certain gap which make its foundations not very clear . postponing the details to section [ qdcond ] , the criterion used in quantum darwinism to show the information redundancyis motivated by _ entirely classical _ reasoning and a priori may not work as intended in the quantum world .there is however another basic question : is there a fundamental physical process , consistent with the laws of quantum mechanics , which leads to the appearance in the environment of multiple copies of a state of the system ?in other words , how does nature create a bridge from fragile quantum states , which can not be cloned , to robust classical objectivity ?zurek is aware of the difficulty when he writes : _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ quantum darwinism leads to appearance in the environment of multiple copies of the state of the system .however the no - cloning theorem prohibits copying of unknown quantum states . 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ however , he does not provide a clear answer to the question : _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ quick answer is that cloning refers to ( unknown ) quantum states .so , copying of observables evades the theorem .nevertheless , the tension between the prohibition on cloning and the need for copying is revealing : it leads to breaking of unitary symmetry implied by the superposition principle , [ ... ] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ but the no - cloning theorem prohibits only _ uncorrelated _ copies of the state of the system , whereas it leaves open a possibility of producing _ correlated _ ones .this is the essence of _ state broadcasting_a process aimed at proliferating a given state through correlated copies . in this workwe identify a weaker form of state broadcasting_spectrum broadcasting _ , introduced in ref . , as the fundamental physical process , consistent with quantum mechanical laws , which leads to the perceived objectivity of classical information , and as a result recover quantum darwinism ( as a limiting point ) .we do it first in full generality , using a definition of objective existence due to zurek and bohr s notion of non - disturbance .then , in one of the emblematic examples of decoherence theory and quantum darwinism : a small dielectric sphere illuminated by photons ( see e.g. refs .the recognition of the underlying spectrum broadcasting mechanism has been possible due to a paradigmatic shift in the core object of the analysis . from a partial state of the system ( decoherence theory ) or information - theoretical quantities like mutual information ( quantum darwinism ) to a full quantum state of the system and the observed environment .this also opens a possibility for direct experimental tests using e.g. quantum state tomography .what does it mean that something _objectively exists _? what does it mean for information ?for the purpose of this study we employ the definition from ref . : [ obj ] a state of the system exists objectively if [ ... 
] many observers can find out the state of independently , and without perturbing it . in what follows we will try to make this definition as precise as possible and investigate its consequences .the natural setting for this is quantum darwinism : the quantum system of interest interacts with multiple environments ( denoted collectively as ) , also modeled as quantum systems .the environments ( or their collections ) are monitored by independent observers ( _ environmental observers _ ) and here we do not assume symmetric environments they can be all different .the system - environment interaction is such that it leads to a full decoherence : there exists a time scale , called _ decoherence time _, such that asymptotically for interaction times : i ) there emerges a unique , stable in time preferred basis , so called _ pointer basis _, in the system s hilbert space ; ii ) the reduced state of the system becomes stable and diagonal in the preferred basis : [ decoh ] _ s_e_s : e_i p_i | i i , where s are some probabilities and by we will always denote asymptotic equality in the deep decoherence limit . we emphasize that we assume here the _ full decoherence _, so that the system decoheres in a basis rather than in higher - dimensional pointer superselection sectors ( decoherence - free subspaces ) .coming back to the definition [ obj ] , we first add an important _ stability requirement _ : the observers can find out the state of without perturbing it _ repeatedly and arbitrary many times_. in our view , this captures well the intuitive feeling of objectivity as something stable in time rather than fluctuating .thus , if definition [ obj ] is to be non - empty , it should be understood in the time - asymptotic and hence decoherence regime , which in turn implies that the state of which can possibly exist objectively , is determined by the decohered state ( [ decoh ] ) .we will show it on a concrete example we study later .next , we specify the observers . apart from the environmental ones , we also allow for a , possibly only hypothetical , _ direct observer _ , who can measure directly .we feel such a observer is needed as a reference , to verify that the findings of the environmental observers are the same as if one had a direct access to the system .it is clear that what the observers can determine are the eigenvalues of the decohered state ( [ decoh])they otherwise know the pointer basis , as if not , they would not know what the information they get is about . hence , the state in definition [ obj ] , which gains the objective existence , is the classical part of the decohered state ( [ decoh ] ) , i.e. its spectrum . the word find out we interpret as the observers performing von neumann ( as more informative than generalized ) measurements on their subsystems .by the independence condition , they act independently , i.e. 
there can be no correlations between the measurements and the corresponding projectors must be fully product : [ prod ] ^m_s_i^m_1_j_1^m_n_j_n , where all s are mutually orthogonal hermitian projectors , for .now the crucial word perturbation needs to be made precise .the debate about its meaning has been actually at the very heart of quantum mechanics from its beginnings , starting from the famous work of einstein , podolsky and rosen ( epr ) and the response of bohr .it is quite intriguing that this debate appears in the context of objectivity .the exact definitions of the epr and bohr notions of non - disturbance are still a subject of some debate and we adopt here their formalizations from ref . : the sufficient condition for the epr non - disturbance is the _ no - signaling principle _ , stating that the partial state of one subsystem is insensitive to measurements performed on the other subsystem ( after forgetting the results ) . quantum mechanics obeys the no - signaling principle , but bohr argued that the epr s notion is too permissive , as it only prohibits mechanical disturbance , and proposed a stricter one , which can be formally stated that the whole _state must stay invariant under local measurements on one subsystem ( after forgetting the results ) .for the purpose of this study we adopt bohr s point of view , adapted to our particular situation we assume that _ neither of the observers bohr - disturbs the rest _ ( in the direction it is our formalization of the definition [ obj ] , while in the it follows from the repetitivity requirement ) . together with the product structure ( [ prod ] ) , this implies that on each there exists a _non - disturbing measurement _ , which leaves the whole asymptotic state of the system and the observed environment invariant ( we will specify the size of the observed environment later ) . for the system it is obviously defined by the projectors on the pointer basis , as by assumption this is the only basis preserved by the dynamics . for the environments we allow for a general higher - rank projectors , , and not necessarily spanning the whole space , as the environments can : i ) have inner degrees of freedom not correlating to and ii ) correlate to only through some subspaces of their hilbert spaces ( we will later encounter such a situation in the concrete example ) . when more than one observer preform the non - disturbing measurements , a further specification of bohr - nondisturbance is needed .allowing for general correlations ] is the entropy of the decohered state ( [ decoh ] ) .condition ( [ zurek ] ) has been shown to hold in several models , including environments comprised of photons and spins ( see e.g. ref . ) . for finite times , the equality ( [ zurek ] ) is not strict and holds within some error , which defines the _ redundancy _ as the inverse of the smallest fraction of the environment , for which =[1-\delta(t)]h_s ] is irrelevant for our analysis , it is enough that it scales with the total photon number .[ division ] ] after all the photons have scattered , the asymptotic ( in the sense of the scattering theory ) out-state , is given from eqs .( [ u],[init],[init_mac ] ) by & & _ s : e(t)= + & & _i=1,2x_i |^s_0 x_i| x_i x_i |_m[se1 ] + & & + _ ijx_i |^s_0 x_j| x_i x_j |_m[se2 ] where [ rho_i ] _ i^mac(t)(*s*_i^ph_0 * s*_i^)^mn_t , i=1,2 . by the argument of section [ tw ] ,in order to have a chance to observe the broadcasting state ( [ br2 ] ) , we trace out some of the environment . 
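since the displayed equations in this part of the text are heavily garbled , it may help to restate the target object in clean notation . the following is a reconstruction of the spectrum broadcast form referred to as ( [ br2 ] ) , pieced together from the surrounding prose and from the later display ( [ br3 ] ) , and should be read as a best guess rather than a verbatim copy :

\rho_{S:fE} \;\approx\; \sum_i p_i\, |i\rangle\langle i| \otimes \rho_i^{E_1}\otimes\cdots\otimes\rho_i^{E_{fN}} , \qquad \rho_i^{E_k}\,\rho_{i'}^{E_k}=0 \quad\text{for } i\neq i' ,

where |i\rangle is the pointer basis ( the localised states |x_1\rangle , |x_2\rangle in the sphere model ) and the \rho_i^{E_k} are states of the observed environment fractions with mutually orthogonal supports . in such a state every observer can read off the spectrum \{p_i\} by a non - disturbing measurement on his own fraction , no entanglement with the observed fractions survives , and the mutual information equals h_s , so the quantum darwinism condition ( [ zurek ] ) is recovered as a consequence .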
in the current modelit is important that the forgotten fraction must be _ macroscopic _ : we assume that , out of all macro - fractions of eq .( [ init_mac ] ) are observed , while the rest , , is traced out .the resulting partial state reads ( cf .( [ se1],[se2 ] ) ) : [ sfe ] & & _ s : fe(t)=_i=1,2x_i |^s_0 x_i| x_i x_i & & + _ ijx_i |^s_0 x_j(_i^ph_0 * s*_j^)^(1-f)n_t| x_i x_j | + & & ( * s*_i^ph_0 * s*_j^)^fn_t .[ i ne j ] we finally demonstrate that in the soft scattering sector ( [ soft ] ) , the above state is asymptotically of the broadcast form ( [ br2 ] ) by showing that in the deep decoherence regime two effects take place : 1 .the coherent part given by eq .( [ i ne j ] ) vanishes in the trace norm : [ znikaogon ] 2 . the post - scattering macroscopic states ( cf .( [ rho_i ] ) ) become perfectly distinguishable : [ nonoverlap ] _1^mac(t)_2^mac(t)0 , or equivalently using the generalized overlap : & & b + & & 0,[nonoverlap_norm ] despite of the individual ( microsopic ) states becoming equal in the thermodynamic limit .the first mechanism above is the usual decoherence of by suppression of coherences in the preferred basis .some form of quantum correlations may still survive it , since the resulting state ( [ i = j ] ) is generally of a classical - quantum ( cq ) form .those relict forms of quantum correlations are damped by the second mechanism : the asymptotic perfect distinguishability ( [ nonoverlap ] ) of the post - scattering macro - states .thus , the state becomes of the spectrum broadcast form ( [ br2 ] ) for the distribution : [ pi ] p_i = x_i |^s_0 x_i , which by implications ( [ impl ] ) gains objective existence in the sense of definition [ obj ] .for greater transparency , we first demonstrate the mechanisms ( [ znikaogon],[nonoverlap ] ) , and hence a formation of the broadcast state ( [ br2 ] ) , in a case of pure initial environments : [ pure ] _ ph^0| k_0 k_0 same momenta , , satisfying ( [ soft ] ) . to show ( [ znikaogon ] ) , observe that , defined by eq .( [ i ne j ] ) , is of a simple form in the basis : [ matrix ] _ s : fe^ij(t)= , where and .since s are unitary and , , we obtain : & & ||_s : fe^ij(t)||_= + & & ||(*s*_1^ph_0 * s*_1^)^fn_t+||(*s*_2^ph_0 * s*_2^)^fn_t + & & = 2|x_1 |^s_0 x_2||_1^ph_0 * s*_2^|^(1-f)n_t[|i ne j| ]the decoherence factor for the pure case ( [ pure ] ) has been extensively studied before ( see . e.g. refs .let us briefly recall the main results . under the condition ( [ soft ] ) and using the classical cross section of a dielectric sphere in the dipole approximation , one obtains in the box normalization : [ psipsi ] & & k_0|*s*_2^_1k_0=1+i + & & - ( 3 + 11 ^ 2)+o , where is the angle between the incoming direction and the displacement vector and ^{1/3} ] .thus , if the observed portion of the environment is _ microscopic _ , the asymptotic post - scattering state is in fact a product one : & & _ s : e(0)=^s_0(_0^mac)^ _ s : e()= + & & _ i=1,2p_i| x_i x_i | ( * s*_i| k_0 k_0 |*s*_i^)^= + & & ( _ i=1,2p_i| x_i x_i | ) |^mic ^mic |^,[product ] where because of eq .( [ micro ] ) ( and denotes equality in the thermodynamic limit ( [ thermod ] ) ) .we call it a product phase ,in which =0 ] .the plot shows two phase transitions : the first one occurs at from the product phase of eq .( [ product ] ) to the broadcasting phase of eq .( [ b - state ] ) .the second one is from the broadcasting phase to the full information phase at , when the observed environment is quantumly correlated with the system . 
due to the thermodynamic limit each value of the fraction be understood modulo a microscopic fraction , i.e. a fraction not scaling with the total photon number ( cf .( [ nt ] ) ) . [ phase ] ]the quantity experiencing discontinuous jumps is the mutual information between the system and the observed environment , and the parameter which drives the phase transitions is the fraction size .as discussed above , each value of has to be understood modulo a micro - fraction .the appearance of the phase diagram is a reflection of both the thermodynamic and the deep decoherence limits and its form is in agreement with the previously obtained results ( see e.g. refs . ) .we now move to a more general case when the environmental photons are initially in a mixed state . unlike in the previous studies( see e.g. refs . ) , we will not assume the thermal blackbody distribution of the photon energies , but consider a general state , diagonal in the momentum basis and concentrated around the energy sector ( [ soft ] ) : [ mu ] ^ph_0=_k p(k ) | k k eigenstates are discrete box states and the summation is over the box modes .the partial post - scattering state is given by the same eqs .( [ rho_i]-[i ne j ] ) with the above .the first step ( [ znikaogon ] ) , i.e. the decay of the coherent part , is the same as before , as nowhere in eqs .( [ matrix]-[|i ne j| ] ) the purity was used , but the decoherence factor is now modified . in the leading order in reads : & & |_1^ph_0 * s*_2^|^(1-f)n_t + & & ^(1-f)n_t[bareta ] + & & , [ decay_ogon_mix ] where the modified decoherence time is given by : [ tau_d_mix ] ^-1x^2 c a^6 k^6 ( 3 + 11 ^ 2_k ) , and denotes the averaging with respect to . completing the second step ( [ nonoverlap_norm ] ) is more involved ( our calculation is partially similar to that of ref .we first calculate the bhattacharyya coefficient for the individual states .let : ^mic_2_1(_k , k |)*s*_1^,[m0 ] where : m_k k _kp(k ) k| * s*_1^_2 kk| * s*_2^_1 k . +[ m ] by eq .( [ mu ] ) it is supported in the sector ( [ soft ] ) , and we diagonalize it in the leading order in . for that , we first decompose matrix elements in and keep the leading terms only .let us write : _ 1^_2=*1 * -(*1 * -*s*_1^_2)-b .matrix elements of between vectors satisfying ( [ soft ] ) are of the order of at most . indeed , by eq .( [ psipsi ] ) the diagonal elements .the off - diagonal elements are , in turn , determined by the unitarity of and the order of the diagonal ones : for any fixed satisfying ( [ soft ] ) ( there is a single sum here ) , where we again used eq .( [ psipsi ] ) . hence : [ offdiag ] _ k _ kk|b_kk|^2=_kk|k|*s*_1^_2k|^2 = o ( ) . as a byproduct , by the above estimates in the energy sector ( [ soft ] ) , in the strong operator topology : for any from the subspace defined by ( [ soft ] ) . coming back to , from eqs .( [ psipsi],[offdiag ] ) in the leading order : m_k k&=&p(k)^2_kk-p(k)^3/2b^*_kk + & -&p(k)^3/2b_kk+ o().[m ] the first term is non - negative and is of the order of unity , while the rest is of the order and forms a hermitian matrix .we can thus calculate the desired eigenvalues of using standard , stationary perturbation theory of quantum mechanics ( see e.g. ref . ) , treating the terms with the matrix as a small perturbation . 
assuming a generic non - degenerate situation ( the measure in eq .( [ mu ] ) is injective ) , we obtain : [ m ] m(k)=p(k)^2(1-b^*_kk - b_kk)+o ( ) , and : & & = m + & & _kp(k)_kp(k)(1-b_kk ) + & & = 1+_k(-p(k))= + _k|k|*s*_1^_2k|^2 + & & + _ k_kk|k|*s*_1^_2k|^21-,[rhorho ] where we have used eqs . ( [ m0],[m],[psipsi],[m ] ) in the respective order , and introduced : & & |(1-_kp(k)|k|*s*_1^_2k|^2 ) ( c)^-1[eta ] + & & |_k_kkp(k)|k|*s*_1^_2k|^2 [ eta ] ( in eq . ( [ eta ] )we have used eqs .( [ bareta],[tau_d_mix ] ) ) .this implies for the micro - states : b(^mic_1,^mic_2 ) = 1 - 1 , [ ort_micmix ] since are of the order of unity in by eqs .( [ offdiag],[eta ] ) .thus , under ( [ soft ] ) , the states become equal .this is the mixed stated analog of eq .( [ micro ] ) , employing the generalized overlap .passing to the macro - states ( cf .( [ rho_i ] ) ) , we in turn obtain : & & b= ( ) ^mn_t + & & ( 1-)^mn_t , [ ort_mix ] where : [ alpha ] and we have used eq .( [ eta ] ) .thus , whenever , the macroscopic states satisfy \approx 0 ] .those different time scales were already discovered and discussed in ref . , where was called the environment receptivity and the redundancy rate . however , the presented physical interpretations of those quantities were rather heuristic , based loosely on the quantum darwinism condition ( [ zurek ] ) and not grounded in the full state analysis , as we have presented above . moreover , the measure studied in ref . was of a special , product form : , where is the thermal distribution of the energies and the photons were assumed to come from a portion of the celestial sphere of an angular measure . above, we have shown the effect for a general , diagonal in the momentum eigenbasis state ( [ mu ] ) .let us recall after refs . that for an isotropic illumination when ( all the directions are equally probable ) , and there is no broadcasting of the classical information : perfectly mixed directional states of the photons can not store any localization information of the sphere , neither on the micro- nor at the macro - level ( cf .( [ ort_micmix],[ort_mix ] ) ) . by eqs .( [ decay_ogon],[ortogonalizacja ] ) and eqs .( [ decay_ogon_mix],[ort_mix ] ) , the asymptotic formation of the spectrum broadcast states relies , among the other things , on the full product form of the initial state ( [ init ] ) and the interaction ( [ u ] ) in each block .however , from the same equations it is clear that one can allow for correlated / entangled fractions of photons , as long as they stay microscopic , i.e. do not scale with .the corresponding terms then factor out in front of the exponentials in eqs .( [ decay_ogon],[ortogonalizacja],[decay_ogon_mix],[ort_mix ] ) and the formation of the spectrum broadcast states is not affected .we finish with a surprising application of the classical perron - frobenius theorem , leading to `` singular points '' of decoherence .let the initial state of the sphere be .then , in the spectrum broadcast states ( [ b - state],[b - state_mix ] ) there appears a ( unitary-)stochastic matrix ( cf .( [ pi ] ) ) . 
by the perron - frobenius theorem it possesses at least one stable probability distribution : and such a distribution exists for _ any _ initial eigenbasis of .let us now choose it as the spectrum of the initial state : .then , the scattering process ( [ u ] ) not only leaves this distribution unchanged , but broadcasts it into the environment : & & ( _ 0^mac)^fm _ s : fe()= + & & = _i(_jp_ij()_*j())| x_i x_i | ( _ i^mac)^fm + & & = _i_*i()| x_i x_i | ( _ i^mac)^fm .the initial spectrum does not decoherethat is why we have called it a singular point of decoherence .this perron - frobenius broadcasting process , first introduced in ref . , can thus be used to faithfully ( in the asymptotic limit above ) broadcast the classical message through the environment macro - fractions .in this work we have identified spectrum broadcasting of ref . , a significantly weaker form of quantum state broadcasting , as the fundamental quantum process , which leads to objectively existing classical information . more specifically , adopting the multiple environments paradigm , the suitable definition of objectivity ( definition [ obj ] ) , and bohr s notion of non - disturbance ,we have proven that the only possible process which makes transition from quantum state information to the classical objectivity is spectrum broadcasting .this process constitutes a formal framework and a physical foundation for the quantum darwinism model , which , as we have pointed out , in its information - theoretical form does not produce a sufficient condition for objectivity , since it allows for entanglement .we have shown that in the presence of decoherence , spectrum broadcasting is a necessary and sufficient condition for the objective existence of a classical state of the system .it filters a quantum state and then broadcasts its spectrum i.e. a classical probability distribution , in multiple copies into the environment , making it accessible to the observers . in the picture of quantum channels ,this redundant classical information transfer from the system to the environments is described by a cc - type channel .we have illustrated spectrum broadcasting process on the emblematic example for decoherence theory : a small dielectric sphere embedded in a photonic environment .in particular , we have explicitly shown the asymptotic formation of a spectrum broadcasting state for both pure and general ( not necessarily thermal ) mixed photon environments .then , we have derived in the asymptotic limit of deep decoherence the information - theoretical phase diagram of the model .depending on the observed macroscopic fraction of the environment , it shows three phases : the product , broadcasting and full information phase , and is a complete agreement ( up to some error for finite times ) with the classical plateau of the original quantum darwinism studies .there are two phase transitions taking place : i ) from the product phase to the broadcasting phase ( at ; ii ) from the broadcasting phase ( ) to the full information phase ( at ) , when the observed environment becomes quantumly correlated with the system .in addition , we have pointed out that a special form spectrum broadcasting the perron - frobenius broadcasting , can be used to faithfully ( in the asymptotic limit ) broadcast certain classical message through the noisy environment fractions . 
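the perron - frobenius broadcasting just described needs only one computational ingredient : a stationary distribution of the ( unitary- ) stochastic matrix p_{ij} . a minimal numerical sketch of finding such a fixed point is given below ; the 3 x 3 matrix is an arbitrary stand - in for the matrix arising from the scattering model , and for a genuinely unitary - stochastic ( hence doubly stochastic ) matrix the uniform distribution is always one such fixed point .

```python
import numpy as np

def stationary_distribution(P, tol=1e-12, max_iter=100000):
    # fixed point pi = pi P of a row-stochastic matrix P, by power iteration;
    # existence is guaranteed by the perron-frobenius theorem
    pi = np.full(P.shape[0], 1.0 / P.shape[0])
    for _ in range(max_iter):
        new = pi @ P
        if np.abs(new - pi).sum() < tol:
            return new
        pi = new
    return pi

P = np.array([[0.8, 0.1, 0.1],
              [0.2, 0.7, 0.1],
              [0.3, 0.3, 0.4]])   # placeholder row-stochastic matrix standing in for p_ij
pi = stationary_distribution(P)
print(pi, pi @ P)                 # pi and pi P coincide: this spectrum does not decohere
```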
from an experimental point of view, our work opens a possibility to develop an experimentally friendly framework for testing quantum darwinism .our central object , the broadcast state ( [ br2 ] ) , is in principle directly observable through e.g. quantum state tomography a well developed , successful , and widely used technique .in contrast , the original quantum darwinism condition ( [ zurek ] ) relies on the quantum mutual information and it is not clear how to measure it .we finish with a series of general remarks and questions .first , there is a straightforward generalization of the illuminated sphere model to a situation where classical correlations are spectrum broadcasted .consider several spheres , each with its own photonic environment , and separated by distances much larger than the photon wavelengths , ( cf .( [ soft ] ) ) .the effective interaction is then a product of the unitaries ( [ u ] ) , e.g. : u_s_1s_2:e_1e_2(t)_i , j=1,2| x_i x_i || y_j y_j | _ i^n_t_j^n_t , for two spheres , where are the spheres positions and are the corresponding scattering matrices , and the asymptotic spectrum broadcast state carries now the joint probability , e.g. ( cf . eq ( [ pi ] ) ) .it is measurable by observers , who have an access to photon macro - fractions , originating from all the spheres .second , in the example we have studied , and in the majority of decoherence models , the system - environment interaction hamiltonian is of a product form : [ hint ] h_int = g a_s _ k=1^n x_e_k , where is a coupling constant and are some observables on the system and the environments respectively .the pointer basis appears then trivially as the eigenbasis of is arguably put by hand by the choice of .it is then an interesting question if there are more general interaction hamiltonians , without a priori chosen pointer basis , which nevertheless lead to an asymptotic formation of a spectrum broadcast state : [ br3 ] _ s : fe(t)_ii i |_k _ i^e_k , ^e_k_i^e_k_ii=0 .are there _ truly dynamical _ mechanisms leading to stable pointer bases and objective classical states ? viewing eq .( [ br3 ] ) form a different angle , we note that spectrum broadcasting defines a split of information contained in the quantum state into classical and quantum parts .as it is well known , every quantum state can be convexly decomposed in many ways into mixtures of pure states , so a priori such a split does not exist .some additional process is needed .spectrum broadcasting is an example of it : by correlating to the preferred basis , it endows the corresponding probabilities with objective existence , in the sense of definition [ obj ] , and defines them as a `` classical part '' of , leaving the states as a `` quantum part '' ( cf .no - local - broadcasting theorem of ref . ) .third , there appears to be a deep connection between the non - signaling principle and objective existence in the sense of definition [ obj ] : the core fact that it is at all possible for observers to determine _ independently _ the classical state of the system is guaranteed by the non - signaling principle : .there is no contradiction with the bohr - nondisturbance , as the latter is a strictly _ stronger _ condition than the non - signaling (this is the core of bohr s reply to epr ) .in fact , the above connection reaches deeper than quantum mechanics . 
in a general theory , where it is possible to speak of probabilities of obtaining results when performing measurements ( however defined ) , whatever the definition of objective existence may be , the requirement of the _ independent _ ability to locally determine probabilities by each party seem indispensable .this is guaranteed in the non - signaling theories , where all s have well defined marginals . in this sensenon - signaling seems a _ prerequisite of cognition_. this connection will be the subject of a further research .finally , one may speculate on a relevance of our results for life processes . already in 1961 ,wigner tried to argue that the standard quantum formalism does not allow for the self - replication of biological systems .it seemed to be confirmed by the famous no cloning theorem .however , now we see that cloning is not the only possibility .as we have shown , spectrum broadcasting implies a redundant replication of classical information in the environment .this is indispensable for the existence of life : one of the most fundamental processes of life is watson - crick alkali encoding of genetic information into the dna molecule and self - replication of the dna information. it can not be thus a priori excluded that spectrum broadcasting may indeed open a classical window for life processes within quantum mechanics .this research is supported by erc advanced grant qolaps and national science centre project maestro dec-2011/02/a/ st2/00305 .we thank m. piani for discussions on strong independence .p.h . and r.h .acknowledge discussions with k. horodecki , m. horodecki , and k. yczkowski .here we present an independent derivation of the quantum darwinism condition ( [ zurek ] ) for the illuminated sphere model from section [ sphere ] ( cf .( [ qd ] ) ) .although illustrated on a concrete model , our derivation is indeed more general : instead of a direct , asymptotic calculation of the mutual information ] is the fixed time maximal mutual information , extractable through generalized measurements on the ensemble , and the conditional probabilities read : [ pie ] ^e_j|i(t)(here and below labels the states , while the measurement outcomes ) .we now relate to the generalized overlap ] in terms of the speed of i ) decoherence ( [ znikaogon ] ) and ii ) distinguishability ( [ nonoverlap_norm ] ) : & & |h_s - i|h+ 2h+ + & & [ gen1 ] + & & 4_fe(l , t)2 + 2b^fm,[gen2 ] where , , ] to : & & _ l|h_s - i| h(|c_12|e^- ) + & & + 2h(2|c_12|e^-t ) + 8|c_12|e^-t2 + & & + 2e^-t .this finishes the derivation of the quantum darwinism condition ( [ qd ] ) .we note that the result ( [ gen1],[gen2 ] ) is in fact a general statement , valid in any model where : i ) the system is effectively a qubit ; ii ) the system - environment interaction is of a environment - symmetric controlled - unitary type : let a two - dimensional quantum system interact with identical environments , each described by a finite - dimensional hilbert space , through a controlled - unitary interaction : u(t)_i=1,2iiu_i(t)^n .let the initial state be and .then for any and big enough : & & |h(\{p_i})-i|h+ 2h+ + & & [ gen1 ] + & & 4_fe(t)2 + 2b^fn,[gen3 ] where : & & p_ii|_0^s|i,_i(t)u_i(t)_0^eu_i(t)^ , + & &_ e(t)||_s(t)-^i = j_s||_tr , + & & _ fe(t)||_s : fe(t)-^i = j_s : fe(t)||_tr .n. bohr , discussions with einstein on epistemological problems in atomic physics in p. a. schilpp ( ed . ) , _ albert einstein : philosopher - scientist _ , library of living philosophers , evanston , illinois ( 1949 ) ; n. 
bohr , collected works in j. kalckar ( ed . ) , _ foundations of quantum mechanics i ( 1926 - 1932 ) _ vol . 6 , north - holland , amsterdam ( 1985 ) .e. joos , h. d. zeh , c. kiefer , d. giulini , j. kupsch , and i .- o .stamatescu , _ decoherence and the appearancs of a classical world in quantum theory _, springer , berlin ( 2003 ) ; w. h. zurek , rev .phys . * 75 * , 715 ( 2003 ) ; m. schlosshauer , rev .phys . * 76 * , 1267 ( 2004 ) ; m. schlosshauer , _ decoherence and the quantum - to - classical transition _ , springer , berlin ( 2007 ) .h. d. zeh , found .* 1 * , 69 ( 1970 ) ; h. d. zeh , found .phys . * 3 * , 109 ( 1973 ) ; w. h. zurek , phys .d * 24 * , 1516 ( 1981 ) ; zurek , phys .d * 26 * , 1862 ( 1982 ) ; w. h. zurek , phys . today * 44 * , 36 ( 1991 ) ; h. d. zeh , roots and fruits of decoherence , in b. duplantier , j .-raimond , v. rivasseau ( eds . ) , _ quantum decoherence _ ,birkhuser , basel ( 2006 ) .r. brunner , r. akis , d. k. ferry , f. kuchar , and r. meisels , phys .* 101 * , 024102 ( 2008 ) ; a. m. burke , r. akis , t. e. day , g. speyer , d. k. ferry , and b. r. bennett , phys .lett . * 104 * , 176801 ( 2010 ) .let us show eq .( [ agree ] ) more formally , considering for simplicity only two observers .if one of them measures first and gets a result , then the joint conditional state becomes , and the subsequent measurement by the second observer will yield results with conditional probabilities .if for some , for , then comparing their results after a series of measurements at some later moment , the observers will be confused as to what exactly the state the system was : with the probability the second observer will obtain different states , while the first observer measured the same state .one would not the observers findings objective , unless for every there exists only one such that ( actually , which follows from the normalization , so that the distributions are all deterministic ) . reversing the measurement order and applying the same reasoning , we obtain that for every there can exist only one such that , where by the bayes theorem , .these two conditions imply that the joint probability ( after an eventual renumbering ) . applying the above argument to all the pairs of indices ,one obtains eq .( [ agree ] ) .the fact that cq and qc states carry some form of non - classical correlations has been shown e.g. through the no - local - broadcasting theorem in ref . or through entanglement activation in m. piani , s. gharibian , g. adesso , j. calsamiglia , p. horodecki , and a. winter , phys .106 * , 220403 ( 2011 ) . to prove it, we calculate for . or more precisely , since we are working in the box normalization , the measure is , where is the number of the discrete box states with the fixed length . in the continuous limit approaches . as the scattering is by assumption elastic ,matrix elements are non - zero only for the equal lengths and hence : [ sectors ] * s*_1^_2_k u_k , u_k^u_k=*1*_k . decomposing the summations over into the sums over the lengths and the directions and using ( [ sectors ] ) , we obtain : & & _ k , kp(k)|k|*s*_1^_2k|^2= _ k _ n(k),n(k)|k|*s*_1^_2k|^2= + & & _ k ( p_k * s*_1^_2 p_k * s*_2*s*_1^ ) = _ k p(k)=1,[1 ] where is a projector onto the subs - space of a fixed length , and hence . comparing with eq . ( [ rhorho ] ) , eq .( [ 1 ] ) leads to , and hence by definition ( [ alpha ] ) to .e. p. 
wigner , the probability of the existence of a self - reproducing unit , in _ the logic of personal knowledge : essays presented to michael polanyi on his seventieth birthday _ , routledge & kegan paul , london ( 1961 ) .
|
quantum mechanics is one of the most successful theories , correctly predicting a huge class of physical phenomena . ironically , in spite of all its successes , there is a notorious problem : how does nature create a bridge from fragile quanta to the robust , objective world of everyday experience ? it is now commonly accepted that the most promising approach is the decoherence theory , based on the system - environment paradigm . to explain the observed redundancy and objectivity of information in the classical realm , zurek proposed to divide the environment into independent fractions and argued that each of them carries nearly complete classical information about the system . this quantum darwinism model has nevertheless some serious drawbacks : i ) the entropic information redundancy is motivated by a priori purely classical reasoning ; ii ) there is no answer to the basic question : what physical process makes the transition from quantum description to classical objectivity possible ? here we prove that the necessary and sufficient condition for objective existence of a state is the spectrum broadcasting process , which , in particular , implies quantum darwinism . we first show it in general , using the multiple environments paradigm , a suitable definition of objectivity , and bohr s notion of non - disturbance , and then on the emblematic example for decoherence theory : a dielectric sphere illuminated by photons . we also apply the perron - frobenius theorem to show a faithful , decoherence - free form of broadcasting . we suggest that the spectrum broadcasting might be one of the foundational properties of nature , which opens a window for life processes .
|
in this paper we want to understand some aspects of behaviour of additive predictive models for high - dimensional data sets . more specifically , we will deal with the problem of approximating a real - valued predictor function defined on a domain and depending on variables by additive functions in fewer ( say ) variables and of lower order of interaction , say , having the form where typically .our aim will be to obtain easily verifiable upper bounds on the -norm of the remainder , as well as to try and understand their dependence on the dimension of the domain .in doing so , we will move away from the condition of independence of the predictor variables , replacing it with a milder restriction of the probability distribution being equivalent to the product of its marginals .our approximation with additive functions is optimal ( with regard to the mean square error ) for independent variables , but not necessarily in the dependent case , where we obtain an upper error bound . at the same time, we are not yet ready to offer concrete algorithms for real very large datasets . in one of its forms , the phenomenon of concentration of measure says that every lipschitz function on a sufficiently high - dimensional domain is well - approximated by a _constant function _ , that is , an additive function of the lowest possible order of interaction .however , as one would expect , the limitations of this result are such as to render it inapplicable in our situation : a reasonably good approximation requires the intrinsic dimension of a dataset to be prohibitively high .the most natural question is therefore , can one achieve a better approximation in lower ( mid to high ) dimensions by merely allowing additive functions of a higher interaction order ?even here the answer turns out to be negative : there exist functions for which approximation by constants is the best possible among all additive functions in the orders up to .this result makes it clear that the only way to achieve a better approximation by additive functions is to impose further restrictions on the functions .our suggestion is to consider smooth functions and generalise the standard lipschitz condition by requiring the -norm of the vector of all mixed derivatives of order to be bounded above by a constant , independent of the dimension of the domain . under such restrictions and an additional condition of independence of predictor variables , we develop a technique for obtaining approximating additive functions of a prescribed order and derive upper error bounds in the -norm ( section [ best ] . 
)our results are illustrated by a series of examples in the last section [ examples ] , in particular showing that the asymptotic rate of convergence of the theoretically derived error is accurate .section [ quasi ] aims at relaxing the assumption of independence of predictor variables .recall that random variables are independent if the probability distribution , , can be written as the product of the marginal distributions , .we replace this with the assumption which we call _ quasi - independence _ and which calls for the distribution of to be equivalent to the product of its marginals .the radon - nikodym theorem then implies that the distribution is the product of with the radon nikodym derivative , and the additive approximation obtained using the product measure ( distribution ) serves at the same time as an approximation with regard to the ` true ' probability distribution , .the upper error bounds in the -norm includes the derivative .the research here is motivated by the observation that adaptive techniques like mars which estimate models of the form given in equation ( [ eq : anova ] ) will produce models with predominantly lower order interactions . in practice ,interactions with order higher than 5 are not used .models of the type defined in equation ( [ eq : anova ] ) have been called `` anova decomposition '' in as they generalise for real variables the models which are used in the analysis of variance ( anova ) .applications of the anova decomposition for the analysis of techniques for variance estimation can be found in and for the estimation of quadrature errors in and .the work here extends the previous work by providing estimates for the approximation errors of truncated anova decompositions . in earlier work by one of the authors itwas seen how the concentration of measure may be exploited to get highly effective numerical differentiation procedures .computational techniques for the determination of the anova decomposition can for example be found in .more generally , data mining is being developed for the analysis of large data sets which appeared in business and science due to the fact that both data acquisition and data storage have become inexpensive because of the availability of cheap transducers and data storage devices .typically , data mining applications lead to very large data sets of high dimension , and high - dimensional problems are intrinsically difficult as they are affected by the _ curse of dimensionality _ .both queries and the identification of predictive models are very time - consuming .at the same time , it turns out that the effects of high dimensions are not only bad and some may be successfully exploited to lead to highly effective algorithms ( cf .e.g. ) . in the ideal case ,high - dimensional data is just data which contains high amounts of information and these added amounts of information should intuitively lead to better algorithms .the first basic concept is that of an _ object _ .examples include shoppers of a retailer , insurance customers or variable stars .these objects have many properties , some of which are observable .the array of observed properties is the _ feature vector _ .we assume here that but arrays containing other types of components and even of mixed types occur as well . 
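before turning to distances between objects , a small numerical illustration of the truncated decomposition in equation ( [ eq : anova ] ) may be useful . the sketch below is a toy reconstruction rather than code from the paper , and the test function and sample sizes are arbitrary choices : for i.i.d. uniform inputs the best additive ( first order ) approximation in the mean - square sense is f_1 ( x ) = \sum_i \mathbb{E}[ f \mid x_i ] - ( n - 1 )\,\mathbb{E}[ f ] , whose conditional expectations are estimated here by resampling the remaining coordinates ; the printed number is a monte carlo estimate of the expected squared remainder .

```python
import numpy as np

rng = np.random.default_rng(1)

def f(x):
    # arbitrary test function with interactions of all orders (not the one used in the paper)
    return np.prod(1.0 + (x - 0.5) / np.sqrt(x.shape[-1]), axis=-1)

def additive_error(n, n_points=400, n_mc=400):
    # monte carlo estimate of E[(f - f_1)^2] for i.i.d. uniform inputs, where
    # f_1(x) = sum_i E[f | x_i] - (n - 1) E[f] is the best additive approximation
    X = rng.random((n_points, n))
    Ef = f(rng.random((20000, n))).mean()         # overall mean E[f]
    cond_sum = np.zeros(n_points)
    for i in range(n):
        Z = rng.random((n_points, n_mc, n))       # resample all coordinates ...
        Z[:, :, i] = X[:, None, i]                # ... except the i-th, which is clamped
        cond_sum += f(Z).mean(axis=1)             # accumulate E[f | x_i]
    f1 = cond_sum - (n - 1) * Ef
    return np.mean((f(X) - f1) ** 2)

for n in [2, 4, 8, 16]:
    print(n, additive_error(n))
```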
in order to distinguish objects we require some quantitative notion of difference or similarity between objects .it seems reasonable to assume that the euclidean distance between two feature vectors , given by provides information about the difference of the underlying objects .however , this leads straight to the first paradox of _ increasing distances _ : the typical distance between two objects grows as we add new features , that is , _ distance grows with the number of features _in other words , the more one knows about the objects , the more different they seem to appear and ultimately , the difference may become infinite . while the increased difference seems reasonable, the unboundedness of the distance is not , as intuitively two objects are only `` different to a certain point '' .fortunately , this paradoxical growth of distances may be easily cured by either normalising the euclidean distance or else by scaling the variables in such a way that , for example , the average distance between two feature vectors is .as an example , consider the euclidean cube ^n ] ( the _ characteristic size _ of the cube ) is , and thus a natural way to normalise the euclidean distance is while this paradox is seemingly simple , it is necessary to consider , and it is important that the dissimilarity is normalised in the way suggested so as to put things in the proper perspective . the predictor functions we are interested in will estimate the probability with which a certain statement about the object is true , and thus the range of is , typically , the unit interval ] .let and suppose we want to approximate by a constant , , in such a way that holds with probability .even for the value of ( which is nowhere good enough ) the minimal dimension required to achieve our goal is . for ,the minimal dimension is . to obtain the accuracy ,the dimension would suffice , but one can hardly expect a real dataset to have an _ intrinsic _ dimension of this sort , that is , to depend on five thousand independent parameters . finally , to ensure that which already seems to be a reasonably close approximation, one needs the dimension to be on the order of the astronomical ( and unrealistic ) .it is clear from the above that approximation by constants is not good enough in the medium to high dimensions which is the case we are mostly interested in .the next natural question is therefore : will the approximation error bounds based on the concentration phenomenon and given by formula ( [ conce ] ) improve automatically if one allows the approximation by additive functions of _ higher interaction order than zero _ ?it seems quite natural that by significantly relaxing the restrictions on the class of approximating functions one gets better approximation bounds .rather surprisingly , it is not the case , as there exist functions on -dimensional domains for which approximation by constants is the best possible among _ all _ additive functions with interaction orders of up to .( subsection [ zeroex ] . 
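the growth of distances and its cure by normalisation , described above , are easy to check numerically . in the small experiment below ( an illustration , not taken from the paper ) pairs of uniformly distributed points in the unit cube are drawn for increasing dimension : the raw euclidean distance grows like the square root of the dimension ( roughly \sqrt{n/6} for uniform data ) , while the distance divided by the characteristic size \sqrt{n} settles down to a constant near 0.41 .

```python
import numpy as np

rng = np.random.default_rng(2)
for n in [2, 10, 100, 1000, 5000]:
    X = rng.random((1000, n))              # 1000 pairs of uniform points in [0, 1]^n
    Y = rng.random((1000, n))
    d = np.linalg.norm(X - Y, axis=1)      # raw euclidean distance
    print(n, round(d.mean(), 3), round(d.mean() / np.sqrt(n), 3))
```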
) in view of the existence of such examples , it seems in a sense unavoidable that one should impose additional restrictions on the predictor functions to obtain better bounds on higher - order approximations with additive functions .we will now put forward such restrictions as we find most natural , in the hope that the reader is prepared to accept them as such .key to all approximation results are assumptions about the data set and the class of functions to be approximated .as we are interested in the asymptotic behaviour as the dimension becomes medium to large ( which means in practice larger than 10 ) , we actually are interested in a family of spaces , functions etc , parameterised by their dimension .we will assume that the feature vectors are distributed with a density which has a first moment and a variance which does not depend on the dimension .for some of the theorems it will be required that the components of are independent , that is to say the underlying data distribution satisfies however , for practical purposes this assumption is unrealistic , because in the context of data mining where one has many physical variables ( is large ) , one would expect that those variables could be highly correlated . in view of this , we will subsequently replace the condition of independence with a milder restriction for the product distribution to be _ in the same measure class _ as the product distribution , we will refer to such random variables as _ quasi - independent ._ the functions we consider are assumed to be lipschitz - continuous , i.e. , for some constant which is independent of the dimension . in the case of differentiable functions thiscorresponds to the bound this condition is very natural and invoked frequently , it assumes that function values have similar sizes for points which are close .`` smoother '' functions will be defined as functions satisfying the condition for some constant which is independent of the dimension .while such smoothness definitions do always have a certain degree of arbitrariness and are difficult to check in applications , one can see , first , that this condition generalises the lipschitz condition , and , second , that for the case of functions given as products this bound follows from lipschitz continuity for functions bounded away from zero as one can verify that in particular , if then . similar bounds for the components of the decomposition from equation ( [ eq : anova ] )are used in for the analysis of quadrature formulas based on a reproducing kernel hilbert space .we make here the small step to consider families of functions where the bounds on the derivatives are independent of the dimension in order to obtain an estimate of the approximation error behaviour as a function of dimension .so far we have assumed that the expected norm squared of the vectors does not grow with dimension .however , in many practical applications this is not the case .often , the features are normalised , say , so that they are all in ] and to scale the smoothness conditions .while these conditions are strong , they are not unusual , and allow the generalisation of the well - known limit theorems for of the average of i.i.d . random variables . 
a nontrivial example ,for which the conditions hold is given by a far - reaching but so - far implicit assumption is that all the variables contribute in the same way to .this is true for applications where the features correspond to equally important parts or elements and occur when is describing an aggregate of similar elements each characterised by one .examples include employees of a company and stars of a galaxy .an alternative situation is considered in by h. woniakowski and his collaborators . therethe variables are given weights depending on their importance .these weights then enter the smoothness assumptions .one of the consequences of that choice is that in many cases the negative effects of the concentration effect discussed below can be avoided . here , however , we consider a different situation of equally important variables and thus have to deal with the consequences of the concentration .the two examples of functions given above illustrate that functions like the mean which `` change '' with the dimension are very natural .in this section we formulate the main results of this paper . first the notation is established .let be a probability space , denote a family of random variables and be the usual conditional expectations . throughout this subsection, we make a standing assumption that the random variables are independent , i.e. , the density distribution is of the form the operator is defined as using the independence assumption on random variables , one gets a ` telescoping sum ' the terms of the sum can now be expanded in the same way as and repeated application of these expansions provides a theorem which looks very much like taylor s theorem : [ taylor ] let be an integrable function on .then for every natural , : the proof uses induction . first , the case is just equation ( [ eq : expansion1 ] ) .if the equality holds for then the first terms are the same as for and only the last term needs further expansion . in this termeach summand is a function of and from equation ( [ eq : expansion1 ] ) one obtains : replacing the last term in the equation for the case with the right - hand side of this equation leads to the equation for . a similar decomposition for the special case of has been proved in where the theorem is called _ decomposition lemma_.next we introduce the space of functions which are sums of functions only depending on variables each as : ( note that are closed , which follows from theorem [ projection ] below . )now we introduce the operator such that and the remainder operator with from theorem [ taylor ] one then gets .[ projection ] the operator is an orthogonal projection , and the statement that is a projection , i.e. 
, , is equivalent to showing that is zero for .this follows directly from the fact that any function for which the function values do not depend on the variable one has .as any consists of a sum of functions which depend only on variables and all the terms in contain there is at least one for which any particular term in the expansion of does not depend on and thus .if is a product distribution as assumed above , then for every projection measure and for any one has now as for each -tuple and for each -tuple one has at least one which is different from all the then this shows that the error term is orthogonal on which implies that is an orthogonal projection into and from this the minimisation characterisation follows .the orthogonality of all the components of the decomposition is shown in for the case of the uniform distribution .now we observe that and thus the decomposition as in theorem [ taylor ] terminates and so if all the variances of these terms exist , one has from the orthogonality of this decomposition error estimates are obtained for differentiable functions , one may also get bounds based on lipschitz constants . we introduce the ( marginal ) cumulative distribution function and the kernel where is the heaviside function , i.e. , for and for .using integration by parts one gets for differentiable : now let then the expected value squared is now let and then one gets by integration by parts and from the cauchy - schwarz inequality : if is lipschitz continuous with constant . for the case of interactions we first define the seminorm by and then obtain : let be defined as in then one has for the mean squared error bound [ error ] it is shown the same way as in an earlier theorem that all the terms of the sum defining are orthogonal and so for simplicity set and application of equation ( [ eq : repro ] ) and similar reasoning as for the case gives where now one can see that and from this the claimed bound follows .consider the case for a fixed which are independent of .for example , let the be uniformly distributed in the unit hypercube .then the constant is independent of .furthermore , in section [ basic ] it was suggested that the appropriate smoothness restriction on is from the previous theorem one can in this case conclude that for example , if is the uniform distribution on ] onehas . as a function to approximate we choose from this one gets thus one can choose the lipschitz constant to be and consequently the bound from the previous section is for practical error estimates this bound is slightly too pessimistic .the order of convergence in is accurate .this can be seen from the results of a simulation which are in figure [ fig : example1 ] where the average errors squared have been multiplied by in order to confirm the behavior .sometimes , functions are only dependent on a few of the variables . if the data is equally distributed over many variables such that they have a uniformly ( in the dimension ) bounded expected squared norm then the values of any component will concentrate around zero .this is illustrated in this example . herethe function considered is the data points are assumed to be i.i.d . 
normally distributed and in order to obtain the finite expected value of the variances of each component is .the error of the third and higher order interaction approximations is zero , for the lower order approximations see figure [ fig : example2err ] for the expected squared error .the theory again predicts asymptotic behaviour of the error of which is confirmed by the simulation result displayed in figure [ fig : example2 ] .the theory and the examples so far have illustrated the best possible approximation with additive and interaction models .as in the first example the function shall be approximated .we use 1000 data points uniformly distributed on ^n ] .it can be proved that the best additive approximation in the order of interaction is that by zero function .notice that in accordance with our philosophy the function has to be normalised by the factor of , in order to keep its lipschitz constant bounded by .the resulting function on the same cube assumed pretty small values : its maximum is just , and thus one may argue that the approximation by zero function is not bad at all .the next example is somewhat stronger .denote by a usual bell - shaped function supported on the interval ] , takes a positive value at , is monotone on each of the intervals ] , and satisfies for all natural .let us also assume that .for a vertex , , of the cube ^n ] where denotes the euclidean distance .a moment s thought shows that is a well - defined -function assuming values in the interval between and , in particular if is a vertex , then depending on the parity .again , one can show that the above function admits no better additive approximation in all orders of interaction up to inclusive than that by the zero function .notice that the normalisation of the function aimed at keeping the lipschitz constant of the order leads to the function whose maximal values reach .in our paper we have attempted to perform initial analysis of the problem of approximating a predictor function on a high - dimensional dataset with additive functions allowing for interactions of a lower order .we are interested in the specifics of medium to high dimensions .the proposed model makes what we believe to be reasonable assumptions , from the modeling viewpoint , on the function to be approximated ( the normalisation conditions and ` higher - order smoothness conditions ' ) .we argue that some conditions of this kind are to be imposed in order to obtain approximation results : we exhibit examples of lipschitz functions in variables for which the best additive function approximation of order of interaction is a constant . under the proposed conditions ,we derive from a taylor - type theorem upper bounds on the approximation errors .the results are illustrated on examples and compared to the results obtained using the mars software package .the examples confirm that the asymptotic order of our error bounds is right .the second named author ( v.p . ) is grateful to the computer sciences laboratory of the australian national university for hospitality extended between july and december 1999 .part of the research ( including the above visit ) was supported by the australian cooperative research centre for advanced computational systems ( acsys ) .partial support was also provided by the marsden fund of the royal society of new zealand , in particular towards a visit by the first named author ( m.h . 
) to the victoria university of wellington in april 2002 .the authors acknowledge the constructive suggestions of the referee which helped to substantially improve the paper .h. woniakowski , _ efficiciency of quasi - monte carlo algorithms for high dimensions _ ,monte carlo and quasi - monte carlo methods 1998 ( h. niederreiter and j. spanier , eds . ) , springer - verlag , berlin , 1999 .
|
we discuss some aspects of approximating functions on high - dimensional data sets with additive functions or anova decompositions , that is , sums of functions depending on fewer variables each . it is seen that under appropriate smoothness conditions , the errors of the anova decompositions are of order for approximations using sums of functions of up to variables under some mild restrictions on the ( possibly dependent ) predictor variables . several simulated examples illustrate this behaviour .
|
among the several models of spontaneous wave function collapse which has been considered so far , the so called qmupl ( quantum mechanics with universal position localization ) model is particularly interesting , being it a very good compromise between mathematical simplicity and physical adequacy .it was first introduced by diosi and subsequently studied in , both from the mathematical as well as physical point of view .it is particularly relevant because it is the simplest model describing the evolution of the wave function of a system of distinguishable particles , subject to a spontaneous collapse in space ; as such , it can be analyzed in great mathematical detail .the model is defined by the following stochastic differential equation ( sde ) : \psi_{t}(\ { x \});\ ] ] where the symbol denotes the coordinates of the particles ( for simplicity , we will work in one spatial dimension ) .the operator is the standard quantum hamiltonian of the composite system ; is the position operator associated to the -th particle and denotes the quantum expectation value of .the stochastic processes are independent standard wiener processes defined on a probability space , and the parameters are positive coupling constants which is convenient to take proportional to the mass of the particle according to the formula : where is the mass of the -th particle , is a reference mass which , at the non relativistic level , is reasonable to take equal to the mass of a nucleon ( kg ) , while is the only true new parameter of the model , whose value sets the strength of the collapse mechanism .the numerical value of has to be chosen in such a way that : i ) the model reproduces quantum mechanical predictions for microscopic systems ; ii ) it rapidly induces the collapse of the wave function describing the center of mass of a macroscopic object . in the literature , two quite different values for been proposed , the first by ghirardi , rimini , and weber , and the second by adler : grw s choice is motivated by the requirement that superpositions of order nucleons , displaced by a distance of at least cm , be suppressed within sec .adler s choice instead is motivated by the requirement that the collapse occurs already at the level of latent image formation .more specifically , grw set sec , where is the collapse rate of the grw model ; adler instead set sec , where is the noise - strength coupling constant of the csl model ; the relation between of our model and and is : , with .. is manifestly not linear , which makes it difficult to analyze , in particular it makes it hard to find its solutions .the way to circumvent this obstacle is to linearize the equation , according to the following prescription . consider the linear sde : \phi_{t}(\ { x \}),\ ] ] where the stochastic processes are independent standard wiener processes with respect to a new measure .it can be shown that is a martingale , which can be used to generate a new measure from a given one : the measure introduced here above is chosen in such a way that is the radon - nikodyn derivative of with respect to , i.e. : .moreover , girsanov s theorem states that the two sets of wiener processes and are related as follows : . given these ingredients , it is easy to relate the solutions of eq . to those of eq . : given a solution of eq ., one first considers the normalized state , and then replaces the noises with according to the formula given above .it is not difficult to show that the wave function so obtained solves eq . 
.further details can be found in .the aim of this paper is to analyze the generalization of the qmupl model to non - markovian gaussian random processes . as discussed in , the generalization of the linear eq . tothe non - markovian case takes the form : \phi_{t}(\ { x \}),\ ] ] where the noises are now supposed to be _ gaussian random processes _ whose mean and correlation function , expressed with respect to the measure , are equal to : = 0 , \qquad\quad { \mathbb e}_{\mathbb q } [ w_{n}(t ) w_{m}(s ) ] = d_{nm}(t , s).\ ] ] the correlation function is assumed to be real , symmetric and positive - semidefinite .like in the white - noise case , eq . does not preserve the norm of the wave function .accordingly , we assume that the _ physical states _ are the normalized states : moreover , we assume that the _ physical probability _ measure is , which is related to the measure , by means of which the statistical properties of the noises have been defined , according to the formula : _ measurable quantities _ are given by expressions of the form : ] , which contrary to the standard quantum case , has both a real and an imaginary part , is : \ ; = \ ; \int^t_0 ds \left [ \frac{i m}{2\hbar}\ , q'^2(s ) + \sqrt{\lambda } q(s ) w(s ) -\lambda q(s ) \int_0^t dr\ , q(r ) d(s , r)\right].\ ] ] that eq. represents the correct action associated to eq .can be easily verified by checking that the wave function solves eq . .the advantage of the path integral representation of the green s function , with respect to the standard representation associated to a differential equation like , is that it avoids resorting to the functional derivative of the noise , which is a source of major complications .we now compute the path integration in . following the standard feynman polygonal approach , we divide the time interval ] is the discretized form of the ` action ' , which reads : = \sum_{k=1}^n\left[\frac{i}{\hbar } \frac{m}{2\epsilon}(q_k - q_{k-1})^2 + \epsilon\sqrt{\lambda } w_k q_k - \epsilon^2 \lambda q_k\sum_{j=1}^n d_{k , j}q_j\right],\ ] ] where and .the constraints are : and .we re - write , separating the terms which are constant with respect to the integration variables , the linear terms and the quadratic terms ; using a vector notation , we have : \hspace{2cm}\nonumber \\ & & \hspace{5cm}\cdot\int_{-\infty}^{+\infty}d\mathbf{x } \exp\left[-\mathbf{x } \cdot a \mathbf{x } + 2 \ , \mathbf{x } \cdot \mathbf{y}\right]\,,\end{aligned}\ ] ] where and are two -dimensional vectors defined as follows : we have also used the short hand notation : .the matrix is the sum of two symmetric , -dimensional square matrices and , whose entries are : the multiple gaussian integral of eq. can be immediately evaluated by using the standard result : = \sqrt{\frac{\pi^{n-1}}{\det(a ) } } \,\exp\left[\frac{a^2}{4 } \mathbf{y } \cdot a^{-1}\mathbf{y}\right];\ ] ] then becomes : .\ ] ] note that the integral in eq. exists only if is a positive definite matrix .the result can be extended to non - negative matrices , following e.g. the procedure of theorem 1 , page 13 of .accordingly , our results strictly hold only when the correlation function of the noise , seen as an integral kernel , is non - negative definite .the next step is to take the limit , in which case one encounters two main difficulties : the first is to evaluate the inverse matrix , the second is to compute the determinant of , in both cases for any . to solve these difficulties ,we proceed as in . 
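Before moving on, the multiple Gaussian integral invoked above can be sanity-checked numerically. The following sketch is not part of the original derivation: it verifies the generic real n-dimensional identity underlying the quoted closed form, for n = 2 and a real, symmetric, positive-definite matrix; the complex-valued matrix appearing in the discretized action is assumed to behave analogously whenever the integral exists, and all numerical values below are arbitrary test inputs.

```cpp
#include <cmath>
#include <cstdio>

// Numerical sanity check of the generic n-dimensional Gaussian integral identity
//   \int exp(-x^T A x + 2 y^T x) d^n x = sqrt(pi^n / det A) * exp(y^T A^{-1} y),
// here for n = 2 and a real, symmetric, positive-definite A (arbitrary test values).
int main() {
    const double pi = 3.14159265358979323846;
    const double A[2][2] = {{2.0, 0.5}, {0.5, 1.5}};
    const double y[2]    = {0.3, -0.2};

    const double detA = A[0][0] * A[1][1] - A[0][1] * A[1][0];
    const double Ainv[2][2] = {{ A[1][1] / detA, -A[0][1] / detA},
                               {-A[1][0] / detA,  A[0][0] / detA}};
    const double quad = y[0] * (Ainv[0][0] * y[0] + Ainv[0][1] * y[1])
                      + y[1] * (Ainv[1][0] * y[0] + Ainv[1][1] * y[1]);
    const double closed = std::sqrt(pi * pi / detA) * std::exp(quad);

    // brute-force midpoint quadrature on [-L, L]^2
    const double L = 10.0;
    const int    N = 1500;
    const double h = 2.0 * L / N;
    double numeric = 0.0;
    for (int i = 0; i < N; ++i) {
        const double x0 = -L + (i + 0.5) * h;
        for (int j = 0; j < N; ++j) {
            const double x1 = -L + (j + 0.5) * h;
            const double xAx = A[0][0]*x0*x0 + 2.0*A[0][1]*x0*x1 + A[1][1]*x1*x1;
            numeric += std::exp(-xAx + 2.0 * (y[0] * x0 + y[1] * x1));
        }
    }
    numeric *= h * h;
    std::printf("closed form: %.8f   quadrature: %.8f\n", closed, numeric);
    return 0;
}
```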
in order to evaluate , we introduce a twice differentiable function ] and ] given by , satisfies the following equation : g(x , t;x_0,0)+\tilde{g}(x , t;x_0,0)\,,\ ] ] with : \int_0^t ds\ , q(s ) d(t , s ) e^{\mathcal{s}[q]}\,.\ ] ] the path integral in can not be trivially reduced to because appears , which depends on the entire time interval ] , where solves eq .and is the corresponding normalized solution ; is a generic self - adjoint operator .averages of this kind are particularly important , as they represent physical quantities , directly connected to experimental outcomes .the difficulty in computing such averages lies both in the difficulty in solving eq . and in the fact that depends non - trivially on the noise . in the white noise - case , a very helpful trick , known as the _ imaginary noise trick _ allows to simplify considerably the problem .let us consider the following class of stochastic differential equations : \psi_{t}^{\xi},\ ] ] where is a constant complex factor . by using standard it calculus one can show that ] where the density matrix is defined as : \equiv { \mathbb e}_{\mathbb q } [ | \phi_t \rangle \langle \phi_t |] ] is the standard action associated to the quantum hamiltonian .the propagator associated to can then be expressed as follows : \,,\ ] ] where denotes complex conjugation . substituting the definition for the wave function s propagator into , and exchanging the path - integration with the stochastic average , the stochastic terms average as follows : \right]= }\nonumber \\ & = & \exp\left[\frac{1}{2}\int_0^tds\int_0^tdr d(s , r ) \left(\xi^2\lambda q(s)q(r)+\xi^ { * 2}\lambda q'(s)q'(r)+ \nonumber \\ & & \end{aligned}\ ] ] the propagator then becomes : \int^{q'(t)=x'}_{q'(0)=x'_0 } \mathcal{d}[q']\exp\left[\int_0^tds \bigg(\frac{i}{\hbar } \mathcal{s}_{0}[q]-\frac{i}{\hbar } \mathcal{s}_{0}[q']\right.\nonumber\\ & & \qquad\qquad\quad\left.\left.-\frac{\lambda}{2}|\xi|^2\int_0^tdr d(s , r ) \left(q(s)-q'(s)\right)\left(q(r)-q'(r)\right)\right)\right]\,,\end{aligned}\ ] ] which depends only on .we can conclude that the evolution for the density matrix is independent from a phase change in the coupling with the noise .as anticipated , this phase change invariance provide a very handy tool to compute average values , such as the mean position and mean momentum . in place of eq ., let us consider the equation : \phi_t(x)\,,\ ] ] which belongs to the class , with .this is a linear , unitary , norm - preserving standard schrdinger equation for a free particle under the influence of a stochastic potential .the evolution of the stochastic average of the mean value of an observable is given by the following equation : =\frac{i } { 2m\hbar}{\mathbb e}_{\mathbb q } [ \langle [ p^2,o ] \rangle_t ] - i\sqrt{\lambda } { \mathbb e}_{\mathbb q } [ w(t)\langle\left[q , o\right ] \rangle_t]\,,\ ] ] where , as usual , and that the mean of is 0 , one finds : & = & \frac{1}{m}{\mathbb e}_{\mathbb q } \left[\langle p\rangle_t\right]\ , , \label{eq : yhdfgdfgq}\\ \frac{d}{dt}{\mathbb e}_{\mathbb q } \left[\langle p\rangle_t\right ] & = & 0\ , ; \label{eq : yhdfgdfgp}\end{aligned}\ ] ] as we see , we recover newton s equations for a free particle .in particular , also in the non - markovian case , like in the white - noise case , the momentum of an isolated system is conserved , in the average . 
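As a quick numerical illustration of the averaged Newton equations just derived, the sketch below (not taken from the paper, natural units hbar = m = 1) propagates an ensemble of classical Ehrenfest-like trajectories driven by a zero-mean colored noise; the noise is modelled here as an Ornstein-Uhlenbeck process, an assumption standing in for a generic Gaussian noise with exponential correlation. The ensemble averages of position and momentum follow the free-particle equations, while individual realizations fluctuate.

```cpp
#include <cmath>
#include <cstdio>
#include <random>

// Illustration only (hbar = m = 1): ensemble of classical trajectories under a
// zero-mean colored random force, checking that E[q(t)] and E[p(t)] obey the
// free Newton equations while single realizations wander.
int main() {
    const double lambda = 1.0, gamma = 2.0, dt = 1e-3, T = 5.0;
    const int    steps = static_cast<int>(T / dt), samples = 5000;
    std::mt19937 rng(42);
    std::normal_distribution<double> gauss(0.0, 1.0);

    double mean_q = 0.0, mean_p = 0.0, var_p = 0.0;
    for (int s = 0; s < samples; ++s) {
        double q = 0.0, p = 1.0, w = 0.0;          // q(0) = 0, p(0) = 1
        for (int k = 0; k < steps; ++k) {
            // OU update: dw = -gamma*w dt + gamma dW  (zero mean, exponential correlation)
            w += -gamma * w * dt + gamma * std::sqrt(dt) * gauss(rng);
            p += std::sqrt(lambda) * w * dt;        // random force with zero mean
            q += p * dt;                            // free streaming
        }
        mean_q += q; mean_p += p; var_p += p * p;
    }
    mean_q /= samples; mean_p /= samples;
    var_p = var_p / samples - mean_p * mean_p;
    std::printf("E[q(T)] = %.4f  (Newton: %.4f)\n", mean_q, T);
    std::printf("E[p(T)] = %.4f  (Newton: 1.0000), Var[p(T)] = %.4f\n", mean_p, var_p);
    return 0;
}
```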
in sec .[ sec : eight ] , where we analyze the time evolution of gaussian states , we will see that the fluctuations around the average are inversely proportional to the mass of the system .these two facts , together with the fact that the collapse scales with the size of the system , lead to the following result : _ the wave function of an isolated macro - object behaves , for all practical purposes , like a particle moving deterministically in space according to newton s laws ._ because of the many experimental implications , it is important to check how the mean free energy evolves in time .this is most easily computed by shifting to the heisenberg picture , in which case one finds : = \frac{\lambda\hbar^2}{m}\int_0^t ds d(t , s)\,;\ ] ] this is the expected generalization of the well - known white noise formula for the mean energy increase in collapse models . for a physically reasonable correlation function such as the exponential one of eq ., one has : \ ; = \;\langle h_0 \rangle_0+\frac{\lambda\hbar^2}{2m}\left(t+ \frac{e^{-\gamma t}-1}{\gamma}\right)\,:\ ] ] as we see , also in this case the mean energy increases linearly in time , without reaching a steady value .more generally , let us assume time translation invariance , and let us consider the spectral decomposition of a generic correlator : we have : { } \;\ ; \frac{\pi}{2 } \tilde{d}(0).\ ] ] the above formula shows that the energy production is nonzero also at large times , unless the correlator has a cutoff at low frequencies .an important lesson to learn from the above analysis is that non - markovian terms do not introduce thermalization effects in the evolution of a quantum system ; such effects can be introduced only by modifying the form of the operator coupled to the noise , as first discussed in .nevertheless , non - markovian terms are extremely important , as they affect the time evolution of physical quantities , thus the predictions of collapse models , at small times .a significative example has been first provided in .in this section we study the time evolution of gaussian wave functions , whose form is clearly preserved by the green s function ; for simplicity , we will assume time translation invariance , so that reduces to .a generic gaussian state of the form : \,.\ ] ] evolves in time to the following state : \,,\ ] ] where : and the functions have been defined in eqs .. we can draw some important conclusions about the time evolution of gaussian wave functions : * since neither nor depend on the noise , the time evolution of the spread of the wave function is deterministic , as in the white noise case .an initially spread - out gaussian wave function shrinks in space , reaching an asymptotic final spread . in the case of an exponential correlation function , we can provide the explicit expression for the asymptotic value for the spread ( see next subsection ) .* by making the substitution : one can easily prove that eq ., thus the propagator , does not depend on the mass of the particle .thus , one way to see the effect of the mass on the global dynamics is to take the evolution of the wave function for the reference mass , and then `` shrink '' the space coordinates by a factor .this leads to the _ amplification mechanism _ , which is the characteristic feature of collapse models :the bigger the system , the faster the wave function shrinks in space . 
*the center of the gaussian wave function evolves _ randomly _ in space , as expected .its average value has already been computed in , and evolves classically . moreover , due to the independence of the dynamics from under the substitution , one can conclude that the fluctuations of around the classical motion go like since the following equality for the variance \right]^2} ] also evolves randomly in space .its average value is constant in time ( see eq . ) , as expected from a free particle , while the fluctuations around the average increase like . if however we consider the fluctuations of the mean velocity , we have that they decrease like . thus also in this respect one recovers classical determinism at the macroscopic level .the above remarks show that the non - markovian qmupl model shares all the important features of the white noise model .more quantitative details can be given by taking a specific expression for the correlation function , as we will do in the next subsection .we study in some more detail the time evolution of the gaussian solution , when the correlation function is exponential as in .we focus our attention on the spread in position which is given by the inverse of twice the square root of the real part of , whose analytic expression is given explicitly by eqs .[ fig:1 ] and [ fig:2 ] show some typical behavior of for small times ( fig .[ fig:1 ] ) and large times ( fig . [ fig:2 ] ) , for different values of and for kg . as we see and as we expect , the larger the value of , the stronger the noise and the faster the collapse in space . for a 1 kg particle , for different values of , for the same initial condition and with m sec .the curve with corresponds to the white noise case .time is measured in seconds , distances in meters.,scaledwidth=50.0% ] for a 1 kg particle , for different values of , for the same initial condition and with m sec .the white noise case would appear in the graph as a straight line with value 1.27 m. time is measured in seconds , distances in meters.,scaledwidth=50.0% ] fig .[ fig:3 ] shows how the spread of a gaussian wave function associated to a particle having a mass kg ( which is the total mass of a system of nucleons ) , having initial spread m , decreases after sec , as a function of .the value of the coupling constant is the grw value .grw have chosen the value of the coupling constant in order to ensure that the wave function of a system of at least an avogadro s number of particle collapses , within a time interval of sec , below m. fig .[ fig:3 ] shows that also for relatively small values of , the non - markovian collapse models preserves this same feature of the white - noise model .[ fig:4 ] displays the same graph as that of fig .[ fig:3 ] , with the typical values chosen by adler . 
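Stepping back to the mean energy increase for the exponential correlation function quoted above, the closed form is easy to tabulate. The short program below (illustrative constants only, with hbar = m = lambda = 1 and the initial mean energy set to zero) evaluates it for a few values of gamma and compares it with the white-noise expression lambda*hbar^2*t/(2m), which is recovered as gamma grows.

```cpp
#include <cmath>
#include <cstdio>

// Mean energy increase for the exponential correlation function,
//   E[<H>](t) = <H>_0 + (lambda*hbar^2 / 2m) * ( t + (exp(-gamma*t) - 1)/gamma ),
// compared with the white-noise limit (lambda*hbar^2 / 2m) * t.
// Units and parameter values are illustrative (hbar = m = lambda = 1, <H>_0 = 0).
int main() {
    const double lambda = 1.0, hbar = 1.0, m = 1.0;
    const double prefac = lambda * hbar * hbar / (2.0 * m);
    const double gammas[] = {0.1, 1.0, 10.0, 100.0};
    for (double t = 1.0; t <= 5.0; t += 1.0) {
        std::printf("t = %.1f  white-noise: %.4f ", t, prefac * t);
        for (double g : gammas)
            std::printf("  gamma=%5.1f: %.4f", g, prefac * (t + (std::exp(-g * t) - 1.0) / g));
        std::printf("\n");
    }
    return 0;
}
```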
in adler sets the value of the csl - coupling constant equal to sec by noting that in the process of latent image formation , which has a characteristic time of about sec , a number of atoms approximately equal to 20 is involved .he then assumes that the collapse process ensures that the reduction occurs already at this stage , from which the value sec comes .the graph shows that even for relatively small values of ( adler and ramazanoglu have shown that choosing sec already changes significantly the predictions of non - markovian models with respect to white - noise models ) , an initially spread - out wave function shrinks rapidly below m , which corresponds to a well - localized wave packet .m decreases after sec , as a function of .the mass of the particle has been set equal to kg , corresponding to the total mass of a system containing an avogadro s number of nucleons .the coupling constant has been given the grw - value m sec .larger values of imply a faster collapse in space .time is measured in seconds , distances in meters.,scaledwidth=50.0% ] m decreases after sec , as a function of .the mass of the particle has been set equal to kg , which is the mass of a system of ( , ) nucleons , with and taken from eq .( 8) of .the coupling constant has been given adler s middle value sec .the dashed line corresponds to the white noise case .time is measured in seconds , distances in meters.,scaledwidth=50.0% ] for an exponential correlation function , we can write down the analytic expression of the asymptotic value of , which is : with defined in . from this , the asymptotic spread in position can be easily computed . in particular ,if we take the white noise limit , one obtains the finite value : which matches with the value known in the literature .we have thoroughly investigated the dynamics of a free quantum particle as described by the non - markovian qmupl model of spontaneous wave function collapse .we have provided an explicit formula for the green s function ; we have shown that it reproduces the well - known white noise case , and have analyzed the physically interesting case of an exponential correlation function .we have proven that the non - markovian model shares all the most important features of the corresponding white - noise model ; we have described in particular the evolution of gaussian wave functions . there are of course several other important issues which need to be investigated .in particular : * it is important to set on a rigorous mathematical ground the change of measure defined in , and to derive the analogous of girsanov s theorem which holds for the white - noise case .a sketch of the proof can be found in appendix b of ref . . *it is also important to set a bound on the spread of the general solution as a function of time , in order to see how it decays in time .such a formula would be crucial for setting a lower bound on the value of by guaranteeing that the model collapses sufficiently big systems in sufficiently short time .* in the case of an exponential correlation function , it would be interesting to prove if _ any _ initial state collapses asymptotically to a gaussian state whose spread in position and momentum is fixed and given by . in this paperwe have proved that this is true only for the special case of initially gaussian states .a similar general theorem has been recently proved for the white - noise case . these problems will be the subject of future investigation .the authors wish to thank s.l .adler , d. 
drr and g.c .ghirardi for many useful conversations .they also wish to thank p. pearle for showing them that the green s function can be equally well computed by using the standard operator formalism in place of the path integral formalism .they finally wish to thank a. fonda for his guidance through the proof of the existence and uniqueness theorem of appendix [ sec : app ] .in this appendix we prove that equation : with boundary conditions , , admits a unique solution .the same theorem applies also to eq . for .in order to simplify the proof , it is convenient to set both boundary conditions to zero .this can be done without loss of generality , as follows .let us define the new function : obviously , .moreover , solves the following equation : where it is then sufficient to prove the following theorem . 0.2 cm theorem .let be a real continuous function on \times [ 0,t ] ] .then the integro - differential equation with boundary conditions , admits a unique solution ] the set of functions ] .our problem reduces to showing existence and uniqueness of solutions for eq . .since the product of two compact operators is compact , then also is compact .we now use fredholm s alternative theorem ( theorem 8.7 - 2 , page 452 of ) , according to which in order to prove the theorem it suffices to show that the homogeneous equation associated to eq ., i.e. =0\,,\qquad z(0)=z(t)=0\,,\ ] ] admits only the trivial solution . to prove this , we separate the real and imaginary part of which we respectively denote with the superscripts , obtaining with boundary conditions .multiplying these two equations respectively by and , and integrating by parts one finds : ^ 2-\int_0^t ds\ , z^{\text{\tiny{r}}}(s)\int_0^t dr\ , d(s , r)z^{\text{\tiny{i}}}(r)&= & 0\,,\\ \int_0^t ds \left[(z^{\text{\tiny{i}}})'(s)\right]^2+\int_0^t ds\,z^{\text{\tiny{i}}}(s)\int_0^t dr\ , d(s , r)z^{\text{\tiny{r}}}(r ) & = & 0\,.\end{aligned}\ ] ] exploiting the symmetry of in its variables , we can sum the two equations , obtaining : =0\,;\ ] ] this implies that and are constants and , applying the boundary conditions , these constants are equal to zero .we can then state that eq. admits only the trivial solution .this proves existence and uniqueness of solutions for that eq . , and thus for eq .
|
we analyze the non - markovian dynamics of a quantum system subject to spontaneous collapse in space . after having proved , under suitable conditions , the separation of the center - of - mass and relative motions , we focus our analysis on the time evolution of the center of mass of an isolated system ( free particle case ) . we compute the explicit expression of the green s function , for a generic gaussian noise , and analyze in detail the case of an exponential correlation function . we study the time evolution of average quantities , such as the mean position , momentum and energy . we eventually specialize to the case of gaussian wave functions , and prove that all basic facts about collapse models , which are known to be true in the white noise case , hold also in the more general case of non - markovian dynamics .
|
molecular dynamics ( md ) simulations trace the trajectory of thousands , millions , and even billions of particles over millions of timesteps , enabling materials science research at the atomistic level .such simulations are commonly run on highly parallel architectures , and take up a sizeable portion of the computing cycles provided by today s supercomputers . in this paper , we extend the lammps molecular dynamics simulator with a new , optimized and portable implementation of the tersoff multi - body potential .+ in many simulations , interactions among particles are assumed to occur in a pair - wise fashion ( particle - to - particle ) , as dictated by potentials such as the coulomb or lennard - jones ones . however , several applications in materials science require multi - body potential formulations . with these ,the force between two particles does not depend solely on their distance , but also on the relative position of surrounding particles .this added degree of freedom in the potential enables more accurate modeling but comes at the cost of additional complexity in its evaluation . because of this complexity , the optimization of multi - body potential is still largely unexplored . by design ,lammps main parallelization scheme is an mpi - based domain decomposition ; this makes it possible to run on clusters and supercomputers alike , and to tackle large scale systems .optionally , lammps can also take advantage of shared - memory parallelism via openmp . however , support for vectorization is limited to intel architectures and only to a few kernels .given the well - established and well - studied mechanisms for parallelism in md codes , our efforts mostly focus on vectorization , as a further step to fully exploit the computational power of the available hardware .on current architectures , simd processing contributes greatly to the system s peak performance ; simd units offer hardware manufacturers a way to multiply performance for compute - intense applications .this principle is most pronounced in accelerators such as intel s xeon phis and nvidia s teslas .several successful open - source md codesgromacs , namd , lammps , and ls1 mardyn already take advantage of vectorization in their most central kernels. however , these vectorized kernels usually do not include multi - body potentials .implementation methods for the kernels vary between hand - written assembly , intrinsics ( compiler - provided functions that closely map to machine instructions ) , and annotations that guide the compiler s optimization .furthermore , the majority of these kernels are not readily portable to different architectures ; on the contrary , for each target architecture , separate optimizations are required .our objective is an approach sufficiently general to attain high performance on a wide variety of architectures , and that requires changes localized in few , general building blocks .we identified such a solution for the tersoff potential .when transitioning from one architecture to another , the numerical algorithm stays fixed , and only the interface to the surrounding software components ( memory management , pre - processing ) needs to be tailored .we demonstrate our approach for performance portability on a range of architectures , including arm , westmere to broadwell , nvidia s teslas ( from the kepler generation ) , and two generations of intel xeon phi ( knights corner and knights landing ) . 
as already mentioned , the numerical algorithm built on top of platform - specific building blocks stays the same across all architectures ; it is the building blocks that are implemented ( once and for all ) for each of the instruction sets .our evaluation ranges from single - threaded to a cluster of nodes containing two xeon phi s . depending on architecture and benchmark , we report speedups ranging from 2x to 8x .[ [ related - work ] ] related work + + + + + + + + + + + + in addition to the aforementioned lammps , several other md codes are available , including gromacs , namd , dl_poly2 , and ls1_mardyn .gromacs is well known for its single - threaded performance , provides support for the xeon phi , and already contains a highly portable scheme for the vectorization of pair potentials .all the other softwares also contain routines specific to certain platforms such as the phi or cuda .lammps is a simulator written in c++ and designed to favor extensibility .it excels at scalability using mpi , and comes with a number of optional packages to run on a different platforms with different parallel paradigms .lammps supports openmp shared - memory programming , gpus , kokkos , and vector - enabled intel hardware .the vectorization of pair - potential calculations was specifically addressed in an md application called minimd ( proxy for the short - ranged portion of lammps ) .the target of that work are various x86 instruction set extensions , including the xeon phi s imci , and the optimization of cache access patterns. there have been efforts to speed up pair potentials on gpus ; these techniques are similar to what one would use to achieve vectorization , and the pattern of communication between the gpu and the host is similar to what is needed to achieve high - performance with the first generation of the xeon phi . in general , gpus have been used to great effect for speeding up md simulations .there exist implementations for gpus that support multi - body potentials such as eam , stillinger - weber and tersoff .as opposed to our work , the tersoff implementation for the gpu requires explicit neighbor assignments and thus is only suitable for rigid problems ; by constrast , our approach is suitable for general scenarios . in imd ,scalar optimizations similar to those we describe here are implemented ; however , no sort of vectorization is included .our work is in part based on previous efforts to port md simulations , and lammps in particular , to the xeon phi ; specifically , we use the same data and computation management scheme .[ [ organization - of - the - paper ] ] organization of the paper + + + + + + + + + + + + + + + + + + + + + + + + + in sec . [sec : md - over ] , we give a quick introduction to md simulations in general , while a discussion specific to the tersoff potential and its computational challenges comes in sec .[ sec : ters ] .we introduce our optimizations and vectorization schemes in sec .[ sec : opt ] , and then describe the techniques to achieve portability in sec .[ sec : impl ] . in sec .[ sec : res ] , we provide the results of our multi - platform evaluations , and in sec .[ sec : conc ] we draw conclusions . [[ open - source ] ] open source + + + + + + + + + + + the associated code is available at .a typical md simulation consists of a set of atoms ( particles ) and a sequence of timesteps . 
At each timestep, the forces on each atom are calculated, and velocity and position are updated accordingly. The forces are modeled by a potential V that depends solely on the positions of the atoms: V represents the potential energy of the system, and the force on an atom is the negative derivative of V with respect to that atom's position:

F_i = -\frac{\partial V}{\partial \mathbf{r}_i}.

Most non-bonded potentials, such as Lennard-Jones and Coulomb, are pair potentials: as such, they can be expressed as a double sum over the atoms, where the additive term depends only on the relative distance between the atoms:

V = \sum_i \sum_{j > i} V_2(r_{ij}). \label{eqn:pair-pot}

In practice, eq. [eqn:pair-pot] is computed by limiting the inner summation (over j) to the set of atoms \mathcal{N}_i (the "neighbor list") that are within a certain distance from atom i:

V = \sum_i \sum_{j \in \mathcal{N}_i} V_2(r_{ij}). \label{eqn:pot-neigh}

This simplification is based on the assumption that V_2 goes to zero as the distance increases. The assumption is valid for most pair potentials, even though some have to be augmented using long-ranged calculation schemes. With this second formulation, the complexity of the computation of V decreases from quadratic (in the number of atoms) to linear, thus making large-scale simulations feasible. Algorithm [algo:general-pair] illustrates how, based on a potential as given in eq. [eqn:pot-neigh], the potential energy and the forces on each atom can be evaluated. As opposed to pair potentials, multi-body potentials deviate from the form of eq. [eqn:pair-pot]. In particular, the pair term is replaced by a term that depends on more than just the distance: it may depend also on the distance of other atoms close to atom i or j, and on the angle between i, j and the surrounding atoms. Omitting trivial definitions, the Tersoff potential is defined as follows:

V = \frac{1}{2}\sum_i \sum_{j \in \mathcal{N}_i} \underbrace{f_C(r_{ij})\left[f_R(r_{ij}) + b_{ij}\, f_A(r_{ij})\right]}_{v(i,j,\zeta_{ij})}, \label{eqn:ters-1}
b_{ij} = \left(1 + \beta^\eta\,\zeta_{ij}^\eta\right)^{-\frac{1}{2\eta}}, \quad \eta \in \mathbb{R}, \label{eqn:ters-2}
\zeta_{ij} = \sum_{k \in \mathcal{N}_i \setminus \{j\}} \underbrace{f_C(r_{ik})\, g(\theta_{ijk})\, \exp\!\left(\lambda_3 (r_{ij} - r_{ik})\right)}_{\zeta(i,j,k)}. \label{eqn:ters-3}

Eq. [eqn:ters-1] indicates that two forces act between each pair of atoms: an attractive force modeled by f_A, and a repulsive force modeled by f_R. Both depend only on the distance between atom i and atom j. The bond-order factor b_{ij}, defined by eq. [eqn:ters-2], however, is a scalar that depends on all the other atoms k in the neighbor list of atom i, by means of their distance r_{ik} and angle \theta_{ijk}, via \zeta_{ij} (from eq. [eqn:ters-3]). Since the contribution of the pair (i,j) depends on other atoms, Tersoff is a multi-body potential. f_C is a cutoff function, smoothly transitioning from 1 to 0; g describes the influence of the angle on the potential; all other symbols in eq. [eqn:ters-1]-[eqn:ters-3] are parameters that were empirically determined by fitting to known properties of the modelled material.
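To make the structure of eqs. [eqn:ters-1]-[eqn:ters-3] concrete before discussing the implementation, here is a minimal, energy-only sketch for a single atom type. It is not the LAMMPS implementation: the parameter values are placeholders loosely based on published silicon parametrizations, and f_C, f_R, f_A and g(theta) use commonly assumed Tersoff forms.

```cpp
#include <array>
#include <cmath>
#include <cstdio>
#include <vector>

// Energy-only sketch of eqs. (ters-1)-(ters-3) for a single atom type.
// Placeholder parameters; f_C, f_R, f_A and g use commonly assumed Tersoff forms.
struct TersoffParams {
    double A = 1830.8, B = 471.18;          // repulsive / attractive prefactors
    double lam1 = 2.4799, lam2 = 1.7322, lam3 = 0.0;
    double beta = 1.1e-6, eta = 0.78734;
    double c = 1.0039e5, d = 16.217, h = -0.59825;
    double R = 2.85, D = 0.15;              // cutoff transition region [R-D, R+D]
};

static double f_C(double r, const TersoffParams& p) {
    const double pi = 3.14159265358979323846;
    if (r < p.R - p.D) return 1.0;
    if (r > p.R + p.D) return 0.0;
    return 0.5 * (1.0 - std::sin(0.5 * pi * (r - p.R) / p.D));
}
static double f_R(double r, const TersoffParams& p) { return  p.A * std::exp(-p.lam1 * r); }
static double f_A(double r, const TersoffParams& p) { return -p.B * std::exp(-p.lam2 * r); }
static double g_theta(double cosT, const TersoffParams& p) {
    const double c2 = p.c * p.c, d2 = p.d * p.d, hc = p.h - cosT;
    return 1.0 + c2 / d2 - c2 / (d2 + hc * hc);
}

// x: positions; neigh[i]: neighbor list N_i (indices of atoms within cutoff + skin)
double tersoff_energy(const std::vector<std::array<double,3>>& x,
                      const std::vector<std::vector<int>>& neigh,
                      const TersoffParams& p) {
    auto rvec = [&](int a, int b, std::array<double,3>& d) {
        for (int m = 0; m < 3; ++m) d[m] = x[b][m] - x[a][m];
        return std::sqrt(d[0]*d[0] + d[1]*d[1] + d[2]*d[2]);
    };
    double E = 0.0;
    for (int i = 0; i < static_cast<int>(x.size()); ++i) {    // I loop
        for (int j : neigh[i]) {                              // J loop
            std::array<double,3> dij;
            const double rij  = rvec(i, j, dij);
            const double fcij = f_C(rij, p);
            if (fcij == 0.0) continue;                        // skin atom: no contribution
            double zeta = 0.0;
            for (int k : neigh[i]) {                          // K loop: accumulate zeta_ij
                if (k == j) continue;
                std::array<double,3> dik;
                const double rik  = rvec(i, k, dik);
                const double fcik = f_C(rik, p);
                if (fcik == 0.0) continue;
                const double cosT = (dij[0]*dik[0] + dij[1]*dik[1] + dij[2]*dik[2]) / (rij * rik);
                zeta += fcik * g_theta(cosT, p) * std::exp(p.lam3 * (rij - rik));
            }
            const double bij = std::pow(1.0 + std::pow(p.beta * zeta, p.eta), -0.5 / p.eta);
            E += 0.5 * fcij * (f_R(rij, p) + bij * f_A(rij, p));   // eq. (ters-1)
        }
    }
    return E;
}

int main() {
    // toy configuration: three atoms, full (symmetric) neighbor lists
    std::vector<std::array<double,3>> x = {{{0,0,0}}, {{2.35,0,0}}, {{0,2.35,0}}};
    std::vector<std::vector<int>> neigh = {{1,2}, {0,2}, {0,1}};
    std::printf("E = %.6f (toy system)\n", tersoff_energy(x, neigh, TersoffParams{}));
    return 0;
}
```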
although these parameters mean that many lookups are necessary , the functions within the potential ( ) are expensive to compute , thus making the tersoff potential a good target for vectorization .[ eqn : ters-1][eqn : ters-3 ] give rise to a triple summation ; this is mirrored by the triple loop structure of algorithm [ algo : ters - lammps ] , which describes the implementation found in lammps in terms of the functions and , and calculates forces and potential energy : for all pairs of atoms , first is accumulated , and then the forces are updated in two stages , first with the contribution of the term , and finally with the contributions of the terms . for the following discussions , it is important to keep in mind the loop structure of algorithm [ algo : ters - lammps ] : it consists of an outer loop over all atoms ( denoted by the capital letter ) , an inner loop over a neighbor list ( denoted by the capital letter ) , and inside the latter , two more loops over the same neighbor list ( denoted by the capital letter ) . as opposed to pair potentials ,many - body potentials are used with extremely short neighbor lists . in a representative simulation run , rarely contains more than four atoms . assuming that the size of is , the algorithm accesses the atoms in a total of times . in practice , constructing the neighbor list on every timestep would be too expensive . instead, the cutoff radius is extended by a so - called `` skin distance '' .because atoms only move a certain distance per timestep , one can guarantee that no atom enters or exits the cutoff region for a certain number of timesteps by tracking all atoms also within the skin distance .consequently , the neighbor list also only needs to be rebuild after this many steps .we denote the extended neighbor list by instead of .given that the tersoff potential incorporates a cutoff function , the mathematical formulation is equivalent no matter if iterating through or .nevertheless , as little computation as possible should be performed on skin atoms . efficiently excluding skin atomsis one of the major challenges for vectorization .this section discusses the various optimizations that we applied to the algorithm described in the previous section .some are inherited from the libraries that we integrate with ( user - intel and kokkos ) , such as optimized neighbor list build , time integration , and data access ( e.g. alignment , packing , atomics ) .these optimizations are generic in that they apply to any potential that uses that particular package .we devised several other optimizations which are instead specific to the tersoff potential ; they are detailed here . 1 . scalar optimizations. these improvements are useful whether one vectorizes or not .we improve parameter lookup by reducing indirection , and eliminate redundant calculation by inlining function calls .this group of optimizations also includes the method described in sec .[ ssec : prec ] , which aims to remove redundant calculations of the term .2 . vectorization .we discuss details of our vectorization strategy in sec .[ ssec : vect ] , where we present different schemes , and describe their effectiveness for various vector lengths .optimizations that aid vectorization . 
As described in sec. [ssec:avoid-mask] and [ssec:fil], we aim to reduce the waste of computing resources on skin atoms. The first optimization we discuss consists in restructuring the algorithm so that \zeta_{ij} and its derivatives are computed only once, in the first loop over K, and the product with the bond-order dependent prefactor is only performed in the second loop. Since \zeta_{ij} and its derivatives naturally share terms, this modification has a measurable impact on performance. Indeed, \zeta_{ij} can be calculated from intermediate results of the derivative evaluation at the cost of just one additional multiplication. However, the computation of the derivatives in the first loop over K requires additional storage: while the derivatives with respect to the positions of atoms i and j can be accumulated, the derivatives for the atoms k have to be stored separately, as they belong to different k's. In our implementation, this list can contain up to a specified number of elements. Should more elements be necessary, the algorithm falls back to the original scheme, thus maintaining complete generality. Algorithm [algo:deriv] implements this idea.

[Figure (fig:map): three panels, one per mapping scheme, each a grid whose horizontal axis is labeled "vectorization" and whose vertical axis is labeled "parallelism".]

In alg. [algo:deriv] (and alg. [algo:ters-lammps]) the iteration space ultimately is three-dimensional, corresponding to the three nested loops I, J and K. This space needs to be mapped onto the available execution schemes, that is, data parallelism, parallel execution, and sequential execution. We propose three different mappings that are useful in different scenarios. For all of them, it is convenient to map the K dimension onto sequential execution, because values calculated in the K loop, the \zeta(i,j,k)'s, are used in the surrounding J loop, and data computed in the first K loop, i.e. \zeta_{ij}, is then used within the second K loop. Therefore, the problem boils down to mapping the I and J dimensions onto a combination of parallel execution, data parallelism, and, if necessary, sequential execution. In our reasoning, we assume that the amount of available data parallelism is unlimited. In practice, the program sequentially executes chunks, and each chunk takes advantage of data parallelism. As shown in fig. [fig:map], to perform the mapping sensibly we propose three schemes:

* I is mapped to parallel execution, and J to data parallelism.
* I is mapped to parallel execution, and I and J to data parallelism.
* I is mapped to parallel execution and data parallelism, and J to sequential execution.

Scheme ([sfig:vec-1]) is natural for vector architectures with short vectors, such as single precision SSE and double precision AVX.
In these, it makes sense to map the J iterations directly to vector lanes, as there is a good match among them: 3-4 iterations to 4 vector lanes. This scheme is most commonly used to vectorize pair potentials. The advantage of this approach is that the atom i is constant across all lanes. While performing iterations in J through the neighbor list of atom i, the same neighbor list is traversed across all lanes, leading to an efficient vectorization. However, with long vectors and short neighbor lists, this approach is destined to fail on accelerators and CPUs with long vectors. Scheme ([sfig:vec-2]) is best suited for vector widths (8 or 16) that exceed the iteration count of J, as it handles the shortcomings of ([sfig:vec-1]). With this approach, the iterations of I and J are fused, and the fused loop is used for data parallelism. Given that I contains many iterations (as many as atoms in the system), this scheme achieves an unlimited potential for data parallelism. However, in contrast to ([sfig:vec-1]), atom i is not constant across all lanes; consequently, the innermost loop iterates over the neighbor lists of different atoms, leading to a more involved iteration scheme. Even if this iteration is efficient, it cannot attain the same performance as an iteration scheme where all vector lanes iterate over the same neighbor list. The vectorization of the I loop invalidated a number of assumptions of the algorithm: that atom i and its neighbor list are always identical across all lanes, while the atoms j, coming from the same neighbor list, are always distinct. Without these assumptions, special care has to be taken when accumulating the forces to avoid conflicts. For the program to be correct under all circumstances, the updates have to be serialized. In the future, AVX-512 conflict detection support may change this. Whether the disadvantages of scheme ([sfig:vec-2]) outweigh its advantages or not is primarily a question of amortization. The answer depends on the floating point data type used, the vector length, and the features of the underlying instruction set. Scheme ([sfig:vec-3]) is the natural model for the GPU, where data parallelism and parallel execution are blurred together. An I iteration is assigned to each thread, and the thread sequentially works through the J iterations. To implement these schemes, the algorithms are split into two components: a "computational" one, and a "filter". The computational component carries out the numerical calculations, including the innermost loop and the updates to force and energy; the input to this component are pairs of i and j for which the force and energy calculations are to be carried out. Given that the majority of the runtime is spent in computation, this is the part of the algorithm that has to be vectorized. The filter component is instead responsible for feeding work to the computational one; its duty is to determine which pairs to pass. To this end, the data is filtered to make sure that work is assigned to as many vector lanes as possible before entering the vectorized part. This means that the interactions outside of the cutoff region never even reach the computational component.

[Figure (fig:avoid), two panels of the K loop: green is ready-to-compute, red is not-ready-to-compute, and blue is actual calculation. The left is an unoptimized variant, where calculation takes place as soon as at least one lane is ready-to-compute, whereas on the right the calculation is delayed until all lanes are ready-to-compute.]

Section [ssec:vect] just described how we avoid calculation for skin atoms in the J loop. The remaining issue is how we skip them in the K loop. The same argument, that resources must not be wasted on calculation that does not contribute to the final result, applies here, too. As such, as many vector lanes as possible need to be active before entering numerical kernels such as those computing \zeta(i,j,k) and its derivatives. These computational kernels are almost entirely straight-line floating-point intense code, with some lookups for potential parameters in between. This optimization is most important for schemes ([sfig:vec-2]) and ([sfig:vec-3]), as they traverse multiple neighbor lists in parallel. As such, no guarantee can be made that interacting atoms have the same position in all neighbor lists. This leads to sparse masks for the compute kernel: for example, in a typical simulation that uses a vector length of sixteen, no more than four lanes will be active at a time. On GPUs, this effect is even worse, where 95% of the threads in a warp might be inactive. Our optimization extends schemes ([sfig:vec-2]) and ([sfig:vec-3]) to fast-forward through the K loop, until as many lanes as possible can participate in the calculation of a numerical kernel. The idea is that the neighbor list is not traversed at equal speed for all the vector lanes; instead, we manipulate the iteration index independently in the various lanes. Fig. [fig:avoid] visualizes the way this modification affects the behaviour of the algorithm. In that figure, the shade of the particular color roughly corresponds to that lane's progress through the loop. Notice that on the left, calculation (blue) takes place as soon as at least one lane requires that calculation (green). Instead, on the right, a lane that is ready to compute (green) idles (does not change its shade), while the other lanes make further progress (going through shades of red) in search of the iteration where they become ready (green). In our implementation, the calculation (blue) only takes place if all lanes are ready (green). Effectively, we "fast-forward" in each lane, until all of them are ready to compute. To implement the idea from sec. [ssec:avoid-mask], a lot of masking is necessary, because the subset of lanes that have to progress when fast-forwarding changes every time. On platforms where masking has non-trivial overhead, performance can be further optimized. Observe that in fig. [fig:avoid], lanes "spin" until computation is available. We can reduce the amount of spinning by filtering the neighbor list in the scalar segment of the program.
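Before detailing the filtering, the fast-forward traversal itself can be written down compactly in scalar form. The sketch below is a simplified model, not the actual vector-library code: W lanes each walk their own neighbor list and advance independently past out-of-cutoff entries, and only when every lane points at useful work is a single "vector" kernel call issued; the cutoff test, the kernel, the data and the tail handling are all placeholders introduced for illustration.

```cpp
#include <cstddef>
#include <cstdio>
#include <vector>

// Scalar emulation of the "fast-forward" K-loop traversal: W lanes each walk
// their own neighbor list and advance independently past skin entries; the
// (vectorized) kernel is invoked only once all lanes point inside the cutoff.
constexpr int W = 4;                                    // emulated vector width

struct Lane { const std::vector<double>* dist; std::size_t pos; };

static bool within_cutoff(double r) { return r < 3.0; } // placeholder test
static void kernel(const double (&r)[W]) {              // stands in for the zeta kernel
    for (int l = 0; l < W; ++l) std::printf("  lane %d computes r = %.2f\n", l, r[l]);
}

void fast_forward(Lane (&lanes)[W]) {
    for (;;) {
        double r[W] = {0.0, 0.0, 0.0, 0.0};
        bool all_ready = true;
        for (int l = 0; l < W; ++l) {
            Lane& ln = lanes[l];
            // fast-forward this lane past skin atoms ("spinning")
            while (ln.pos < ln.dist->size() && !within_cutoff((*ln.dist)[ln.pos]))
                ++ln.pos;
            if (ln.pos < ln.dist->size()) r[l] = (*ln.dist)[ln.pos];
            else all_ready = false;                     // lane exhausted
        }
        if (!all_ready) {
            // tail: fewer than W lanes have work left; the real code handles
            // this with a mask, the sketch simply stops here
            return;
        }
        std::printf("vector kernel call:\n");
        kernel(r);                                      // one call serves all W lanes
        for (int l = 0; l < W; ++l) ++lanes[l].pos;     // consume the processed entries
    }
}

int main() {
    std::vector<double> n0{1.0, 5.0, 2.0}, n1{4.0, 4.5, 1.5}, n2{2.5, 2.6, 9.0}, n3{0.5, 8.0, 1.0};
    Lane lanes[W] = {{&n0, 0}, {&n1, 0}, {&n2, 0}, {&n3, 0}};
    fast_forward(lanes);
    return 0;
}
```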
to ensure correctness, the filtering is based on the maximum cutoff of all the types of atoms in the system .this means that atoms that physically play a role can not be accidentally excluded from the calculation .filtering with any other cutoff might lead to incorrect results in systems with multiple kinds of atoms , if the cutoff prescribed between any two atom kinds differs . filteringthe neighbor list is especially effective with avx , where the double precision implementation uses the mapping ( [ sfig : vec-1 ] ) , whereas the single precision variant uses ( [ sfig : vec-2 ] ) . without this change ,the overhead to spin is too big to lead to speedups with respect to the double precision version . with this change , most time is again spent in the numerical part of the algorithm .so far we kept the description of the algorithms as generic as possible ; in this section , we cover the actual implementation of the schemes and optimizations from the previous section , and their integration into lammps . from a software engineering standpoint ,the main challenge was to make the implementation maintainable , while achieving portable performance .to this end , openmp 4.5 s simd extensions would be the most appealing solution , but right now lack both compiler support and a number of critical features that are required for our implementation .we resorted to implementing these features ourself as modular and portable building blocks . in the following, we first introduce these building blocks , and then focus on a platform independent implementation .finally , we characterize the different execution modes supported by our code .we identified four groups of building blocks necessary for a portable implementation .\(1 ) vector - wide conditionals .these conditionals check if a condition is true across all vector lanes ; since either all or no lanes enter these conditionals , excessive masking is prevented .\(2 ) reduction operations . these are useful when all lanes accumulate to the same ( uniform - across - lanes ) memory location . in these cases , the reduction can be performed in - register , and only the accumulated value is written to memory .this behaviour can not be achieved with openmp s reduction clause , since it only supports reductions in which the target memory location is known a - priori , while this is not the case for our code .\(3 ) conflict write handling .this feature allows vector code to write to non distinct memory locations ( see the discussion in the previous bullet ) . in the vectorization of md codes, it is often guaranteed that all lanes write to distinct memory locations ( since the atoms in a neighbor list are all distinct ) , which is the assumption that compilers typically make when performing user - specified vectorization .unfortunately , this guarantee does not hold for scheme ( [ sfig : vec-2 ] ) . 
by serializing the accesses ,the ` ordered simd ` clause of the openmp 4.5 standard provides a solution to this issue ; however , at time of writing , this directive is not yet supported by any major compiler .it is also questionable whether this approach will be `` future proof '' or not , as a conflict detection mechanism such as that in the avx-512 extensions might make serialization unnecessary .\(4 ) adjacent gather optimizations .these provide improved performance on systems that do not support a native gather instruction or where such an instruction has a high latency .an adjacent gather is a sequence of gather operations that access adjacent memory locations . instead of using gather instructions or gather emulations here, it is possible to load continuously from memory into registers , and then permute the data in - register .this operation can lead to significant performance improvements in our code , because adjacent gathers are necessary to load the parameters of our potential ; it is also important for backwards - compatibility reasons , because old systems lack efficient native gather operations . since our objective is to integrate with the lammps md simulator , support for different instruction sets and for different floating point precisions is necessary .it is crucial to support cpu instruction sets to balance the load between host and accelerator .additionally , such an abstraction enables us to evaluate the influence of vector lengths and instruction set features on performance . considering all combinations of instruction sets , data types and vectorization variants ,it becomes clear that it is infeasible to implement everything with intrinsics .we created a single algorithm , and paired it with a vectorization back - end .as a consequence , instead of coding the tersoff potential s algorithm times ( architectures and precision modes ) , we only had to implement the building blocks for the vectorization back - end .some of these building blocks provide the features described in sec .[ ssec : openmp ] , while others provide one - to - one mappings to intrinsics , mostly unary and binary operators .the vectorization back - end uses c++ templates , which are specialized for each targeted architecture .we developed back - ends for single , double and mixed precision using a variety of instruction set extensions : scalar , sse4.2 , avx , avx2 , imci ( the xeon phi knights corner instruction set ) , as well as experimental support for avx-512 , cilk array notation and cuda .the library is designed to be easily extended to new architectures . even though the tuning might take some time , it is simplified by the fact that a number of building blocks , such as wide adjacent - gather operations , can be optimized in one go .contrary to most other vector libraries , which allow the programmer to pick a vector length that may be emulated internally , our library only allows for algorithms that are oblivious of the used vector length .we use vanilla lammps mpi - based domain decomposition scheme and build upon optional packages that offer various optimizations and capabilities .specifically , all our x86 and arm implementations use the user - intel package , which collects optimizations for intel hardware , to manage offloading to the xeon phi , data - packing , alignment and simulation orchestration . for the gpu implementation, the same role is fulfilled by the kokkos package .since kokkos abstracts the data layout of the variables used in a simulation ( e.g. 
position , velocity , mass , force ) , the code needs to be changed wherever data is accessed from memory .we also need to change the routine that feeds our algorithm , to conform with the model of parallelism that is used by kokkos . as a consequence ,comparisons between x86 and arm , and between x86 and the gpu implementation can not reasonably be drawn .furthermore , the kokkos package is still under development , while the user - intel package is more mature ; as such , we believe that the gpu results are likely to have room for improvement . in addition to double precision , which is the default in lammps , we created versions that compute the tersoff potential in single and mixed precision . in order to validate these two implementations that use reduced precision , we measured the energy in a long - running simulation .[ fig : acc ] illustrates , for a system of 32000 atoms , the deviation is within 0.002% of the reference . in part , this effect can be explained by the short neighbor lists which are characteristic of the tersoff potential : since only few different atoms interact with any given atom , and only these accumulate their contributions to the force , there is little chance for round - off error to accumulate . in the following section , we present performance results for several hardware platforms , and four different codes : ref , opt - d , opt - s , opt - m .[ [ ref ] ] ref + + + the reference for our optimization and testing is the implementation shipped with lammps itself , which performs all the calculations in double precision . [[ opt - d ] ] opt - d + + + + + the most accurate version of our code , which performs the calculations in double precision .it includes both the optimizations due to scalar improvements , and those due to vectorization .[ [ opt - s ] ] opt - s + + + + + the least accurate version of our code , implemented entirely in single precision . as for opt - d , it includes both scalar improvements , and takes advantage of vectorization .the vector length typically is twice that of opt - d . referring to sec .[ ssec : acc ] , the accuracy of the single precision solver is perfectly in line with the one offered by _ opt - d _ or _ref_. [ [ opt - m ] ] opt - m + + + + + we also provide a mixed precision version of our code .this version performs all the calculations in single precision , except for accumulations .it is the default mode for code of the user - intel package , as it offers a compromise between speed and accuracy . 
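To make the preceding description more concrete, the sketch below shows in plain C++ what such building blocks could look like. It is emphatically not the authors' vectorization library: the type and function names (SimdVec, all, reduce_add) are invented for illustration, and the scalar loops stand in for the intrinsics a real back-end would use. It shows a vector-wide conditional, an in-register reduction, and how making the accumulator type a template parameter yields a mixed-precision mode (compute in single precision, accumulate in double) almost for free.

```cpp
// Hypothetical sketch (not the authors' library): a minimal vector back-end
// illustrating (a) building blocks such as vector-wide conditionals and
// in-register reductions, and (b) how a mixed-precision mode falls out of
// templating the accumulator type.
#include <array>
#include <cstddef>
#include <iostream>

// A toy "vector register" of N lanes; a real back-end would specialize this
// with SSE/AVX/IMCI intrinsics instead of scalar loops.
template <typename T, std::size_t N>
struct SimdVec {
    std::array<T, N> lane{};

    static SimdVec broadcast(T x) { SimdVec v; v.lane.fill(x); return v; }

    SimdVec operator*(const SimdVec& o) const {
        SimdVec r;
        for (std::size_t i = 0; i < N; ++i) r.lane[i] = lane[i] * o.lane[i];
        return r;
    }

    // Building block (1): vector-wide conditional -- true only if the
    // condition holds on every lane, so no per-lane masking is needed after.
    template <typename Pred>
    bool all(Pred p) const {
        for (std::size_t i = 0; i < N; ++i) if (!p(lane[i])) return false;
        return true;
    }

    // Building block (2): in-register reduction into a single accumulator.
    // The accumulator type is a template parameter, which is what makes a
    // compute-in-float / accumulate-in-double mode cheap to derive.
    template <typename Acc>
    Acc reduce_add(Acc init = Acc{}) const {
        Acc s = init;
        for (std::size_t i = 0; i < N; ++i) s += static_cast<Acc>(lane[i]);
        return s;
    }
};

int main() {
    using Vec = SimdVec<float, 8>;        // "single precision, 8 lanes"
    Vec r2 = Vec::broadcast(1.5f);        // e.g. squared pair distances
    Vec w  = Vec::broadcast(0.25f);       // e.g. pair weights

    // Vector-wide cutoff check: skip the whole block if some lane is outside.
    if (r2.all([](float x) { return x < 4.0f; })) {
        // Compute in single precision, accumulate into a double: a caricature
        // of what a mixed-precision mode does.
        double energy = (r2 * w).reduce_add<double>();
        std::cout << "accumulated contribution: " << energy << "\n";
    }
    return 0;
}
```

A real back-end would specialize such a type for each instruction set and precision mode, so that the algorithm written on top of it stays oblivious of the vector length, as described in the text.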
From a software engineering perspective, the mixed precision version costs very little, as it can leverage the existing single and double precision codes; indeed, our vector library performs this step (from the single and double precision implementations to the mixed one) automatically. In addition to these four modes, the algorithms are run either on a single thread (_1T_) or on an entire node (_1N_). The single-threaded run gives the purest representation of the speedup obtained by our optimizations; the results for an entire node (_1N_) and for a cluster instead give a realistic assessment of the expected speedup in real-world applications. Such parallel runs use MPI, as provided by LAMMPS itself; as a consequence, the parallelization scheme used in _ref_ and _opt_ is the same.

In this section, we validate the effectiveness of our optimizations for the Tersoff potential by presenting experimental results on a variety of hardware platforms, ranging from a low-power ARM to the second generation Xeon Phi. As a test case, we use a standard LAMMPS benchmark for the simulation of silicon atoms; since the atoms are laid out in a regular lattice so that each of them has exactly four nearest neighbors, this test case captures well the scenario of small neighbor lists discussed in sec. [sec:ters] and sec. [sec:opt]. We start by presenting single-threaded and single-node results for the CPUs (and the respective instruction sets) listed in table [tbl:cpu-hw]; we continue with measurements for two GPUs (table [tbl:gpu-hw]), and conclude with results for the Xeon Phi (table [tbl:phi-hw]), in a number of configurations. We also present data for a cluster of nodes, to demonstrate the degree to which the scalar and vector improvements lead to performance at scale. (Table [tbl:cpu-hw]: hardware used for CPU benchmarks; the table contents are not reproduced here.)

We conclude with a discussion of the portability of our optimizations on two generations of Intel Xeon Phi accelerators, ``Knights Corner'' (KNC) and ``Knights Landing'' (KNL), scaling from a single accelerator to a cluster. Fig. [fig:speedup-phi] measures the impact of our optimizations while using all cores of a Xeon Phi accelerator. For a fair comparison, the benchmark is run on the device, without any involvement of the host in the calculation. On both platforms, the speedup of opt-m with respect to ref is roughly 5x. Single-threaded measurements (not reported) indicate that the ``pure'' speedup in the kernel is even higher, at approximately 9x. With a relative performance improvement of about 3x, the difference between KNC and KNL is in line with our expectations; in fact, the theoretical peak performance also roughly tripled, along with the bandwidth, which roughly doubled. We point out that no optimization specific to KNL was incorporated in our code; the speedup was achieved by simply making sure that the vector abstraction complied with AVX-512. Additional optimizations for KNL might take advantage of the AVX-512CD conflict detection instructions and of different code for gather operations. To lead up to the scaling results across multiple Xeon Phi augmented nodes, fig. [fig:nodes-phi] measures the performance of individual such nodes as listed in table [tbl:phi-hw]. As in a real simulation, the workload is shared between CPU and accelerator.
Given that our KNL system is self-hosted, we include it in this list. The measurements for CPU+KNC include both the overheads incurred in a single-node execution (such as MPI and threading) and the overhead due to offloading. In view of the performance of the CPU-only systems relative to the KNC, these performance numbers are plausible. A single KNC delivers higher simulation speed than the CPU-only SB node; however, a CPU-only HW node is more powerful than the KNC, and thus also contributes noticeably to the combined performance. Adding a second accelerator also seems to improve performance, as seen in the IV+2KNC measurement. The KNL system delivers higher performance than the combination of two first-generation Xeon Phis and two Ivy Bridge CPUs.

The question with any kind of serial improvement is ``will it translate to similar speedups in a (highly) parallel environment?''. In theory, sequential improvements multiply with the performance achieved from parallelism; in practice, a good chunk of those improvements is eaten away by a collection of overheads, and a realistic assessment can only be made from measurements. Figure [fig:supermic] depicts results for up to eight nodes in a cluster of IV+2KNC nodes. Here, overheads are due not only to the parallelism within a node, but also to communication among the nodes. The vector optimizations port to large-scale computations seamlessly: without accelerators, the performance improvement for 196 MPI ranks is 2.5x; when two accelerators are added per node, the performance improvement becomes 6.5x.

(Plot data for figs. [fig:speedup-phi] and [fig:nodes-phi] omitted: ref and optimized performance on KNC and KNL, with speedups of 4.71x and 5.94x respectively, and per-node performance of SB+KNC, HW+KNC, IV+2KNC and KNL.)

We discussed the problem of calculating the Tersoff potential efficiently and in a portable manner; we described a number of optimization schemes, and validated their effectiveness by means of realistic use cases. We showed that vectorization can achieve considerable speedups even in scenarios, such as a multi-body potential with a short neighbor list, which do not immediately lend themselves to the SIMD paradigm. To achieve portability, it proved useful to isolate target-specific code into a library of abstract operations; this separation of concerns leads to a clean division between the algorithm implementation and the hardware support for vectorization. It also makes it possible to map the vector paradigm to GPUs, while attaining considerable speedup. Indeed, we observe speedups between 2x and 3x on most CPUs, and between 3x and 5x on accelerators; performance scales to clusters and to clusters of accelerators. Finally, we believe that the main success of this work lies in the achieved degree of cross-platform code reuse, and in the portability of the proposed optimizations; combined, these two features lead to a success story with respect to performance portability.
(Plot data for fig. [fig:supermic] omitted: performance as a function of the number of nodes, from 1 to 8, for the reference code, the optimized CPU-only code, and the optimized code with two accelerators per node.)

The authors gratefully acknowledge financial support from the Deutsche Forschungsgemeinschaft (German Research Foundation) through grant GSC 111, and from Intel via the Intel Parallel Computing Center initiative. We thank the RWTH Computing Center and the Leibniz-Rechenzentrum München for the computing resources used to conduct this research. We would like to thank Marcus Schmidt for providing one of the benchmarks used in this work, and M. W. Brown for conducting the benchmarks on the 2nd generation Xeon Phi hardware.
|
Molecular dynamics simulations, an indispensable research tool in computational chemistry and materials science, consume a significant portion of the supercomputing cycles around the world. We focus on multi-body potentials and aim at achieving performance portability. Compared with well-studied pair potentials, multi-body potentials deliver increased simulation accuracy but are too complex for effective compiler optimization. Because of this, achieving cross-platform performance remains an open question. By abstracting from target architecture and computing precision, we develop a vectorization scheme applicable to both CPUs and accelerators. We present results for the Tersoff potential within the molecular dynamics code LAMMPS on several architectures, demonstrating efficiency gains not only for computational kernels, but also for large-scale simulations. On a cluster of Intel Xeon Phis, our optimized solver is between 3 and 5 times faster than the pure MPI reference.
|
the connection between the statistical physics of disordered systems and optimization problems in computer science dates back from twenty years at least . in combinatorial optimization oneis given a cost function ( the length of a tour in the traveling salesman problem ( tsp ) , the number of violated constraints in constraint satisfaction problems , ) over a set of variables and looks for the minimal cost over an allowed range for those variables . finding the true minimum may be complicated , and requires bigger and bigger computational efforts as the number of variables to be minimized over increases .statistical physics is at first sight very different .the scope is to deduce the macroscopic , that is , global properties of a physical system , for instance a gas , a liquid or a solid , from the knowledge of the energetic interactions of its elementary components ( molecules , atoms or ions ) .however , at very low temperature , these elementary components are essentially forced to occupy the spatial conformation minimizing the global energy of the system .hence low temperature statistical physics can be seen as the search for minimizing a cost function whose expression reflects the laws of nature or , more humbly , the degree of accuracy retained in its description .this problem is generally not difficult to solve for non disordered systems where the lowest energy conformation are crystals in which components are regularly spaced from each other . yetthe presence of disorder , e.g. impurities , makes the problem very difficult and finding the conformation with minimal energy is a true optimization problem . at the beginning of the eighties , following the works of g. parisi and others on systems called spin glasses , important progresses were made in the statistical physics of disordered systems .those progresses made possible the quantitative study of the properties of systems given some distribution of the disorder ( for instance the location of impurities ) such as the average minimal energy and its fluctuations .the application to optimization problems was natural and led to beautiful studies on ( among others ) the average properties of the minimal tour length in the tsp , the minimal cost in bipartite matching , for some specific instance distributions .unfortunately statistical physicists and computer scientists did not establish close ties on a large scale at that time .the reason could have been of methodological nature .while physicists were making statistical statements , true for a given distribution of inputs , computer scientists were rather interested in solving one ( or several ) particular instances of a problem .the focus was thus on efficient ways to do so , that is , requiring a computational effort growing not too quickly with the number of data defining the instance .knowing precisely the typical properties for a given , academic distribution of instances did not help much to solve practical cases . 
at the beginning of the nineties practitionners in artificial intelligencerealized that classes of random constraint satisfaction problems used as artificial benchmarks for search algorithms exhibited abrupt changes of behaviour when some control parameter were finely tuned .the most celebrated example was random -satisfiability , where one looks for a solution to a set of random logical constraints over a set of boolean variables .it appeared that , for large sets of variables , there was a critical value of the number of constraints per variable below which there almost surely existed solutions , and above which solutions were absent .an important feature was that the performances of known search algorithms drastically worsened in the vicinity of this critical ratio .in addition to its intrinsic mathematical interest the random -sat problem was therefore worth to be studied for ` practical ' reasons . this critical phenomenon ,strongly reminiscent of phase transitions in condensed matter physics , led to a revival of the research at the interface between statistical physics and computer science , which is still very active .the purpose of the present review is to introduce the non physicist reader to some concepts required to understand the literature in the field and to present some major results .we shall in particular discuss the refined picture of the satisfiable phase put forward in statistical mechanics studies and the algorithmic approach ( survey propagation , an extension of belief propagation used in communication theory and statistical inference ) this picture suggested .while the presentation will mostly focus on the -satisfiability problem ( with random constraints ) we will occasionally discuss another computational problem , namely , linear systems of boolean equations . a good reason to do so is that this problem exhibits some essential features encountered in random -satisfiability , while being technically simpler to study . in additionit is closely related to error - correcting codes in communication theory .the chapter is divided into four main parts . in section [ sec_basic ]we present the basic statistical physics concepts necessary to understand the onset of phase transitions , and to characterize the nature of the phases .those are illustrated on a simple example of decision problem , the so - called perceptron problem . in section [ sec : phase_transitions ] we review the scenario of the various phase transitions taking place in random -sat .section [ sec_localsearch ] and [ sec_decimation ] present the techniques used to study various type of algorithms in optimization ( local search , backtracking procedures , message passing algorithms ) .we end up with some conclusive remarks in sec .[ sec_conclu ] .for pedagogical reasons we first discuss a simple example exhibiting several important features we shall define more formally in the next subsection .consider points of the -dimensional space , their coordinates being denoted .the continuous perceptron problem consists in deciding the existence of a vector which has a positive scalar product with all vectors linking the origin of to the s , or in other words determining whether the points belong to the same half - space . 
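Anticipating the discussion that follows: although the exact expression (referred to below as ([percep1])) is not reproduced in this excerpt, the probability that M random points lie in a common half-space is, as far as we can tell, given by Cover's classical counting formula, P(N,M) = 2^{1-M} \sum_{k=0}^{N-1} \binom{M-1}{k}. The short C++ program below is an illustration added here, not part of the original article; it evaluates this formula and shows the drop around \alpha = M/N = 2 sharpening as N grows.

```cpp
// Hypothetical illustration: Cover's counting formula for the probability
// that M random points on the N-dimensional unit sphere all lie in a common
// half-space,  P(N, M) = 2^{1-M} * sum_{k=0}^{N-1} binomial(M-1, k).
#include <cmath>
#include <cstdio>

// Evaluated in the log domain via lgamma to avoid overflow.
double half_space_probability(int N, int M) {
    if (M <= N) return 1.0;   // with M <= N the points are always in a half-space
    double p = 0.0;
    for (int k = 0; k < N; ++k) {
        double log_term = std::lgamma(M) - std::lgamma(k + 1.0)
                        - std::lgamma(double(M - k)) - (M - 1) * std::log(2.0);
        p += std::exp(log_term);   // terms deep in the tail underflow harmlessly
    }
    return p;
}

int main() {
    const int sizes[] = {20, 100, 500};
    std::printf("alpha     N=20      N=100     N=500\n");
    for (double alpha = 1.0; alpha <= 3.01; alpha += 0.25) {
        std::printf("%5.2f", alpha);
        for (int N : sizes) {
            int M = static_cast<int>(alpha * N + 0.5);
            std::printf("  %8.4f", half_space_probability(N, M));
        }
        std::printf("\n");
    }
    return 0;
}
```

At \alpha = 2 the formula gives exactly 1/2 for any N, while away from 2 the probability tends to 1 or 0 as N grows, which is the threshold behaviour discussed below.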
the term continuous in the name of the problem emphasizes the domain of the variable makes the problem polynomial from worst - case complexity point of view .suppose now that the points are chosen independently , identically , uniformly on the unit hypersphere , and call this quantity can be computed exactly ( see also chapter 5.7 of ) and is plotted in fig .[ fig - proba ] as a function of the ratio for increasing sizes .obviously is a decreasing function of the number of points for a given size : increasing the number of constraints can only make more difficult the simultaneous satisfaction of all of them .more surprisingly , the figure suggests that , in the large size limit , the probability reaches a limiting value 0 or 1 depending on whether the ratio lies , respectively , above or below some ` critical ' value .this is confirmed by the analytical expression of obtained in , from which one can easily show that , indeed , actually the analytical expression of allows to describe more accurately the drop in the probability as increases .to this aim we make a zoom on the transition region and find from ( [ percep1 ] ) that as it should the limits gives back the coarse description of eq .( [ percep_trans ] ) that random points on the -dimensional unit hypersphere are located in the same half - space .symbols correspond to cover s exact result , see eq .( [ percep1 ] ) , lines serve as guides to the eye.,width=264 ] we now put this simple example in a broader perspective and introduce some generic concepts that it illustrates , along with the definitions of the problems studied in the following. * constraint satisfaction problem ( csp ) + a csp is a decision problem where an assignment ( or configuration ) of variables is required to simultaneously satisfy constraints . in the continuous perceptronthe domain of is and the constraints impose the positivity of the scalar products ( [ question ] ) .the instance of the csp , also called formula in the following , is said satisfiable if there exists a solution ( an assignment of fulfilling all the constraints ) .the problem is a boolean csp ( ) where each constraint ( clause ) is the disjunction ( logical or ) of literals ( a variable or its negation ) .similarly in the literals are combined by an exclusive or operation , or equivalently an addition modulo 2 of boolean variables is required to take a given value .the worst - case complexities of these two problems are very different ( is in the p complexity class for any while is np - complete for any ) , yet for the issues of this review we shall see that they present a lot of similarities . 
in the following we use the statistical mechanics convention and represent boolean variables by ising spins , .a clause will be defined by indices ] .a clause is satisfied if the product of the spins is equal to a fixed value , .* random constraint satisfaction problem ( rcsp ) + the set of instances of most csp can be turned in a probabilistic space by defining a distribution over its constraints , as was done in the perceptron case by drawing the vertices uniformly on the hypersphere .the random formulas considered in the following are obtained by choosing for each clause independently a -uplet of distinct indices uniformly over the possible ones , and negating or not the corresponding literals ( ) with equal probability one - half .the indices of random formulas are chosen similarly , with the constant uniformly .* thermodynamic limit and phase transitions + these two terms are the physics jargon for , respectively , the large size limit ( ) and for threshold phenomena as stated for instance in ( [ percep_trans ] ) . in the thermodynamic limitthe typical behavior of physical systems is controlled by a small number of parameters , for instance the temperature and pressure of a gas . at a phase transition these systemsare drastically altered by a tiny change of a control parameter , think for instance at what happens to water when its temperature crosses .this critical value of the temperature separates two qualitatively distinct phases , liquid and gaseous . for randomcsps the role of control parameter is usually played by the ratio of constraints per variable , , kept constant in the thermodynamic limit .( [ percep_trans ] ) describes a satisfiability transition for the continuous perceptron , the critical value separating a satisfiable phase at low where instances typically have solutions to a phase where they typically do not .typically is used here as a synonym for with high probability , i.e. with a probability which goes to one in the thermodynamic limit . * finite size scaling ( fss ) + the refined description of the neighborhood of the critical value of provided by ( [ percep_fss ] ) is known as a finite size scaling relation . 
more generally the finite size scaling hypothesis for a threshold phenomenon takes the form where is called the fss exponent ( for the continuous perceptron ) and the scaling function has limits and at respectively and .this means that , for a large but finite size , the transition window for the values of where the probability drops from down to is , for arbitrary small , of width .results of this flavour are familiar in the study of random graphs ; for instance the appearance of a giant component containing a finite fraction of the vertices of an erds - rnyi random graph happens on a window of width on the average connectivity .fss relations are important , not only from the theoretical point of view , but also for practical applications .indeed numerical experiments are always performed on finite - size instances while theoretical predictions on phase transitions are usually true in the limit .finite - size scaling relations help to bridge the gap between the two .we shall review some fss results in sec .[ sec_review_fss ] .let us emphasize that random , and other random csp , are expected to share some features of the continuous perceptron model , for instance the existence of a satisfiability threshold , but of course not its extreme analytical simplicity .in fact , despite an intensive research activity , the mere existence of a satisfiability threshold for random formulas remains a ( widely accepted ) conjecture .a significant achievement towards the resolution of the conjecture was the proof by friedgut of the existence of a non - uniform sharp threshold .there exists also upper and lower bounds on the possible location of this putative threshold , which become almost tight for large values of .we refer the reader to the chapter of this volume for more details on these issues .this difficulty to obtain tight results with the currently available rigorous techniques is a motivation for the use of heuristic statistical mechanics methods , that provide intuitions on why the standard mathematical ones run into trouble and how to amend them . in the recent years important results first conjectured by physicists were indeed rigorously proven . before describing in some generality the statistical mechanics approach , it is instructive to study a simple variation of the perceptron model for which the basic probabilistic techniques become inefficient .the binary perceptron problem consists in looking for solutions of ( [ question ] ) on the hypercube i.e. the domain of the variable is instead of .this decision problem is np - complete .unfortunately cover s calculation can not be extended to this case , though it is natural to expect a similar satisfiability threshold phenomenon at an a priori distinct value .let us first try to study this point with basic probabilistic tools , namely the first and second moment method .the former is an application of the markov inequality , \le \e[z ] \ , \label{eq_markov_inequality}\ ] ] valid for positive integer valued random variables . 
we shall use it taking for the number of solutions of ( [ question ] ) , where if , if .the expectation value of the number of solutions is easily computed , = 2^n \times 2^{-m } = e^{n\ , g_1 } \quad \mbox{with } \quad g_1 = ( 1-\alpha ) \ln 2 \ , \ ] ] and vanishes when if .hence , from markov s inequality ( [ eq_markov_inequality ] ) , with high probability constraints ( [ question ] ) have no solution on the hypercube when the ratio exceeds unity : if the threshold exists , it must satisfy the bound .one can look for a lower bound to using the second moment method , relying on the inequality ^ 2}{\e[z^2 ] } \le { \rm prob}[z > 0 ] \ . \label{eq_inequality_2}\ ] ] the expectation value of the squared number of solutions reads = \sum _ { \us,\us ' } \left ( \e [ \theta ( \us \cdot \ut ) \;\theta ( \us ' \cdot \ut ) ] \right)^m\ ] ] since the vertices are chosen independently of each other .the expectation value on the right hand side of the above expression is simply the probability that the vector pointing to a randomly chosen vertex , , has positive scalar product with both vectors .elementary geometrical considerations reveal that = \frac 1{2\pi } \left ( \pi - \varphi ( \us,\us ' ) \right)\ ] ] where is the relative angle between the two vectors .this angle can be alternatively parametrized by the overlap between and , i.e. the normalized scalar product , the last expression , in which denotes the indicator function of the event , reveals the traduction between the concept of overlap and the more traditional hamming distance .the sum over vectors in ( [ mom2 ] ) can then be replaced by a sum over overlap values with appropriate combinatorial coefficients counting the number of pairs of vectors at a given overlap .the outcome is = 2^n\sum _ { q=-1,-1+\frac 2n , -1+\frac 4n , \ldots , 1 } \binom{n}{n\left(\frac{1+q}2\right ) } \ \left(\frac 12 - \frac 1 { 2\pi } \;\mbox{arcos } \ ; q \right)^m \ .\ ] ] in the large limit we can estimate this sum with the laplace method , = \max _ { -1 <q < 1 } g_2(q ) \ , \ ] ] where two conclusions can be drawn from the above calculation : * no useful lower bound to can be obtained from such a direct application of the second moment method .indeed , maximization of ( [ defg ] ) over shows that \gg ( \e[z])^2 ] , with a negative rate function , assumed for simplicity to be concave .then the moments of are given , at the leading exponential order , by = \max_s [ l(s ) +n s ] \ , \ ] ] and are controlled by the values of such that .the moments of larger and larger order are thus dominated by the contribution of rarer and rarer instances with larger and larger numbers of solutions . on the contrarythe typical value of the number of solutions is given by the maximum of , reached in a value we denote : with high probability when , is comprised between and , for any . from this reasoningit appears that the relevant quantity to be computed is = \lim _ { n\to\infty } \lim_{n\to 0 } \frac 1{n\,n } \ln \e [ z^n ] \ .\ ] ] this idea of computing moments of vanishing order is known in statistical mechanics as the replica copies of the vector in the calculation of ( see the case in formula ( [ mom2 ] ) ) . 
]method .its non - rigorous implementation consists in determining the moments of integer order , which are then continued towards .the outcome of such a computation for the binary perceptron problem reads \bigg\ } \ , \nonumber\end{aligned}\ ] ] where .the entropy is a decreasing function of , which vanishes in .numerical experiments support this value for the critical ratio of the satisfiable / unsatisfiable phase transition . *the calculation of the second moment is naturally related to the determination of the value of the overlap between pairs of solutions ( or equivalently their hamming distance , recall eq .( [ eq_traduc_hamm ] ) ) .this conclusion extends to the calculation of the moment for any integer , and to the limit .the value of maximizing the r.h.s . of ( [ solution_sg ] ) , , represents the average overlap between two solutions of the same set of constraints ( [ question ] ) .actually the distribution of overlaps is highly concentrated in the large limit around , in other words the ( reduced ) hamming distance between two solutions is , with high probability , equal to .this distance ranges from for to at . slightly below the critical ratio solutionsare still far away from each other on the hypercube reaches one when tends to 2 : a single solution is left right at the critical ratio . ] .note that the perceptron problem is not as far as it could seem from the main subject of this review .there exists indeed a natural mapping between the binary perceptron problem and .assume the vertices of the perceptron problem , instead of being drawn on the hypersphere , have coordinates that can take three values : .consider now a formula .to each clause of we associate the vertex with coordinates if variable appears in clause , otherwise .of course : exactly coordinates have non zero values for each vertex .then replace condition ( [ question ] ) with the scalar product is not required to be positive any longer , but to be larger than .it is an easy check that the perceptron problem admits a solution on the hypercube ( ) if and only if is satisfiable . while in the binary perceptron modelall coordinates are non - vanishing , only a finite number of them take non zero values in .for this reason is called a diluted model in statistical physics . also the direct application of the second moment methodfails for the random problem ; yet a refined version of it was used in , which leads to asymptotically ( at large ) tight bounds on the location of the satisfiability threshold .the binary perceptron example taught us that the number of solutions of a satisfiable random csp usually scales exponentially with the size of the problem , with large fluctuations that prevent the direct use of standard moment methods .this led us to the introduction of the quenched entropy , as defined in ( [ defsg0 ] ) .the computation techniques used to obtain ( [ solution_sg ] ) were in fact developed in an apparently different field , the statistical mechanics of disordered systems .let us review some basic concepts of statistical mechanics ( for introductory books see for example ) .a physical system can be modeled by a space of configuration , on which is defined an energy function . 
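The explicit form of the annealed pair entropy did not survive in this excerpt; assuming the standard expression implied by the derivation above, namely g_2(q) = \ln 2 + H\!\left(\frac{1+q}{2}\right) + \alpha \ln\!\left(\frac{1}{2} - \frac{\arccos q}{2\pi}\right) with H the natural-log binary entropy, the failure of the plain second moment method can be checked numerically: the maximum of g_2 over q strictly exceeds g_2(0) = 2 g_1(\alpha) for every \alpha > 0. The following C++ snippet is an illustration added here, not part of the original text.

```cpp
// Hypothetical numerical check: with
//   g1(alpha)    = (1 - alpha) ln 2,
//   g2(q; alpha) = ln 2 + H((1+q)/2) + alpha * ln(1/2 - arccos(q)/(2 pi)),
// the maximum of g2 over q exceeds g2(0) = 2*g1, i.e. the annealed second
// moment is exponentially larger than the squared first moment.
#include <cmath>
#include <cstdio>

double binary_entropy(double x) {              // H(x) = -x ln x - (1-x) ln(1-x)
    if (x <= 0.0 || x >= 1.0) return 0.0;
    return -x * std::log(x) - (1.0 - x) * std::log(1.0 - x);
}

double g2(double q, double alpha) {
    const double pi = std::acos(-1.0);
    return std::log(2.0) + binary_entropy(0.5 * (1.0 + q))
         + alpha * std::log(0.5 - std::acos(q) / (2.0 * pi));
}

int main() {
    std::printf("alpha    2*g1      max_q g2   argmax q\n");
    for (double alpha = 0.25; alpha <= 1.01; alpha += 0.25) {
        double best = -1e300, best_q = 0.0;
        for (double q = -0.999; q <= 0.999; q += 0.001) {   // simple grid search
            double v = g2(q, alpha);
            if (v > best) { best = v; best_q = q; }
        }
        double two_g1 = 2.0 * (1.0 - alpha) * std::log(2.0);
        std::printf("%5.2f  %8.4f  %9.4f  %8.3f\n", alpha, two_g1, best, best_q);
    }
    return 0;
}
```

The maximizer sits at a strictly positive overlap, which is the quantitative content of the statement that rare, solution-rich instances dominate the second moment.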
for instanceusual magnets are described by ising spins , the energy being minimized when adjacent spins take the same value .the equilibrium properties of a physical system at temperature are given by the gibbs - boltzmann probability measure on , \ , \label{gibbs}\ ] ] where the inverse temperature equals and is a normalization called partition function .the energy function has a natural scaling , linear in the number of variables ( such a quantity is said to be extensive ) . in consequence in the thermodynamic limit the gibbs - boltzmann measure concentrates on configurations with a given energy density ( ) , which depends on the conjugated parameter .the number of such configurations is usually exponentially large , ] of the variables do not appear in any clause , which leads to a trivial lower bound ] , being the threshold value for the clustering transition .clusters are meant as a partition of the set of solutions having certain properties listed below .each cluster contains an exponential number of solutions , ] .the total entropy density thus decomposes into the sum of , the internal entropy of the clusters and , encoding the degeneracy of these clusters , usually termed complexity in this context .furthermore , solutions inside a given cluster should be well - connected , while two solutions of distinct clusters are well - separated .a possible definition for these notions is the following .suppose and are two solutions of a given cluster. then one can construct a path where any two successive are separated by a sub - extensive hamming distance . on the contrarysuch a path does not exist if and belong to two distinct clusters .clustered configuration spaces as described above have been often encountered in various contexts , e.g. neural networks and mean - field spin glasses .a vast body of involved , yet non - rigorous , analytical techniques have been developed in the field of statistical mechanics of disordered systems to tackle such situations , some of them having been justified rigorously . in this literature clusters appear under the name of `` pure states '' , or `` lumps '' ( see for instance the chapter 6 of for a rigorous definition and proof of existence in a related model ) .as we shall explain in a few lines , this clustering phenomenon has been demonstrated rigorously in the case of random instances . for random instances , where in fact the detailed picture of the satisfiable phase is thought to be richer , there are some rigorous results on the existence of clusters for large enough .consider an instance of the problem , i.e. 
a list of linear equations each involving out of boolean variables , where the additions are computed modulo 2 .the study performed in provides a detailed picture of the clustering and satisfiability transition sketched above .a crucial point is the construction of a core subformula according to the following algorithm .let us denote the initial set of equations , and the set of variables which appear in at least one equation of .a sequence is constructed recursively : if there are no variables in which appear in exactly one equation of the algorithm stops .otherwise one of these `` leaf variables '' is chosen arbitrarily , is constructed from by removing the unique equation in which appeared , and is defined as the set of variables which appear at least once in .let us call the number of steps performed before the algorithm stops , and , the remaining clauses and variables .note first that despite the arbitrariness in the choice of the removed leaves , the output subformula is unambiguously determined by .indeed , can be defined as the maximal ( in the inclusion sense ) subformula in which all present variables have a minimal occurrence number of 2 , and is thus unique . in graph theoretic terminology is the 2-core of , the -core of hypergraphs being a generalization of the more familiar notion on graphs , thoroughly studied in random graph ensembles in . extending this study , relying on the approximability of this leaf removal process by differential equations , it was shown in there is a threshold phenomenon at .for the 2-core is , with high probability , empty , whereas it contains a finite fraction of the variables and equations for . is easily determined numerically : it is the smallest value of such that the equation ] .it turns out that is satisfiable if and only if is , and that the number of solutions of these two formulas are related in an enlightening way .it is clear that if the 2-core has no solution , there is no way to find one for the full formula .suppose on the contrary that an assignment of the variables in that satisfy the equations of has been found , and let us show how to construct a solution of ( and count in how many possible ways we can do this ) .set , and reintroduce step by step the removed equations , starting from the last : in the step of this new procedure we reintroduce the clause which was removed at step of the leaf removal .this reintroduced clause has leaves ; their configuration can be chosen in ways to satisfy the reintroduced clause , irrespectively of the previous choices , and we bookkeep this number of possible extensions by setting . finally the total number of solutions of compatible with the choice of the solution of is obtained by adding the freedom of the variables which appeared in no equations of , .let us underline that is independent of the initial satisfying assignment of the variables in , as appears clearly from the description of the reconstruction algorithm ; this property can be traced back to the linear algebra structure of the problem .this suggests naturally the decomposition of the total number of solutions of as the product of the number of satisfying assignments of , call it , by the number of compatible full solutions . 
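The leaf-removal construction just described is straightforward to implement. The following self-contained C++ sketch (a hypothetical illustration with invented names, not code from the cited works) extracts the 2-core of a random 3-uniform hypergraph of the kind underlying a 3-XORSAT formula; the right-hand sides of the equations play no role in the construction of the core and are therefore omitted.

```cpp
// Illustrative sketch: leaf removal, i.e. repeatedly strip any equation that
// contains a variable of degree one; what is left is the 2-core.
#include <cstdio>
#include <cstdlib>
#include <queue>
#include <set>
#include <vector>

// Returns the indices of the equations belonging to the 2-core.
std::vector<int> two_core(const std::vector<std::vector<int>>& eqs, int n_vars) {
    std::vector<std::vector<int>> occ(n_vars);        // equations touching each variable
    for (int a = 0; a < (int)eqs.size(); ++a)
        for (int v : eqs[a]) occ[v].push_back(a);

    std::vector<bool> removed(eqs.size(), false);
    std::vector<int> degree(n_vars, 0);
    std::queue<int> leaves;                           // variables of degree exactly one
    for (int v = 0; v < n_vars; ++v) {
        degree[v] = (int)occ[v].size();
        if (degree[v] == 1) leaves.push(v);
    }
    while (!leaves.empty()) {
        int v = leaves.front(); leaves.pop();
        if (degree[v] != 1) continue;                 // stale entry, degree changed meanwhile
        int a = -1;                                   // the unique remaining equation of v
        for (int b : occ[v]) if (!removed[b]) { a = b; break; }
        if (a < 0) continue;
        removed[a] = true;                            // peel off that equation
        for (int w : eqs[a])
            if (--degree[w] == 1) leaves.push(w);
    }
    std::vector<int> core;
    for (int a = 0; a < (int)eqs.size(); ++a) if (!removed[a]) core.push_back(a);
    return core;
}

int main() {
    // A random 3-uniform hypergraph with N variables and M = alpha*N equations.
    const int N = 10000, k = 3;
    const double alpha = 0.85;   // for k = 3, assumed here to lie in the regime
                                 // where the 2-core is typically non-empty
    const int M = (int)(alpha * N);
    std::srand(12345);
    std::vector<std::vector<int>> eqs(M);
    for (auto& e : eqs) {
        std::set<int> s;
        while ((int)s.size() < k) s.insert(std::rand() % N);
        e.assign(s.begin(), s.end());
    }
    std::vector<int> core = two_core(eqs, N);
    std::printf("M = %d equations, 2-core contains %d equations\n",
                M, (int)core.size());
    return 0;
}
```

Note that, as stated in the text, the output does not depend on the order in which leaves are removed, so a simple queue is enough.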
in terms of the associated entropy densitiesthis decomposition is additive where the quantity is the entropy density associated to the core of the formula .it is in fact much easier technically to compute the statistical ( with respect to the choice of the random formula ) properties of and once this decomposition has been done ( the fluctuations in the number of solutions is much smaller once the non - core part of the formula has been removed ) .the outcome of the computations is the determination of the threshold value for the appearance of a solution of the 2-core ( and thus of the complete formula ) , along with explicit formulas for the typical values of and .these two quantities are plotted on fig .[ fig_entropy_xorsat ] .the satisfiability threshold corresponds to the cancellation of : the number of solutions of the core vanishes continuously at , while the total entropy remains finite because of the freedom of choice for the variables in the non - core part of the formula . , in units of .the inset presents an enlargement of the regime ] .drawing the consequences of these observations , a refined picture of the satisfiable phase , and in particular the existence of a new ( so - called condensation ) threshold ] , the total entropy density is given by } \ .\ ] ] in the thermodynamic limit the integral can be evaluated with the laplace method .two qualitatively distinct situations can arise , whether the integral is dominated by a critical point in the interior of the interval ] where the relevant clusters are exponentially numerous , from the second , condensated situation for ] .the complexity function can thus be obtained from by an inverse legendre transform . for generic values of approach is computationally very demanding ; following the same steps as in the replica symmetric version of the cavity method one faces a distribution ( with respect to the topology of the factor graph ) of distributions ( with respect to the choice of the clusters ) of messages .simplifications however arise for and ; the latter case corresponds in fact to the original survey propagation approach of . as appears clearly in eq .( [ eq_phi ] ) , for this value of all clusters are treated on an equal footing and the dominant contribution comes from the most numerous clusters , independently of their sizes .moreover , as we further explain in sec .[ sec_mp ] , the structure of the equations can be greatly simplified in this case , the distribution over the cluster of fields being parametrized by a single number . ).,width=453 ] as we explained in sec .[ sec_gendef ] the threshold phenomenon can be more precisely described by finite size scaling relations .let us mention some fss results about the transitions we just discussed . for random 2- , where the satisfiability property is known to exhibit a sharp threshold at , the width of the transition window has been determined in .the range of where the probability of satisfaction drops significantly is of order , i.e. the exponent is equal to , as for the random graph percolation .this similarity is not surprising , the proof of relies indeed on a mapping of 2- formulas onto random ( directed ) graphs . 
the clustering transition for was first conjectured in ( in the related context of error - correcting codes ) then proved in to be described by where is a subleading shift correction that has been explicitly computed , and the scaling function is , upto a multiplicative factor on , the same error function as in eq .( [ percep_fss ] ) .a general result has been proved in on the width of transition windows . under rather unrestrictive conditionsone can show that : the transitions can not be arbitrarily sharp . roughly speaking the bound is valid when a finite fraction of the clauses are not decisive for the property of the formulas studied , for instance clauses containing a leaf variable are not relevant for the satisfiability of a formula .the number of these irrelevant clauses is of order and has thus natural fluctuations of order ; these fluctuations blur the transition window which can not be sharper than .several studies ( see for instance ) have attempted to determine the transition window from numeric evaluations of the probability , for instance for the satisfiability threshold of random 3- and .these studies are necessarily confined to small formula sizes , as the typical computation cost of complete algorithms grows exponentially around the transition . in consequencethe asymptotic regime of the transition window , , is often hidden by subleading corrections which are difficult to evaluate , and in the reported values of were found to be in contradiction with the latter derived rigorous bound .this is not an isolated case , numerical studies are often plagued by uncontrolled finite - size effects , as for instance in the bootstrap percolation , a variation of the classical percolation problem .the following of this review will be devoted to the study of various solving algorithms for formulas .algorithms are , to some extent , similar to dynamical processes studied in statistical physics . in this contextthe focus is however mainly on stochastic processes that respect detailed balance with respect to the gibbs - boltzmann measure , a condition which is rarely respected by solving algorithms .physics inspired techniques can yet be useful , and will emerge in three different ways .the random walk algorithms considered in this section are stochastic processes in the space of configurations ( not fulfilling the detailed balance condition ) , moving by small steps where one or a few variables are modified . out - of - equilibrium physics ( and in particular growth processes ) provide an interesting view on classical complete algorithms ( dpll ) , as shown in sec .[ sec_dpll ] . finally , the picture of the satisfiable phase put forward in sec .[ sec : phase_transitions ] underlies the message - passing procedures discussed in sec .[ sec_mp ] .papadimitriou proposed the following algorithm , called pure random walk sat ( prw ) in the following , to solve formulas : 1 .choose an initial assignment uniformly at random and set .2 . if is a solution of the formula ( i.e. ) , output solution and stop .if , a threshold fixed beforehand , output undetermined and stop .3 . 
otherwise , pick uniformly at random a clause among those that are in ; pick uniformly at random one of the variables of this clause and flip it ( reverse its status from true to false and vice - versa ) to define the next assignment ; set and go back to step 2 .this defines a stochastic process , a biased random walk in the space of configurations .the modification in step 3 makes the selected clause satisfied ; however the flip of a variable can turn previously satisfied clauses into unsatisfied ones ( those which were satisfied solely by in ) .this algorithm is not complete : if it outputs a solution one is certain that the formula was satisfiable ( and the current configuration provides a certificate of it ) , but if no solution has been found within the allowed steps one can not be sure that the formula was unsatisfiable .there are however two rigorous results which makes it a probabilistically almost complete algorithm . for , it was shown in that prw finds a solution in a time of order with high probability for all satisfiable instances .hence , one is almost certain that the formula was unsatisfiable if the output of the algorithm is undetermined after steps . schning proposed the following variation for . if the algorithm fails to find a solution before steps , instead of stopping and printing undetermined , it restarts from step 1 , with a new random initial condition .schning proved that if after restarts no solution has been found , then the probability that the instance is satisfiable is upper - bounded by ] but are not found by this heuristic .the above algorithm _ modifies _ the formula as it proceeds ; during the execution of the algorithm the current formula will contain clauses of length 2 and 3 ( we specialize here to - for the sake of simplicity but higher values of can be considered ) .the sub - formulas generated by the search procedure maintain their statistical uniformity ( conditioned on the number of clauses of length 2 and 3 ) .franco and collaborators used this fact to write down differential equations for the evolution of the densities of 2- and 3-clauses as a function of the fraction of eliminated variables .we do not reproduce those equations here , see for a pedagogical review .based on this analysis frieze and suen were able to calculate , in the limit of infinite size , the probability of successful search .the outcome for the uc heuristic is - \frac 3 { 16 } \alpha \right\ } \label{p_success_uc}\end{aligned}\ ] ] when , and for larger ratios .the probability is , as expected , a decreasing function of ; it vanishes in .a similar calculation shows that for the guc heuristic .franco et al s analysis can be recast in the following terms . under the operation of the algorithmthe original 3- formula is turned into a mixed - formula where denotes the fraction of the clauses with 3 variables : there are 2-clauses and 3-clauses .as we mentioned earlier the simplicity of the heuristics maintains a statistical uniformity over the formulas with a given value of and .this constatation motivated the study of the random - ensemble by statistical mechanics methods , some of the results being later confirmed by the rigorous analysis of . 
at the heuristic level oneexpects the existence of a dependent satisfiability threshold , interpolating between the 2- known threshold , , and the conjectured 3- case , .the upperbound is easily obtained : for the mixed formula to be satisfiable , necessarily the sub - formula obtained by retaining only the clauses of length 2 must be satisfiable as well .in fact this bound is tight for all values of ] , the probability that . similarly the survey is the probability that .the second part of ( [ eq_wp ] ) is readily translated in probabilistic terms , the other part of the recursion takes a slightly more complicated form , in this equation ( resp . ) corresponds to the probability that none of the clauses agreeing ( resp . disagreeing ) with on the value ofthe literal of sends a warning . for to be constrained to the value unsatisfying , at least one of the clauses of should send a warning , and none of , which explains the form of the numerator of .the denominator arises from the exclusion of the event that both clauses in and send messages , a contradictory event in this version of sp which is devised for satisfiable formulas .+ from the statistical mechanics point of view the sp equations arise from a 1rsb cavity calculation , as sketched in sec .[ sec_computations ] , in the zero temperature limit ( ) and vanishing parisi parameter , these two limits being either taken simultaneously as in or successively .one can thus compute , from the solution of the recursive equations on a single formula , an estimation of its complexity , i.e. the number of its clusters ( irrespectively of their sizes ) .the message passing procedure can also be adapted , at the price of technical complications , to unsatisfiable clustered formulas .note also that the above sp equations have been shown to correspond to the bp ones in an extended configuration space where variables can take a `` joker '' value , mimicking the variables which are not frozen to a single value in all the assignments of a given cluster .heuristic interpolations between the bp and sp equations have been studied in .the information provided by these message passing procedures can be exploited in order to solve satisfiability formulas ; in the algorithm sketched at the beginning of sec .[ secuc ] the heuristic choice of the assigned variable , and its truth value , can be done according to the results of the message passing on the current formula .if bp were an exact inference algorithm , one could choose any unassigned variable , compute its marginal according to eq .( [ eq_mui ] ) , and draw it according to this probability .of course bp is only an approximate procedure , hence a practical implementation of this idea should privilege the variables with marginal probabilities closest to a deterministic law ( i.e. with the largest ) , motivated by the intuition that these are the least subject to the approximation errors of bp . similarly ,if the message passing procedure used at each assignment step is wp , one can fix the variable with the largest to the value corresponding to the sign of . in the case of sp , the solution of the message passing equationsare used to compute , for each unassigned variable , a triplet of numbers according to ( resp . ) is interpreted as the fraction of clusters in which ( resp . ) in all solutions of the cluster , hence corresponds to the clusters in which can take both values . in the version of , one then choose the variable with the largest , and fix it to ( resp . ) if ( resp . ) . 
in this wayone tries to select an assignment preserving the maximal number of clusters .of course many variants of these heuristic rules can be devised ; for instance after each message passing computation one can fix a finite fraction of the variables ( instead of a single one ) , allows for some amount of backtracking , or increase a soft bias instead of assigning completely a variable .moreover the tolerance on the level of convergence of the message passing itself can also be adjusted .all these implementation choices will affect the performances of the solver , in particular the maximal value of up to which random instances are solved efficiently , and thus makes difficult a precise statement about the limits of these algorithms . in consequencewe shall only report the impressive result of , which presents an implementation working for random 3- instances up to ( very close to the conjectured satisfiability threshold ) for problem sizes as large as .the theoretical understanding of these message passing inspired solvers is still poor compared to the algorithms studied in sec .[ secuc ] , which use much simpler heuristics in their assignment steps .one difficulty is the description of the residual formula after an extensive number of variables have been assigned ; because of the correlations between successive steps of the algorithm this residual formula is not uniformly distributed conditioned on a few dynamical parameters , as was the case with for the simpler heuristics of sec .[ secuc ] .one version of bp guided decimation could however be studied analytically in , by means of an analysis of the thought experiment discussed at the beginning of sec .[ sec_decimation ] .the study of another simple message passing algorithm is presented in the next paragraph .feige proved in a remarkable connection between the _ worst - case _ complexity of approximation problems and the structure of _ random _ 3- at large ( but independent of ) values of the ratio .he introduced the following hardness hypothesis for random 3- formulas : hypothesis 1 : _ even if is arbitrarily large ( but independent of ) , there is no polynomial time algorithm that on most 3-sat formulas outputs unsat , and always outputs sat on a 3-sat formula that is satisfiable_. and used it to derive hardness of approximation results for various computational problems . as we have seen these instancesare typically unsatisfiable ; the problem of interest is thus to recognize efficiently the rare satisfiable instances of the distribution .a variant of this problem was studied in , where wp was proven to be effective in finding solutions of dense planted random formulas ( the planted distribution is the uniform distribution conditioned on being satisfied by a given assignment ) .more precisely , proves that for large enough ( but independent of ) , the following holds with probability : 1 .wp converges after at most iterations .2 . if a variable has , then the sign of is equal to the value of in the planted assignment .the number of such variables is bigger than ( i.e. almost all variables can be reconstructed from the values of ) .3 . once these variables are fixed to their correct assignments , the remaining formula can be satisfied in time ( in fact , it is a tree formula ) .on the basis of non - rigorous statistical mechanics methods , these results were argued in to remain true when the planted distribution is replaced by the uniform distribution conditioned on being satisfiable . 
in other words by iterating wp for a number of iterations bigger than one is able to detect the rare satisfiable instances at large .the argument is based on the similarity of structure between the two distributions at large , namely the existence of a single , small cluster of solutions where almost all variables are frozen to a given value .this correspondence between the two distributions of instances was proven rigorously in , where it was also shown that a related polynomial algorithm succeeds with high probability in finding solutions of the satisfiable distribution of large enough density .these results indicate that a stronger form of hypothesis 1 , obtained by replacing _ always _ with _ with probability _ ( with respect to the uniform distribution over the formulas and possibly to some randomness built in the algorithm ) , is wrong for any .however , the validity of hypothesis 1 is still unknown for random 3- instances .nevertheless , this result is interesting because it is one of the rare cases in which the performances of a message - passing algorithm could be analyzed in full detail .this review was mainly dedicated to the random -satisfiability and -xor - satisfiability problems ; the approach and results we presented however extend to other random decision problems , in particular random graph -coloring .this problem consists in deciding whether each vertex of a graph can be assigned one out of possible colors , without giving the same color to the two extremities of an edge . when input graphs are randomly drawn from erds - renyi ( er ) ensemble a phase diagram similar to the one of -sat ( section [ sec : phase_transitions ] ) is obtained .there exists a colorable / uncolorable phase transition for some critical average degree , with for instance .the colorable phase also exhibits the clustering and condensation transitions we explained on the example of the -satisfiability .actually what seems to matter here is rather the structure of inputs and the symmetry properties of the decision problem rather than its specific details .all the above considered input models share a common , underlying er random graph structure . from this point of viewit would be interesting to ` escape ' from the er ensemble and consider more structured graphs e.g. embedded in a low dimensional space .to what extent the similarity between phase diagrams correspond to similar behaviour in terms of hardness of resolution is an open question .consider the case of rare satisfiable instances for the random -sat and -xorsat well above their sat / unsat thresholds ( section [ sec_decimation ] ) .both problems share very similar statistical features .however , while a simple message - passing algorithm allows one to easily find a ( the ) solution for the -sat problem this algorithm is inefficient for random -xorsat .actually the local or decimation - based algorithms of sections [ sec_localsearch ] and [ sec_decimation ] are efficient to find solution to rare satisfable instances of random -sat , but none of them works for random -xorsat ( while the problem is in p ! 
). This example raises the important question of the relationship between the statistical properties of solutions (or quasi-solutions) encoded in the phase diagram and the (average) computational hardness. Very little is known about this crucial point; on intuitive grounds one could expect the clustering phenomenon to prevent efficient solving of formulas by local search algorithms of the random-walk type. This is indeed true for a particular class of stochastic processes, namely those which respect the so-called detailed balance conditions. The connection between clustering and hardness of resolution for local search algorithms is much less obvious when detailed balance is not respected, which is the case for most of the efficient variants of PRWSAT.
|
We review the connection between statistical mechanics and the analysis of random optimization problems, with particular emphasis on the random k-SAT problem. We discuss and characterize the different phase transitions that are met in these problems, starting from basic concepts. We also discuss how statistical mechanics methods can be used to investigate the behavior of local search and decimation based algorithms.

_This paper has been written as a contribution to the "Handbook of Satisfiability" to be published in 2008 by IOS Press._
|
use of fossil fuels for satisfaction of current energy needs has an inherent waste product carbon dioxide . since the beginning of the technological revolution the amount of co released in the atmosphere has grown monotonically , causing a substantial increase of its concentration . within the last decade ,a significant effort has been expended on identifying ways to avoid co release in the atmosphere , which is the domain of co sequestration .various geological formations are considered as options for long - term storage of co : depleted oil reservoirs , unmineable coal seams , deep saline aquifers , etc .the latter are especially promising because they are widespread and have high capacity .deep aquifers are separated from the shallow freshwater aquifers by caprock a formation with extremely low permeability ( often shale ) . when co is injected into an aquifer ,the integrity of the caprock prevents co leakage . however , buildup of the fluid pressure caused by co injection changes the stresses in the caprock , and can lead to reactivation of preexisting faults or even fracturing of the caprock .recent studies have shown that when co is injected at a temperature lower than the ambient temperature of the formation , additional thermal stresses develop around the injection well and the risk of fracturing increases , , so that even the caprock can be fractured , , . in our recent work we revealed two regions of high tensile stresses , where fracturing may occur : ( 1 ) in the immediate vicinity of the injection well , and ( 2 ) above the injection well in the caprock at the boundary with the aquifer .the first can lead to horizontal fractures in the aquifer , which are of no concern ( and are even beneficial , since they can increase injectivity ) .the second can lead to short vertical fractures in the caprock .however , fracturing does not necessarily lead to leakage ; co will leak out of the aquifer only if the fractures are long enough to reach an abandoned well or connect to a network of natural fractures .the initial length of the fractures can be small , e.g. of the order of 10 cm to 10 m , but under high fluid pressure the fractures may propagate .therefore , the rate of fracture propagation and the characteristic length of fractures are crucial for assessing the possibility of co leakage from a deep aquifer .fluid - driven fracture propagation involves multiple physical processes : fracture mechanics , flow in the fracture and flow in the porous aquifer . however , when an aquifer has low permeability , the fluid outflow from it is slow and therefore it is the rate - limiting process for fracture propagation .therefore , in order to predict the rate of fracture propagation , one has to calculate the outflow ( discharge ) from the aquifer , which can be found from the solution of the pressure equation .the evolution of pressure takes place in two regions : the aquifer and the propagating fracture .however , we show here that when the permeability of the aquifer is significantly lower than the permeability in the fracture then it can be assumed that the pressure in the fracture is established instantaneously. 
this assumption is used in the hydraulic fracture literature .therefore , the pressure diffusion problem can be considered only in the aquifer .even when considering the pressure diffusion only in the aquifer , the problem is non - trivial , since it is an unsteady problem in a two - dimensional ( 2d ) domain .we solve the 2d problem numerically and find that after a relatively short time , the solution for the flux is equal to twice the solution of a simplified one - dimensional ( 1d ) problem from the two horizontal flow paths toward the fracture .the 1d problem can be solved analytically .the analytical solution for the pressure diffusion problem provides an expression for the fluid flux into the fracture .then , assuming the khristianovich - geertsma - de klerk ( kgd ) geometry for the fracture and using the relations for the fracture aperture from , from the calculated flux we can obtain the fracture length and aperture as a function of time .using our analytical solution we make estimates based on the parameters for the krechba aquifer ( in salah , algeria ) from , .this site is of significant technological interest because it has been used as a pilot project for co injection since 2004 .we find that initially the fracture propagation is very fast , similar to the rate of propagation of hydraulic fractures .our analytical solution predicts fracture propagation of meters within less then a minute after initiation .on such length scales a fracture may easily reach a leaky fault , a system of natural fractures or an abandoned well and become a pathway for co leakage from the aquifer into potable aquifers or even into the atmosphere .we also show that the hydrostatic and geostatic effects cause the increase of the driving force for the fracture propagation and , therefore , our solution serves as an estimate from below .we consider the physical system to consist of a porous aquifer filled with fluid ( brine and injected supercritical carbon dioxide ) and the caprock ( shale ) that constrains the aquifer from above .we assume that the aquifer has relatively low permeability ( md ) , which is the case , for example , for the sandstone aquifer at the krechba field ( in salah , algeria ) .injection of cold co leads to a pressure buildup in the aquifer and to tensile stresses in the caprock .our recent simulations showed that after several years of continuous injection of cold co the stresses in the caprock above the horizontal injection well exceed the tensile strength of the caprock .therefore , the caprock fractures .here we do not discuss the evolution of stresses and initiation of the fracture , since that has been done in ref .rather , we consider a single vertical 2d fracture originating at the boundary between the 2d aquifer and the caprock , and we assume that the fracture has an elliptical kgd geometry , , ; a schematic of the system is represented in figure [ fig : scheme ] .high pressure in the aquifer pushes the fluid into the fracture , which may cause it to propagate further .there are several physical mechanisms controlling the behavior of a fluid - driven fracture . for a typical well - driven hydraulic fracturing operation ,the injected flow rate is high and the fracture propagation rate is limited by two dissipative processes : fracturing of the rock ( controlled by the rock toughness ) and dissipation in the fluid ( controlled by fluid viscosity ) . 
however , the case considered here differs substantially .the source of fluid is the aquifer , which has low permeability , and therefore the outflow of fluid from it is relatively slow .the rate of fracture propagation can not be faster than the flow of fluid that causes this propagation . since the fluid outflow from the aquifer is the rate - limiting process for the fracture propagation, it is the only process considered below . if a fluid - driven fracture propagates in a permeable media , the fluid may seep into the rock through the walls of the fracture .when the permeability of the rock is high , this effect may noticeably affect the rate of propagation , but in our case a fracture propagates in shale with typical permeabilities of the order of md , so the leak - off effects can be neglected .we denote the fracture length and the aperture ( maximum width ) , which are both functions of time when the fracture propagates .the time corresponds in our model to the initiation of the fracture , which ( according to ) may take place after several years of continuous injection of co .we assume that the initial length of the fracture is negligibly small compared to its length as it propagates .we denote by the direction into the aquifer , so that is the direction of fracture propagation and denotes the interface between aquifer and caprock . following assume that the fracture is filled with fluid , which originated in the aquifer and flowed into the fracture ; the pressure in the fracture equilibrates instantaneously .figure [ fig : scheme ] represents the system under consideration . and ) drives the fracture propagation . ]we begin with the material balance equation : the rate of change of volume of the fracture is equal to the fluid volumetric flow rate brought from the aquifer , i.e. discharge , the main goal for us is , therefore , to calculate the flux .once we know it , we can predict how the fracture volume and length change .the flux can be found from the evolution of pressure in the 2d aquifer with leakage through the opening ; is the horizontal axis , the axis is positive downwards , , correspond to the center of the fracture opening , and the axis is perpendicular to the plane of the image in figure [ fig : scheme ] .the pressure evolution is governed by a diffusion equation where is the diffusion coefficient for pressure , is the porosity of the aquifer , and is the compressibility of the fluid .characteristic values of , , and ( see table [ tab : param ] ) give m/s ..properties of in salah site for co injection [ cols= " < , < " , ] [ tab : param ] since the pressure in the higher permeability fracture is established very fast , we will assume it to be constant along the fracture and equal to the confining stress in the caprock .therefore , when considering the pressure diffusion problem , the fracture will be represented as a boundary condition for the pressure another boundary condition is where is the initial pressure in the aquifer , .we note that is noticeably higher than the fluid pressure value before the co injection . within these injection yearshigh pressure propagates from the injection well in the aquifer , and we assume that far from the fracture the pressure remains constant .then we assume no flux outside the aquifer , except for in the fracture and where is the thickness of the aquifer ( see figure [ fig : scheme ] ) . 
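For orientation, the pressure diffusivity entering the diffusion equation above is commonly written as D = k / (phi * mu * c), with k the permeability, phi the porosity, mu the fluid viscosity and c the total compressibility. The short sketch below evaluates it with placeholder values typical of a low-permeability brine/CO2 sandstone; since the paper's Table 1 values are not reproduced here, all numbers are assumptions.

```python
# Order-of-magnitude estimate of the pressure diffusivity D = k / (phi * mu * c),
# the coefficient entering the pressure diffusion equation above.
# All parameter values are illustrative placeholders, not the paper's Table 1 data.
MILLIDARCY = 9.869e-16          # m^2 per millidarcy

k_md  = 10.0                    # permeability, mD            (assumed)
phi   = 0.15                    # porosity                    (assumed)
mu    = 5.0e-4                  # fluid viscosity, Pa*s       (assumed)
c_tot = 1.0e-9                  # total compressibility, 1/Pa (assumed)

D = k_md * MILLIDARCY / (phi * mu * c_tot)   # m^2/s
print(f"pressure diffusivity D ~ {D:.2f} m^2/s")
# With these numbers D is of order 0.1 m^2/s; the characteristic diffusion
# time across an aquifer of thickness H is H**2 / D.
```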
for further considerationit is convenient to rewrite the problem in terms of dimensionless variables .we use the thickness of the aquifer as a unit of length and as a unit of time .therefore , we introduce the following dimensionless variables : then we consider the dimensionless pressure therefore , the diffusion problem can be rewritten as : [ eq : diff2d - dim ] = + + .f ( , , ) |_|| w/2h , = 0 = 0 , + .f ( , , ) |_= = 1 , + .|_|| > w/2h , = 0 = 0 , + .|_= 1 = 0 .an analytical solution of the mixed boundary - value problem [ eq : diff2d - dim ] formulated above is non - trivial and therefore we will solve it numerically . in the initial dimensional formulation we have two characteristic length scales : the fracture aperture m , and the thickness of the aquifer . hence .when solving the problem numerically one more length is introduced the length of the domain ( for the half of the problem , i.e. ) . in order to model an infinite domain , we need , which is typical of the actual physical problem .the schematic for this problem is shown in the figure [ fig:2d - full ] . at the boundaries is , the pressure at the fracture opening is , and no - flux conditionsare prescribed at the upper and lower boundaries . ] therefore , in terms of dimensionless variables we have the following strong inequalities for the characteristic lengths : the dimensionless times corresponding to these three lengths are so that below we will present the results of a numerical solution of the problem and see how it differs on different time scales . for the sake of simplicitywe take and , which will give us characteristic times and .we solve the problem numerically on a rectangular mesh with 400 100 nodes ; the mesh is refined near the fracture opening in both horizontal and vertical directions .the numerical solution is performed using dynaflow a nonlinear transient finite element analysis program .since the thickness of the aquifer is much smaller then its length , the flux in the aquifer is mostly in one horizontal direction ( except in the vicinity of the fracture ) , see figure [ fig : scheme ] . due tothe symmetry , the flux in this problem is twice the flux in the semi - infinite one - dimensional ( 1d ) problem [ eq : diff1d - dim ] = + .f ( , ) |_= 0 = 0 , + .f ( , ) |_= = 1 .the analytical solution for this 1d diffusion problem is given by ,\ ] ] where is the error function .calculating the partial derivative of the dimensionless pressure using the solution ( [ eq : f(y , t ) ] ) we obtain ,\ ] ] and , therefore , darcy s law for the 1d case gives where is the dimension of the aquifer perpendicular to the plane of figure [ fig : scheme ] , and is the aquifer cross - sectional area .therefore also , it has to be noted that .this coefficient 2 reflects that in the 2d problem the fluid is coming from both in figure [ fig : scheme ] .two series of numerical simulations were carried out , with 1000 time steps each .the time steps were made a geometric sequence with the common ratio .the first series started with the time step and finished at the time .the second series started with the time step and finished at the time .figure [ fig : stages ] shows the flux as a function of time . figure [ fig : stages ] clearly reveals four different stages of time evolution of the flux .normalized by as function of dimensionless time ( log - log scale ) .the solid line ( blue ) represents the result of numerical solution for the 2d domain . 
the dashed line ( red ) represents the numerical solution ( ) of the problem for the 1d domain of the length .the dash - dot line ( green ) gives the analytical solution of the non - steady 1d problem on the semi - infinite domain , see eq .( [ eq : q1d ] ) .the vertical dashed lines represent the characteristic times , and . ]stage 1 : fracture size effects for this period covers initial times when the diffusion perturbation spreads just in the vicinity of the fracture opening ( length scale ) . stage 2 : 2d ( aquifer thickness ) effects for the vertical profile of the pressure is being established and the diffusion perturbation spreads towards the bottom of the aquifer ( length scale ) .stage 3 : 2d effects vanish for the vertical profile of the pressure is established and the diffusion process is effectively 1d .we expect [ eq . ( [ eq : q1d ] ) ] ;this asymptotic behavior is clearly seen from the analytical solution of the 1d problem and agreement with the numerical simulations for the 1d problem .stage 4 : finite domain effects for this stage is a consequence of replacing a semi - infinite domain with the length . at times the steady state diffusion profile is established in the whole domain and , therefore , the flux for numerical solutions of both 1d and 2d problems .thus , we conclude that for realistically long aquifers ( kilometers ) , the solution of the 2d problem can be reasonably approximated by twice the solution of a 1d problem starting from stage 3 , i.e. at times . for the typical situation ( e.g. in salah ) m , m/s, therefore the latter strong inequality is equivalent to s , i.e. the 1d approximation works after 35 minutes . the one - dimensional solution for the flux eq .( [ eq : q1d ] ) is proportional to the thickness of the aquifer .let us see whether this is the case for the numerical solution of the 2d problem .we plot the results for 2d fluxes for three different thicknesses , plotting each curve with the corresponding time scale , i.e. for thickness with scale , for with scale and with scale .figure [ fig : hs ] shows that with such time scales the curves coincide .these results also show that for sufficiently long time the flux for the 2d problem does not depend on the size of the opening ( the fracture aperture ) .this result is in line with the analytical expression for the 1d case eq .( [ eq : q1d ] ) .normalized by as a function of time calculated from the numerical solution of the 2d problem for three different thicknesses of the aquifer .time scales for each curve corresponds to the thickness . ]the material balance eq .( [ eq : balance ] ) provides the relation between the flux ( discharge from the aquifer ) and the volume of the fracture .the volume of an elliptical fracture is given by using darcy s law [ eq . 
( [ eq : darcy ] ) ] and ( [ eq : volume ] ) in eq .( [ eq : balance ] ) , we obtain : = 2 h z \frac{k}{\mu } \left .\frac{\partial p(x , t)}{\partial x } \right|_{x=0+}.\ ] ] note that cancels in this equation .we assume that within all of the time from fracture initiation to time , the pressure evolution is governed by a 1d diffusion problem .we also assume that the initial volume of the fracture at is small ; then we can rewrite eq .( [ eq : balance2 ] ) in the integral form substituting eq .( [ eq : q1d ] ) into eq .( [ eq : balance3 ] ) we get integrating eq .( [ eq : balance4 ] ) we have in order to obtain an explicit expression for we need to substitute the formula for as a function of and , derived by geertsma and de klerk ^{1/4},\ ] ] where is the shear modulus of the rock , , and and are , respectively , young s modulus and poisson s ratio of the caprock . eq .( [ eq : gdk ] ) was derived for hydraulic fracture , when the discharge is controlled by the operator . in our case discharge determined by the material balance equation and fracture parameters .therefore , substituting eq .( [ eq : q1d ] ) into eq .( [ eq : gdk ] ) , taking into account eq .( [ eq : dimensionless ] ) , we find ^{1/4}.\ ] ] substituting eq .( [ eq : gdk - time ] ) into eq .( [ eq : balance5 ] ) we arrive at the length of the fracture or finally where characteristic values for our problem are : m , gpa , pa s , m/s , mpa ( see table [ tab : param ] ) .these estimates give us m / s .substituting eqs .( [ eq : l - beta ] ) and ( [ eq : beta ] ) into eq .( [ eq : gdk - time ] ) , yields the time dependence of the fracture aperture .the evolution of the fracture length and aperture are shown in figure [ fig : evolution ] .we note that initially the fracture propagation is very fast : meters within less then a minute after initiation .however , such a rate is similar to the rate of propagation of hydraulic fractures . ) , ( [ eq : l - beta ] ) and ( [ eq : beta ] ) for the physical parameters from table [ tab : param ] .the parameters of the mixture were calculated using the average weighted with the saturations . ]although in our schematic in figure [ fig : scheme ] the fracture is vertical , the solution we derive is applicable to a fracture propagating in any direction . in the current subsectionwe consider a vertical fracture only . the driving force for fracture propagation is the difference between the fluid pressure in the fracture and the confining total horizontal stress in the caprock . in the analytical solution for the fracture length eq .( [ eq : l - beta ] ) is assumed constant , i.e. we assume that neither the confining stress nor hydrostatic pressure change with depth .both effects can be important when the fracture propagates large enough distances toward the earth surface .let us estimate how these values vary with the depth , i.e. 
consider and .the values of are negative for the considerations below ( fig .[ fig : scheme ] ) .the total horizontal stress in the caprock is by definition the sum of the effective horizontal stress and the water pressure in the caprock here we use the sign convention for soil mechanics : compressive stresses have positive values .the caprock is saturated with brine ( water ) and co from the aquifer does not enter it ; also the pressure in the caprock is not perturbed by the high pressure in the aquifer due to low permeability of shale .the effective horizontal stress is related to the effective vertical stress through the lateral stress coefficient the vertical effective stress can then be calculated easily : where corresponds to the caprock - aquifer boundary , and are the porosity and density of the caprock respectively , is the density of the brine ( water ) , and is the gravitational acceleration . evidently , according to eq .( [ eq : sigma_v ] ) the effective vertical stress decreases with elevation , since the possible values of are negative . the fluid pressure in the fracture at a certain height is given by where is the fluid density , calculated in accordance with the value of residual saturation ( table [ tab : param ] ) .the water pressure in the caprock also changes with the depth finally , collecting eqs .( [ eq : sigma_h ] ) ( [ eq : p_w ] ) , we obtain the dependence of the driving force on the depth g y,\ ] ] where the last term is a positive value , increasing with elevation ( due to ) . substituting the values of the physical parameters for our system , and using , calculated based on in situ stresses reported in , we obtain the increase of the driving force per 1 meter of decrease of the depth , i.e. where g = 7.1 ~ \mathrm{kpa / m}\ ] ] the driving force for the fracture propagation at the fracture tip is determined by eq .( [ eq : elevat ] ) with .therefore the analytical solution of the diffusion problem can not be readily modified to take the hydrostatic and geostatic effects into account . eq .( [ eq : elevat ] ) shows that the rate of the fracture propagation increases monotonically and our prediction for the rate of propagation is an estimate from below .we assumed that the pressure in the fracture is established instantaneously .the pressure evolution is determined by the `` diffusion coefficient '' , eq .( [ eq : c_f ] ) , which is proportional to the permeability . thus , in order to assume the diffusion in the fracture is fast compared to that in the aquifer , we need the permeability of the aquifer to be much lower than the permeability inside the fracture , i.e. when the fracture propagates , its aperture increases in time according to eq .( [ eq : gdk - time ] ) .the increase of the aperture causes the increase of permeability .however , this change makes the strong inequality ( [ eq : perm ] ) even stronger . for mdthe strong inequality ( [ eq : perm ] ) is valid when the fracture aperture m. using the parameters from table [ tab : param ] , our estimates for the initial fracture aperture based on the work of ref . give the initial fracture aperture m , so the strong inequality ( [ eq : perm ] ) is fulfilled starting already from the fracture initiation .the safety of co storage in deep saline aquifers relies on the integrity of the caprock ; fractures in the caprock may serve as pathways for co leakage if they propagate long enough . 
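The chain of relations developed above, namely the error-function solution for the 1D pressure profile, the resulting flux into the fracture, the material balance, and a KGD/Geertsma-de Klerk-type aperture law, can be assembled numerically. The sketch below is a schematic reconstruction: the parameter values, the order-one prefactor C_W, and the particular aperture form W = C_W * [(1 - nu) * mu * (Q/Z) * L^2 / G]^(1/4) are assumptions made for illustration and should not be read as the paper's exact equations or Table 1 values.

```python
import math

# --- assumed parameters (placeholders, not the paper's Table 1 values) -----
k     = 10.0 * 9.869e-16      # permeability, m^2
phi   = 0.15                  # porosity
mu    = 5.0e-4                # fluid viscosity, Pa*s
c_tot = 1.0e-9                # total compressibility, 1/Pa
D     = k / (phi * mu * c_tot)    # pressure diffusivity, m^2/s
H     = 20.0                  # aquifer thickness, m
Z     = 100.0                 # out-of-plane extent of the fracture, m
dp    = 5.0e6                 # overpressure p_aquifer - p_fracture, Pa
G     = 10.0e9                # shear modulus of the caprock, Pa
nu    = 0.25                  # Poisson's ratio
C_W   = 2.0                   # order-one prefactor of the aperture law (assumed)

def flux_1d(t):
    """Darcy influx from both sides of the fracture, m^3/s (erf-profile solution)."""
    return 2.0 * H * Z * (k / mu) * dp / math.sqrt(math.pi * D * t)

def fracture_evolution(t):
    """Fracture length L(t) and aperture W(t) from material balance + aperture law."""
    # cumulative influx V(t) = integral of flux_1d from 0 to t
    V = 4.0 * H * Z * (k / mu) * dp * math.sqrt(t / (math.pi * D))
    Q = flux_1d(t)
    # elliptical cross-section: V = (pi/4) * L * W * Z, with the assumed
    # KGD-type aperture  W = C_W * [ (1 - nu) * mu * (Q/Z) * L^2 / G ]^(1/4)
    A = C_W * ((1.0 - nu) * mu * (Q / Z) / G) ** 0.25   # so that W = A * sqrt(L)
    # (pi/4) * A * Z * L^(3/2) = V  =>  L = [4 V / (pi * A * Z)]^(2/3)
    L = (4.0 * V / (math.pi * A * Z)) ** (2.0 / 3.0)
    W = A * math.sqrt(L)
    return L, W

for t in (1.0, 10.0, 60.0, 600.0):        # seconds after initiation
    L, W = fracture_evolution(t)
    print(f"t = {t:6.0f} s   L ~ {L:7.1f} m   W ~ {W * 1000:6.2f} mm")
```

With these placeholder numbers the fracture reaches lengths of order tens to a hundred meters within the first minute, consistent in order of magnitude with the fast propagation described in the text, and under the assumed aperture law the length grows roughly as t^(5/12).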
in this paperwe present a theoretical model for propagation of a fracture driven by fluid outflow from a low - permeability aquifer .since the pressure in the fracture is established very fast , the outflow is governed by the slower process pressure diffusion in the aquifer . by solving the 2d problem numerically ,we show that after a relatively short time it can be approximated by the solution of 1d diffusion problem .the latter is solved numerically and analytically . based on our solution of the diffusion problem , and the relation for the fracture geometry derived in the hydraulic fracture literature, we derive an analytical expression for the fracture propagation length as a function of time .our simple model can be used together with the results of geomechanical simulations when the explicit consideration of fracture propagation is not included .this approach provides an estimate to the rate of fracture propagation based on the results of continuum mechanics simulations , without involving laborious simulations of fracture propagation . using the geomechanical and material parameters for the aquifer at in salah, we predict the length of a hypothetical fracture propagation to be of the order of a hundred of meters within the first minute after initiation .this rate is extremely fat and is close to the typical rates of propagation of hydraulic fractures .we also estimate the depth correction to the driving force for the fracture propagation .we show that the changes of confining horizontal stress and hydrostatic pressure with elevation lead to an additional increase of the driving force for fracture propagation of the order of kpa per meter of elevation . therefore our estimate for the rate of the fracture propagation is an estimate from below . besides the in salah site , the proposed model is also applicable to a number of aquifers currently used for co storage . as such, the following sites have aquifers with low permeability ( md ) : nagaoka ( japan ) , alberta basin ( canada ) , mrcsp michigan basin ( usa ) , gorgon ( australia ) .fracturing of the caprock can still be a serious safety concern . in order to arrest the fracture ,the fluid pressure must be decreased .shutting down the injection will not have an immediate effect .after several years of the continuous injection the pressure in the reservoir is spread over several kilometers .therefore , even if injection is stopped , the aquifer will remain over - pressured for a long time .funding for this research has been provided by the carbon mitigation initiative ( http://cmi.princeton.edu ) sponsored by bp .we thank george scherer for fruitful discussions and allyson sgro for useful comments on the manuscript .we also thank the anonymous reviewers for the constructive comments that led to significant improvements in the manuscript .f. cappa , j. rutqvist , modeling of coupled deformation and permeability evolution during fault reactivation induced by deep underground injection of co , international journal of greenhouse gas control 5 ( 2 ) ( 2011 ) 336 346 .z. luo , s. l. bryant , influence of thermo - elastic stress on co injection induced fractures during storage , in : spe international conference on co capture , storage , and utilization , 10 - 12 november 2010 , new orleans , louisiana , usa , 2010 .z. luo , s. l. 
bryant , influence of thermoelastic stress on fracturing a horizontal injector during geological co storage , in : canadian unconventional resources conference , 15 - 17 november 2011 , alberta , canada , 2011 .m. preisig , j. h. prvost , coupled multi - phase thermo - poromechanical effects .case study : co injection at in salah , algeria , international journal of greenhouse gas control 5 ( 4 ) ( 2011 ) 1055 1064 .humez , p. audigane , j. lions , c. chiaberge , g. bellenfant , modeling of co leakage up through an abandoned well from deep saline aquifer to shallow fresh groundwaters , transport in porous media 90 ( 1 ) ( 2011 ) 153181 .j. smith , s. durucan , a. korre , j .- q .shi , carbon dioxide storage risk assessment : analysis of caprock fracture network connectivity , international journal of greenhouse gas control 5 ( 2 ) ( 2011 ) 226240 .j. rutqvist , d. w. vasco , l. myer , coupled reservoir - geomechanical analysis of co injection and ground deformations at in salah , algeria , international journal of greenhouse gas control 4 ( 2 ) ( 2010 ) 225 230 .r. h. nilson , similarity solutions for wedge - shaped hydraulic fractures driven into a permeable medium by a constant inlet pressure , international journal for numerical and analytical methods in geomechanics 12 ( 5 ) ( 1988 ) 477495 .j. h. prvost , dynaflow : a nonlinear transient finite element analysis program .department of civil and environmental engineering , princeton university , princeton , nj ( 1981 ) .( last update 2013 ) .j. p. morris , y. hao , w. foxall , w. mcnab , a study of injection - induced mechanical deformation at the in salah co storage project , international journal of greenhouse gas control 5 ( 2 ) ( 2011 ) 270 280 .
|
Deep saline aquifers are promising geological reservoirs for CO2 sequestration if they do not leak. The absence of leakage is provided by the caprock integrity. However, CO2 injection operations may change the geomechanical stresses and cause fracturing of the caprock. We present a model for the propagation of a fracture in the caprock driven by the outflow of fluid from a low-permeability aquifer. We show that to describe the fracture propagation, it is necessary to solve the pressure diffusion problem in the aquifer. We solve the problem numerically for the two-dimensional domain and show that, after a relatively short time, the solution is close to that of the one-dimensional problem, which can be solved analytically. We use the relations derived in the hydraulic fracture literature to relate the width of the fracture to its length and the flux into it, which allows us to obtain an analytical expression for the fracture length as a function of time. Using these results we predict the propagation of a hypothetical fracture at the In Salah CO2 injection site to be as fast as a typical hydraulic fracture. We also show that the hydrostatic and geostatic effects cause the increase of the driving force for the fracture propagation and, therefore, our solution serves as an estimate from below. Numerical estimates show that if a fracture appears, it is likely that it will become a pathway for CO2 leakage.
|
the main objective of superdarn radars is to study the spectral characteristics of ionospheric irregularities elongated with the earth magnetic field , that allows one to study the dynamics of ionospheric electric fields at high latitudes .it should be noted that the signal scattered by elongated irregularities can be divided into 4 main types , wherein the separation is carried out on the details of their spectral characteristics .however , the study of the dynamics of these irregularities is practically impossible without a detailed analysis of the spectral structure of the scattered signal .the main difficulty of using superdarn radars for studying the spectral properties of the scattering irregularities is the shape of the sounding signal .currently for superdarn the two types are used : standard 7-pulse and 8-pulse katscan sequences .the use of these sequences for the detailed study of the spectral characteristics is almost impossible , since the spectral resolution provided by these sequences is too poor .it is equivalent to the 100 - 200 m / s for the speed . at the moment, the problem of low spectral resolution on superdarn radars is solved by complex techniques that allow one to estimate the doppler drift and the average spectral width of the signal without obtaining the spectrum , but based on the analysis of the correlation function and its phase structure , as well as on the basis of model assumptions .the fine structure of the spectra is frequently finer than 200 m/s , so for stable recognition and investigation of these structures the standard measurments require improving spectral resolution at least 2 - 3 times or use model - based inversion techniques .it is clear that in some cases the model representations of the scattered signal are not in agreement with experiment , and in this case event leads to significant errors in the estimation of parameters .these errors may lead for example to the appearance of paradoxical negative spectral widths .an example of the signal distribution over the velocities and the spectral width at ekb radar is shown at fig.[fig:1 ] .impossibility of calculating the spectra of superdarn radar signals leads to complications in recovery methods for spectral parameters of the irregularities . at fig.[fig:1 ] one can see one consequence of these errors - the negative spectral widths calculated by fitacf program .these errors are associated with the processing algorithms for correlation functions and caused by the inability to restore the spectrum of the scattered signal from sounding results with standard sounding signals .thus , the problem of using optimal sounding signals in the superdarn technique has the one of the important values .feature of superdarn radars functioning is the requirement that the product of the spectral resolution and spatial resolution to be significantly less than the speed of light in vacuum .this means that the use of simple pulse signals ( for which this product is equal to the half of velocity of light ) in this problem is impossible and one needs the use of special signals in conjunction with additional approximations depending on the nature of the scattering .sequences and processing methods that implement these special signals , have been well studied in the incoherent scattering technique : these are multipulse sequences , random phase codes , alternating codes ( including the polyphase ones ) and the technique of effective subtraction . 
however , in the superdarn technique the product of necessary spectral resolution and range resolution is so small that one can effectively use only multipulse sequences . in this case , the signal reception is carried out by means of the transmission of the short pulses , which form the sounding sequence. therefore , superdarn radars currently used only multipulse sounding sequences that have necessary characteristics - 7-pulse standard sequence and 8-pulse katscan sequence , implementing the principle described in , as well as 13-pulse sequence based on mixing forward and reverse optimal golomb sequences .the convenient basis for the understanding of backscattering techniques is the concept of the weight volume .weight volume shows how much of the correlation function of the received signal is related to the correlation function of inhomogeneities , and corresponding set of ranges .the relationship between the scattering cross section of irregularities and the average correlation function of the received signal measured in the moment after transmitting sounding signal in a first approximation is given by the expression : where c - speed of light .the shape of the weight volume with standard correlation processing technique is determined by the shape of the sounding signal envelope : the spatial resolution varies for each of the delay of the correlation function , and is determined as the width of the weight volume at the fixed delay . from the structure of the weight volumeit is obvious that the spatial resolution at zero delay ( lag ) is determined by the total duration of the signal and can not be improved ( except in special subtraction techniques like ) .however , the spatial resolution at the other lags depends of the signal shape .resolution over the lags is inversely proportional to the spectral resolution and is equal to the total duration of the signal .for superdarn sounding signal , that is a sequence of elementary almost rectangular pulses ( 300us ) , spaced by multiples of ( n * 2.4ms ) and having total duration ( 64.8ms ) , we can define a spatial resolution of as width of weight volume for a given delay : property of the function is that it reaches the value 1 at the good lags , and 0 - at bad lags .if then this lag is with poor spatial resolution larger than the duration of a short elementary pulse , we will not take such sounding signals into consideration. therefore , the for synthesis of optimal signal , the requirements are easy to formulate as the requirements to ( and through it - to the weight volume and to the sounding signal ) , which is formed by the shape of the sounding signal . in the case of the superdarn techniquethe requirements for optimal signal are following : \1 .the weight volume is such that at all the lags , except zero one , the spatial resolution for all the lags that are multiples of the elementary interpulse period is excellent - not less than that of a single pulse : the requirement of the excellent spatial resolution for each point is based on the following qualitative consideration . in casesome of the points are made with worse spatial resolution , the energy of the weight volume for this lag is lost for analysis , so in this case this part of the energy of transmitted signal is used not for obtaining correlation function . 
as a result using signals , having points with spatial resolution worser than single pulse will decrease actual signal - to - noise ratio and require additional accumulation .examples of pulses having points with worse spatial resolution can be found , for example , in .lags at which the spatial resolution is equal to the elementary pulse one ( ) , are considered to be good ( good lags ) , and at which the correlation function of the scattered signal does not correspond to the scattering from the medium ( ) - are considered to be bad ( bad lags ) .3 the ratio of good lags number in the correlation function to the total number of lags in the correlation function must be maximized for a given number of pulses in the sequence .requirements and definitions 1 - 2 are known and used long time , requirement 3 ( optimality criterion for sounding sequence ) introduced for reasons of most efficient use of the received data - the highest possible number of lags in correlation functions of received signal should be used for processing , the same considerations were used in . to find the optimal sounding signals, we used a parallel algorithm for search of optimal sequences with maximal within the class of sequences with a fixed number of pulses .the search was made by using computing clusters isc sb ras blackford and academician v.m.matrosov ( http://hpc.icc.ru/ ) by monte carlo technique , modified for taking into account the peculiarities of the desired sequences . as a result of the calculations we obtained 8,9,10,11,12 and 13 pulse sequences .the sequences obtained and their characteristics are summarized in table [ tab:1 ] .table [ tab:1 ] also shows the characteristics of the optimal 7-pulse sequence obtained in , as well as 7- and 8-pulse sequences used in the superdarn method .superdarn 13-pulse sequence was not considered in the comparison because it has a number of lags with a spatial resolution worse than the elementary pulse duration .table [ tab:1 ] shows that the standard superdarn 7-pulse sequence is close to the optimum , while the katscan superdarn 8-pulse sequence is much less effective than 8 and 9 and the pulse sequence calculated by us . for a more detailed comparison of these sequences at figure 2we show plots of for these sequences . from these graphsit is clear that the sequences obtained by us have more good lags than 7- and 8-pulse sequences used previously .figure 2 shows that the found 9-pulse sequence ( ekb 36 - 44 ) has almost the same total duration as used now by 8-pulse katscan sequence , but the average power emitted by the radar in this case is 1.125 times more , which means it also more efficient in terms of energy . from the point of view of significant lags ,the found 9-pulse sequence has 28% more significant good lags than 8-pulse katscan , and its first bad lag is 36 instead of 6 for katscan .this makes 9-pulse ekb 36 - 44 more efficient for measurements than the standard 8-pulse katscan . 
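The good-lag bookkeeping behind comparisons like these is easy to reproduce for any candidate pulse pattern. In the sketch below a sequence is given by its pulse positions in units of the elementary inter-pulse period; a lag is counted as good when at least one pulse pair realizes it, and the uniqueness of pairwise differences (the Golomb property) guarantees it is realized at most once. The 7-pulse pattern used in the example is the commonly quoted standard SuperDARN pattern and is included here as an assumption, since the paper's tables are not reproduced above.

```python
from itertools import combinations

def lag_coverage(pulses):
    """Good/bad-lag statistics for a multipulse pattern.

    `pulses` are pulse start times in units of the elementary
    inter-pulse period (integers, first pulse at 0).
    """
    diffs = [b - a for a, b in combinations(sorted(pulses), 2)]
    realized = set(diffs)
    length = max(pulses) - min(pulses)          # total number of lags
    bad = [k for k in range(1, length + 1) if k not in realized]
    first_bad = bad[0] if bad else length + 1
    return {
        "length": length,
        "n_good": len(realized),
        "efficiency": len(realized) / length,
        "first_bad_lag": first_bad,
        "is_golomb": len(diffs) == len(realized),   # every difference unique?
        "bad_lags": bad,
    }

# Example: a commonly quoted standard 7-pulse SuperDARN pattern (assumed here)
print(lag_coverage([0, 9, 12, 20, 22, 26, 27]))
```

For this assumed pattern the function reports 21 good lags out of 27, a first bad lag of 16 and the Golomb property; the corresponding figures for the paper's candidate sequences would follow from its tables.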
as can be seen from table [ tab:1 ] and fig.[fig:2 ] , efficiency for the found 10 - 13-pulse sequences also close to the standard 7-pulse sequence efficiency , butallow to analyze more lags in the correlation function , and the first bad lag is always bigger than the first bad lag of standard 7-pulse sequence .analysis of the shape of the weight volume shows that the requirement of a minimum spatial resolution at all points is consistent with the requirement of uniqueness interpulse intervals in the sounding sequence .this requirement corresponds to the well - studied object in mathematics - golomb rulers .analysis has shown that optimality criterion for superdarn sounding signal we have formulated ( the maximum number of good lags in the weight volume ) meets the criterion of optimality of golomb rulers ( the maximum number of measurable numbers ) , and the task of finding the optimal sequence thus reduces to the well - known and developed problem of finding the optimal golomb rulers . for golomb rulers to date optimal sequences up to 27 are numerically found , and the proof is kept now for the optimal power 28 sequences by the network computing project ogr-28 .optimal golomb rulers are widely used in various practical problems .table [ tab:2 ] shows the sounding signals corresponding to optimal golomb rulers with , as well as their characteristics - number of all lags l and good lags m in correlation function , the efficiency and the average signal power p. at table [ tab:2 ] we show the first bad lag for each of the sequences .the table shows that the maximum length of part without bad lag that increases monotonically with increasing number of pulses is provided by 8,9,10,12,14,18,20,24 and 26 pulse sequences .the rest of the sequences is much more worse .nearly optimal golomb sequences differ from the optimal sequences by slightly larger length and may also be used as a basis for sounding signals .we will refer the sounding sequences constructed from the condition of maximal first bad lag in the class of optimal and nearly optimal golomb sequences ( ) as quasioptimal ones .we will refer the part of these sequences that correspond to the optimal golomb sequences as optimal ones .quasioptimal sequences provide a maximum of the first bad lag , but do not always have the minimal possible length .optimal sequence realize the maximal first bad lag and minimal length at the same time .table [ tab:3 ] shows the quasioptimal sequences for .an asterisk denotes optimal sequences . 
for the values of 21 - 23 and 25,27quasioptimal sequences are not listed due to lack of available data .table [ tab:3 ] shows that the standard 7-pulse superdarn signal is quasioptimal 7-pulse sequence in the framework of this approach , but 8-pulse katscan superdarn signal , as expected from table [ tab:1 ] , is not quasioptimal signal .if not to pay much attention to the relative amount of good lags in the correlation function , but only to the distance to the first bad lag , any of the quasioptimal sequences ( table [ tab:3 ] ) can be used for spectral measurements .as one can see from ( table [ tab:3 ] ) the sequence providing maximal relative efficiency over all the parameters ( maximal relative number of good lags and maximal relative position of first bad lag ) is 10-pulse optimal golomb ruler ( table [ tab:3 ] ) .briefly , two problems of using new sounding signals must be taken into account .the first problem is reducing the signal / noise ratio which decreases as the ratio of the number of pulses in a sequence to its full length due to signals scattered from the uncorrelated ranges .the second problem is more significant for superdarn radars , is consists in the statistical nature of the sounding , and so in order to determine the parameters of scattered signal one need statistical averaging .the accumulation time of the averaged signal for a fixed number of soundings is proportional to the total sequence length , which actually leads to increase in the required accumulation time as a square of the number of pulses in the sequence .therefore , when choosing of the sounding sequence ( increasing the number of pulses in the sounding sequence ) researcher is limited by required time resolution and radar energy . since the scattering cross - section of the irregularities investigated by superdarn radars is much higher than the cross section of incoherent scattering , we should expect that the optimal number of pulses in the sounding sequence may be greater than 7 suggested in , and we can use for the sounding more complex signals obtained in this work . as can be seen from the above that from the optimal and nearly optimal golomb rulers the quasioptimal sounding superdarn signals can be chosen . from these quasioptimal signals one can select the signal optimal for researcher based on the necessary temporal resolution and the energetic potential of the radar . for comparison ,when using 12-pulse quasioptimal sequence it is necessary to accumulate 3 times longer than for the standard 7-pulse superdarn signal , by taking into account the increased signal length .for a 10-pulse sequence accumulation time increases only twice .on the basis of optimal and nearly optimal golomb rulers , we have constructed quasioptimal superdarn sounding signals for the number of pulses from 7 to 26 ( table [ tab:3 ] ) , which simultaneously provide a relatively high efficiency ( differing from the optimal by not more than 10% ) and maximal first bad lag . 
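The selection rule defined above, namely keeping, among Golomb rulers with a given number of marks and a bounded length, the one with the largest first bad lag (preferring the shorter ruler on ties), can be reproduced by a small backtracking search. The sketch below reuses lag_coverage() from the previous snippet; the length bound is a user-supplied input and the search is practical only for modest pulse numbers, so it is a verification aid rather than a replacement for the cluster-based search mentioned earlier.

```python
def golomb_rulers(n_marks, max_length):
    """Yield all Golomb rulers with `n_marks` marks, first mark at 0, length <= max_length."""
    def extend(marks, diffs):
        if len(marks) == n_marks:
            yield tuple(marks)
            return
        for nxt in range(marks[-1] + 1, max_length + 1):
            new = [nxt - m for m in marks]
            if any(d in diffs for d in new):
                continue                        # repeated difference: not Golomb
            yield from extend(marks + [nxt], diffs | set(new))
    yield from extend([0], set())

def quasioptimal(n_marks, max_length):
    """Ruler maximizing the first bad lag; the shorter ruler wins ties."""
    best, best_key = None, None
    for ruler in golomb_rulers(n_marks, max_length):
        stats = lag_coverage(ruler)             # from the previous sketch
        key = (stats["first_bad_lag"], -stats["length"])
        if best_key is None or key > best_key:
            best, best_key = ruler, key
    return best, best_key

# e.g. quasioptimal(7, 27) scans 7-pulse rulers no longer than 27 elementary periods
```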
to verify the assumption of the effectiveness of obtained long sounding sequences for superdarn radars and to obtain the first spectra of scattered signals with high spectral resolution , we conducted experiments at the ekb radar of the russian segment of superdarn radars .as showed the preliminary analysis , from the spectral measurements point of view the optimal sounding sequences for superdarn ( providing at the same time the maximal relative amount of good lags and maximal first bad lag ) seems to be 10,12,14,18,20,24 and 26-element golomb rulers ( table [ tab:3 ] ) .however , with increasing of the pulses number in the sequence significantly increases the required averaging time , so experimental verification was carried out only for relatively short 8 , 9 , 10 and 12-pulse sounding sequences .the first two of them - are quasioptimal , the last two - optimal ones .standard formulation of the experiment was to switch every 12 minutes between the 7,8,9,10 and 12 quasioptimal sounding sequences ( table [ tab:3 ] ) . in this case , the used 7-pulse sequence at the same time is a standard superdarn signal .compared to the standard mode of ekb radar , the accumulation time was increased twice ( from 4 up to 8 seconds ) to compensate for the increased duration of the sounding signal . to calculate the spectra we used the wiener - khinchin theorem , the discrete fourier transform with a window of 1000 samples and step interpolation of the correlation function at bad lags .further , all the spectral data will be given in doppler velocity units ( m / s ) , which is quite a standard representation for superdarn radar data .the experimental results are shown below . to compare parameters obtained by spectral and correlation processing ,the spectra were processed by the following simple algorithm : to reduce the influence of concentrated interference the average spectral power averaged over all radar gates ( distances ) is subtracted from spectral power at each range gate ; as the signal power the maximal spectral amplitude were used ; as the average doppler drift the first moment of the spectrum was used ; as the average spectral width the second central moment of the spectrum was used ; calculations of spectral parameters were carried out in a range of speeds + /-900 m/s to reduce the impact of concentrated interference at the 1000 m / s ( see fig.[fig:1 ] ) .comparison of dynamics of these parameters is shown at fig.[fig:3 ] .the comparison shows that the distributions of ranges with maximal signal amplitude , the maximal power and doppler drift for this range measured in the experiments are quite similar , and weakly dependent on the type of signal , which indicates validity of the results obtained from the scattered signal spectrum .spectral width is reduced with increasing number of pulses in the sequence . at fig.[fig:4 ]we show the velocities obtained by fitacf technique and spectral method . when comparing the velocity the comparison was made over the points , at which the fitacf technique prodices the correct result with signal - to - noise ratio .one can see the average linear proportionality of data obtained in the case of 7 , 8,9,10 pulse sequences , indicating the qualitative applicability of fitacf to interpret the data obtained by new signals , and the correctness of the signals themselves .similar results were obtained when we use the lmfit technique . 
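The processing chain enumerated above (Wiener-Khinchin reconstruction with interpolation at bad lags, then maximum power, first-moment Doppler velocity and second-central-moment width inside +/-900 m/s) can be sketched as follows. The carrier frequency, the interpretation of the 1000-sample window as zero-padding, the use of linear rather than step interpolation, and the omission of the gate-averaged interference subtraction are simplifying assumptions of this sketch.

```python
import numpy as np

C_LIGHT = 3.0e8

def acf_to_spectrum(acf, good_mask, lag_step, f0=10.0e6, n_fft=1000):
    """Doppler spectrum of one range gate from its complex ACF with missing lags.

    acf       : complex ACF samples at lags 0 .. N-1 (lag spacing `lag_step`, s)
    good_mask : boolean array, True where the lag is good
    f0        : radar carrier frequency, Hz (placeholder value)
    """
    lags = np.arange(len(acf))
    # fill bad lags by interpolating real and imaginary parts separately
    filled = (np.interp(lags, lags[good_mask], acf.real[good_mask])
              + 1j * np.interp(lags, lags[good_mask], acf.imag[good_mask]))
    # Wiener-Khinchin with R(-tau) = conj(R(tau)):
    #   S(f) = 2 * Re[ sum_{tau>=0} R(tau) exp(-2 pi i f tau) ] - R(0)
    spec = 2.0 * np.real(np.fft.fft(filled, n_fft)) - filled[0].real
    spec = np.fft.fftshift(np.clip(spec, 0.0, None))
    freq = np.fft.fftshift(np.fft.fftfreq(n_fft, d=lag_step))
    vel = freq * C_LIGHT / (2.0 * f0)            # Doppler velocity axis, m/s
    return vel, spec

def spectral_moments(vel, spec, v_max=900.0):
    """Peak power, mean Doppler velocity and spectral width inside |v| <= v_max."""
    sel = np.abs(vel) <= v_max
    v, s = vel[sel], spec[sel]
    power = s.max()
    v_mean = np.sum(v * s) / np.sum(s)                          # first moment
    width = np.sqrt(np.sum((v - v_mean) ** 2 * s) / np.sum(s))  # second central moment
    return power, v_mean, width
```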
at fig.[fig:5 ] we show a similar comparison between the spectral widths , calculated by the standard fitacf technique and spectral methods . at the fig.[fig:5 ] one can see the area of greatest concentration of values that are related to the constant spectral width of the spectrum at a level correspondent to the theoretical spectral broadening for a given sequence . atthe same the spectral width of these signals , calculated by the standard fitacf technique , differs significantly from the calculated from the spectrum .it should be noted that in some cases , especially for 7 , 10 and 12-pulse sequences the high spectral widths calculated by fitacf technique , are close to the spectral widths calculated from the shape of the spectrum .this proves that high spectral widths are calculated well by fitacf technique for new pulses too . at fig.[fig:6 ] we show the comparison of the scattered signal spectra in the case of the ground backscatter signal ( fig.[fig:6]a ) and ionospheric scatter ( fig.[fig:6]b ) for the case of the standard 7-pulse sequence and the new quasioptimal 8,9,10 and 12-pulse sequences .ranges are identical , measurement time is approximately the same ( with 1 hour accuracy ) .accumulation time 8 seconds . from fig.[fig:6]aone can see that there is an effective narrowing of the scattered signal spectrum , as it is expected for using of a longer sounding sequence and is characteristically for the ground backscatter signal , that is narrowband enough .doppler shift of the signal at the same time is less than 20 m/s , which is also quite typical for ground backscatter signal . sometimes , for example in the case of a 12-pulse sequence ( fig.[fig:6]a ) the spectrum has asymmetrical form , indicating the possible presence of multiple modes in the received signal .thus , the ground backscatter signal showed that the parameters obtained by new sounding signals are in qualitative agreement ( low spectral width , small doppler drifts ) with the expected characteristics . from fig.[fig:6]b onecan see that sometimes the signal scattered by ionospheric irregularities has an asymmetric spectra , sometimes with weak additional peaks ( as well seen for 12- and 10-pulse sequences at fig.[fig:6]b ) ) .standard 7-pulse sequence in spectral technique is less sensitive to these effects , and therefore it is necessary to use more sophisticated inversion techniques for the analysis of cases with complex or asymmetric spectra of the scattered signal .in this paper we formulate a criterion of quasioptimal superdarn sounding signals for spectral measurements , which consists from simultaneous requirements : to maximize the first bad lag number in the correlation functions of the received signal ; to decrease the density of bad lags , close to the absolute minimum ; to provide spatial resolution that not worser than elementary pulse duration at all delays except zero .solution of the problem corresponds to finding golomb rulers , satisfying the specified conditions . 
on the basis of optimal and nearly optimal golomb rulers we constructed quasioptimal sounding sequences satisfying the formulated principles , for the number of elementary pulses 7 - 20,24,26 ( table [ tab:3 ] ) .it is shown that according to this approach the standard superdarn 7-pulse sounding signal is quasioptimal one , and 8-pulse katscan superdarn sounding signal is not quasioptimal one .it is shown that the optimal ( for which all the required conditions are met , and the length of sequence becomes the minimal possible ) from this point of view are 10 , 12 , 14 , 18 , 20 and possibly 24 and 26-pulse optimal golomb rulers ( table [ tab:3 ] ) .it is shown that the sequence providing maximal relative efficiency over all the parameters ( maximal relative number of good lags and maximal relative position of first bad lag ) is 10-pulse optimal golomb ruler ( table [ tab:3 ] ) .experiments were made for sounding by the quasioptimal 8 , 9 , 10 and 12-pulse sequences at ekb radar and it is shown the continuity of the obtained data with data obtained by the standard 7-pulse sequence .we also obtained the expected improvement in spectral resolution .the authors are grateful to isdct sb ras for providing computing clusters isc sb ras - blackford and academician v.m.matrosov ( http://hpc.icc.ru ) for numerical calculations .the authors thank michael feiri ( university of twente , enschede , netherlands ) for the results of the calculation of nearly optimal golomb sequences for ( http://www.feiri.de/ogr/nearopt.html ) and pavlo ponomarenko ( university of saskatchevan , saskatoon , canada ) for useful discussions .16 barthes , l. , r. andr , j .- c .cerisier , and j .-p . villain ( 1998 ) , separation of multiple echoes using a high - resolution spectral analysis for superdarn hf radars// _ radio sci .33_(4 ) , 10051017 , doi:10.1029/98rs00714 .berngardt o.i . and d.s .kushnarev , effective subtraction technique at the irkutsk incoherent scatter radar : theory and experiment// _ journal of atmospheric and solar - terrestrial physics _ , _ 105 - 106 _ , 293298 , 2013 , doi : 10.1016/j.jastp.2013.03.023 chisham g. , m. lester , s. e. milan , m. p. freeman , w. a. bristow , a. grocott , k. a. mcwilliams , j. m. ruohoniemi , t. k. yeoman , p. l. dyson , r. a. greenwald , t. kikuchi , m. pinnock , j. p. s. rash , n. sato , g. j. sofko , j .-villain , a. d. m. walker , a decade of the super dual auroral radar network ( superdarn ) : scientific achievements , new techniques and future directions//_surveys in geophysics _ , _28_(1 ) , 33109 , 2007 , doi : 10.1007/s10712 - 007 - 9017 - 8 danskin , d. w. , a. v. koustov , r. a. makarevitch , and m. lester , observations of double - peaked e region coherent spectra with the cutlass finland hf radar// _ radio sci .39_(2 ) , rs2006 , 2004 , doi:10.1029/2003rs002932 .greenwald r.a . ,k.oksavik , r.barnes , j. m.ruohoniemi , j.baker and e.r .talaat , first radar measurements of ionospheric electric fields at sub - second temporal resolution//_geophys.res.lett .35_(3 ) , l03111 , 2008 hanuise c. , villain j.p ., gresillon d. , cabrit b. , greenwald r.a . ,baker , k.b . , interpretation of hf radar ionospheric doppler spectra by collective wave scattering - theory//annales geophysicae , 11(1 ) , 2939 , 1993 ribeiro a. j. , j. m. ruohoniemi , p. v. ponomarenko , l. b. n. clausen , j. b. h. baker , r. a. greenwald , k. oksavik , s. de larquier , a comparison of superdarn acf fitting methods//_radio sci .48_(3 ) , 274282 , 2013 , doi : 10.1002/rds.20031 schiffler a. 
, superdarn measurements of double - peaked velocity spectra , m.sc . thesis , university of saskatchewan , saskatoon , 1996 schiffler a. , g. sofko , p.t . newell and r. greenwald , mapping the outer llbl with superdarn double - peaked spectra//_geophysical research letters _ , _24_(24 ) , 3149 - 3152 , 1997 , doi : 10.1029/97gl53304 sulzer , m. p. , a radar technique for high range resolution incoherent scatter autocorrelation function measurements utilizing the full average power of klystron radars// _ radio sci . 21_(6 ) , 1033 - 1040 , 1986 , doi:10.1029/rs021i006p01033 .
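as an illustration of the construction principle discussed above , the sketch below checks whether a candidate pulse sequence is a golomb ruler ( all pairwise pulse separations distinct ) and lists the lags produced by exactly one pulse pair ; the example marker set , the unit elementary lag and this simplified notion of a ` good lag ' are assumptions made for the illustration , not the exact criteria or sequences used by the authors .

```python
# illustrative sketch (assumed simplifications): pulse positions are given in units of
# the elementary inter-pulse lag; a lag is counted as "good" here if it is produced by
# exactly one pulse pair, a simplified stand-in for the paper's criteria.
from itertools import combinations
from collections import Counter

def is_golomb_ruler(marks):
    """True if all pairwise differences of the marks are distinct."""
    diffs = [b - a for a, b in combinations(sorted(marks), 2)]
    return len(diffs) == len(set(diffs))

def lag_coverage(marks):
    """Count, for each lag value, how many pulse pairs produce it."""
    return Counter(b - a for a, b in combinations(sorted(marks), 2))

def good_lags(marks):
    """Lags produced by exactly one pulse pair (unambiguous in the ACF estimate)."""
    return sorted(lag for lag, n in lag_coverage(marks).items() if n == 1)

# example: the 7-mark optimal Golomb ruler of length 25 (used only to illustrate the
# check, not as any radar's actual sounding sequence)
ruler7 = [0, 1, 4, 10, 18, 23, 25]
print(is_golomb_ruler(ruler7))               # True: every separation occurs exactly once
print(len(good_lags(ruler7)), max(ruler7))   # number of good lags vs. sequence length
```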
|
the requirements for an optimal sounding sequence for a superdarn radar are stated . the obtained quasioptimal signals , up to the 26-pulse sequence , are based on optimal and nearly optimal golomb rulers . it is shown that the standard 7-pulse superdarn sequence is one of the quasioptimal sounding signals , but the 8-pulse katscan superdarn signal is not . the characteristics are calculated for quasioptimal sounding sequences up to the 26-pulse one . it is shown that the most effective signals for spectral measurements are the 10 , 12 , 14 , 18 , 20 , 24 and 26-pulse sequences . we present the results of the first spectral measurements at the ekb superdarn radar with the quasioptimal 8- and 9-pulse sequences , as well as with the optimal 10- and 12-pulse sequences . a comparison with the results obtained with the standard 7-pulse superdarn sequence was made . the continuity of the results in amplitude and velocity is shown , and the expected improvement in spectral width is also demonstrated .
|
the existence of a new basic phase in vehicle flow on multilane highways called the synchronized motion was recently discovered by kerner and rehborn , impacting significantly the physics of traffics as a whole . in particular , it turns out that the spontaneous formation of moving jams on highways proceeds mainly through a sequence of two transitions : `` free flow synchronized motion stop - and - go pattern '' . besides , all these transitions exhibit the hysteresis . as follows from the experimental data the synchronized mode is essentially a multilane effect .recently kerner assumed that the transition `` free flow synchronized mode '' is caused by `` z''-like form of the overtaking probability depending on the car density .there have been proposed several macroscopic models dealing with multilane traffic flow .both these models specify the traffic dynamics completely in terms of the car density , mean velocity , and , may be , the velocity variance or ascribe these quantities to the vehicle flow at each lane individually .nevertheless , a quantitative description of the synchronized mode is far from being developed well because of its complex structure . in particular, it can form the totally homogeneous ( _ i _ ) and homogeneous - in - speed ( _ ii _ ) flows . especially in the latter case there is no explicit relationship between the mean car velocity and density , with the value of being actually constant and less then that of free flow .the other important feature is the key role of some cars bunched together and traveling much faster than the typical ones , which enables to regard them as a special car group .therefore , in the synchronized mode the function of car distribution in the velocity space should have two maxima and we will call such fast car groups platoons in speed . these features of the synchronized mode have been substantiated also in using single - car - data . in particular , it has been demonstrated that the synchronized mode exhibits small correlations between fluctuations in the car flow , velocity and density .there is only a strong correlation between the velocities at different lanes taken at the same time and decreasing sufficiently fast as the time difference increases .by contrast , there are strong long - time correlations between the flow and density in the free flow state as well as the stop - and - go mode . keeping in mind a certain analogy with aggregation processes in physical systems mahnke et al . proposed a kinetic model for the formation of the synchronized mode treated as the motion of a large car cluster . in the present paper following practically the spirit of the landau theory of phase transitions we develop a phenomenological approach to the description of this process .we ascribe to the vehicle flow an additional _ internal _ parameter will be called below the order parameter characterizing the possible correlations in the vehicle motion at different lanes and write for it a governing equation . for the car motion where drivers do not change laneat all we set , in the opposite limit .for fixed values of and the order parameter is assumed to be uniquely determined , thus , for a uniform vehicle flow we write : where is the delay time and the function fulfills the inequality : we note that the time characterizes the delay in the driver decision of changing lanes but not in the control over the headway , so , this delay can be prolonged . 
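a minimal numerical sketch of this relaxation dynamics is given below , assuming a toy cubic form for the function ( the paper leaves it phenomenological ) whose zero set is s - shaped in the ( density , order parameter ) plane ; slowly sweeping the density up and then down anticipates the hysteresis discussed in the following paragraphs . all constants are illustrative assumptions .

```python
# toy illustration (assumed functional form): relaxation  tau * dh/dt = -phi(h, rho)
# with a cubic phi whose zero set is s-shaped in the (rho, h) plane; the middle branch
# has d(phi)/dh < 0 and is unstable, so a slow sweep of the density up and then down
# jumps between the lower ("free-flow-like") and upper ("synchronized-like") branches
# at two different densities, i.e. it traces a hysteresis loop.
import numpy as np

tau, a, b, rho_c = 1.0, 0.12, 0.5, 0.5           # illustrative constants, not from the paper

def phi(h, rho):
    return (h - 0.5)**3 - a * (h - 0.5) - b * (rho - rho_c)

def sweep(rhos, h0, dt=0.01, settle=6000):
    """quasi-statically sweep the density and record the relaxed order parameter."""
    h, out = h0, []
    for rho in rhos:
        for _ in range(settle):                  # relax h at (approximately) fixed density
            h = np.clip(h - dt * phi(h, rho) / tau, 0.0, 1.0)
        out.append(h)
    return np.array(out)

rho_up = np.linspace(0.3, 0.7, 81)
h_up = sweep(rho_up, h0=0.05)                    # lower branch, loses stability near rho_c + 0.03
h_down = sweep(rho_up[::-1], h0=h_up[-1])        # upper branch, loses stability near rho_c - 0.03
jump_up = rho_up[np.argmax(np.diff(h_up))]
jump_down = rho_up[::-1][np.argmin(np.diff(h_down))]
print(f"upward jump near rho = {jump_up:.3f}, downward jump near rho = {jump_down:.3f}")
```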
the particular value of the order parameter results from the compromise between the danger of an accident during changing lanes and the will of driver to move as fast as possible .obviously , the lower is the mean vehicle velocity for a fixed value of , the weaker is the lane - changing danger and the stronger is the will to move faster .besides , the higher is the vehicle density for a fixed value of , the stronger is this danger ( here the will has no effect ) .thus , the dependence is an decreasing function of and , so , due to ( [ 2.2 ] ) : with the latter inequality being caused by the danger effect only .equation ( [ 2.1 ] ) describes actually the behavior of the drivers that prefer to move faster than the statistically mean vehicle and whose readiness for risk is greatest .exactly this group of drivers ( platoons in speed ) govern the value of .there is , however , another characteristics of the driver behavior , it is the mean velocity chosen by the _ statistically mean _ driver taking into account also the danger resulting from the frequent lane changes by the `` fast '' drivers .following typical assumptions the velocity as a function of is considered to be decreasing : where is the upper limit vehicle density on road . in general, the dependence of on should be increasing for small values of the vehicle density , , because in this case the lane - changing makes no substantial danger to traffic and practically all the drives can pass by vehicles moving at lower speed without risk .by contrast , when the vehicle density is sufficiently high , , the lane - changing is due to the car motion of the most `` impatient '' drivers whose behavior makes an additional danger to the main part of other drivers and the velocity has to decrease as the order parameter increases . for certain intermediate values of the vehicle density , , this dependence is to be weak as well as near the boundary points , so : then the governing equation ( [ 2.1 ] ) takes the form : \label{2.4}\ ] ] and the condition specifies the steady state dependence of the order parameter on the vehicle density .let us , now , study properties and stability of this steady state solution . from eq .( [ 2.4 ] ) we get as mentioned above , the value of is solely due to the danger during changing lanes , so this term can be ignored until the vehicle density becomes sufficiently high .thus , in a certain region the derivative by virtue of ( [ 2.3 ] ) and ( [ intro:3 ] ) and the function is increasing or decreasing for or , respectively .this statement follows directly from the relation . for long - wave perturbations of the vehicle distribution on a highway the density be treated as a constant .so , according to the governing equation ( [ 2.4 ] ) , the steady - state traffic flow is unstable if .due to ( [ 2.2 ] ) and ( [ 2.3a ] ) the first term in the expression for in ( [ 2.7 ] ) is dominant in the vicinity of the lines and , thus , in these regions the curve is increasing and the steady state traffic flow is stable .for the value , inequality ( [ 2.3a ] ) , and , thereby , the region corresponds to the stable vehicle motion .however , for there can be an interval of the order parameter where the derivative changes the sign and the vehicle motion becomes unstable .therefore , as the car density grows causing the increase of the order parameter it can go into the instability region wherein . 
under these conditionsthe curve is to look like `` s '' ( fig .[ f6]a ) and its decreasing branch corresponds to the unstable vehicle flow .the lower increasing branch matches the free - flow state , whereas the upper one should be related to the synchronized phase because it is characterized by the order parameter coming to unity .-plane and the form of the curve displaying the dependence of the order parameter on the vehicle density . ]the obtained dependence actually describes the first order phase transition in the vehicle motion . indeed , when increasing the car density exceeds the value the free flow becomes absolutely unstable and the synchronized mode forms through a sharp jump in the order parameter . if , however , after that the car density decreases the synchronized mode will persist until the car density attains the value .it is a typical hysteresis and the region corresponds to the metastable phases of traffic flow .it should be noted that the stated approach to the description of the phase transition `` free flow synchronized mode '' is rather similar to the hypothesis by kerner about `` z''-like dependence of the overtaking probability on the car density that can cause this phase transition .let us , now , discuss a possible form of the fundamental diagram showing ] .[ f7]a displays the dependence of the mean vehicle velocity on the density for the fixed limit values of the order parameter or 1 . for small values of curves practically coincide with each other .as the vehicle density grows and until it comes close to the critical value where the lane change danger becomes substantial , the velocity practically does not depend on .so at the point at which the curves and meet each other the former curve , , is to exhibit sufficiently sharp decrease in comparison with the latter one .therefore , on one hand , the function has to be decreasing for . on the other hand , at the point for the effect of the lane change danger is not extremely strong , it only makes the lane change ineffective , ( compare ( [ 2.3a ] ) ) .so it is reasonable to assume the function increasing neat the point . under the adopted assumptions the relative arrangement of the curves , demonstrated in fig .[ f7]b , and fig .[ f7]c shows the fundamental diagram of traffic flow resulting from fig .[ f6 ] and fig .[ f7]b . ) and the vehicle flux ( ) vs. the vehicle density for the limit values of the order parameter as well as the resulting fundamental diagram ( ) . ]the developed model predicts also the same type phase transition for large values of the order parameter .in fact , in an extremely dense traffic flow changing lanes is sufficiently dangerous and the function describing the driver behavior is to depend strongly on the vehicle density as .in addition , the vehicle motion becomes slow . under such conditions the former term in the expression for in ( [ 2.7 ] ) should be dominant and , so , and the stable vehicle motion corresponding to is characterized by the decreasing dependence of the order parameter on the vehicle density for .therefore , as the vehicle density increases the curve can again go into the instability region ( in the -plane ) , which has to give rise to a jump from the synchronized mode to a jam .the latter matches small values of the order parameter ( fig . 
[ f6]b ) , so , it should comprise the vehicle flows along different lane where lane changing is depressed , making them practically independent of one another .we have introduced an additional state variable of the traffic flow , the order parameter , that accounts for internal correlations in the vehicle motion caused by the lane changing .since such correlations are due to the `` many - body '' effects in the car interaction the order parameter is regarded as an independent state variable . keeping in mind general properties of the driver behaviorwe have written the governing equation for this variable .it turns out that in this way such characteristic properties of the traffic flow instability as the sequence of the phase transitions `` free flow synchronized motion jam '' can be described without additional assumptions .moreover , in this model both the phase transitions are of the first order and exhibits hysteresis . besides , the synchronized mode corresponds to highly correlated vehicle flows along different lanes , , whereas in the free flow and the jam these correlations are depressed , .so , the jam phase actually comprises mutually independent car flows along different lanes .kerner and h. rehborn , phys . rev .* e * * 53 * , r4275 ( 1996 ) .b.s . kerner , phys .* 81 * , 3797 ( 1998 ) .b.s . kerner and h. rehborn , phys .* e * * 53 * , r1297 ( 1996 ) .b.s . kerner and h. rehborn , phys .lett . * 79 * , 4030 ( 1997 ) .b.s . kerner , physics world , august , 25 ( 1999 ) b.s .kerner , in : _ transportation and traffic theory _, ed . by a. ceder ( pergamon , amsterdam , 1999 ) , p. 147 .d. helbing , _ verkehrsdynamik _ ( springer - verlag , berlin , 1997 ) .d. helbing and m. treiber , phys .* 81 * , 3042 ( 1998 ) .m. treiber , a. hennecke , and d. helbing , phys . rev .* e * , * 59 * , 239 ( 1999 ) . v. shvetsov and d. helbing , phys . rev .* e * , * 59 * , 6328 ( 1999 ) .a. klar and r. wegener , siam j. appl ., * 59 * , 983 , 1002 ( 1999 ) .lee , d. kim , and m.y .choi , in : _ traffic and granular flow97 _ , ed .m. schreckenberg and d.e .wolf ( springer - verlag , singapore , 1998 ) , p. 433 .t. nagatani , physica * a 264 * , 581 ( 1999 ) .t. nagatani , phys . rev . * e * , * 60 * , 1535 ( 1999 ) .l. neubert , l. santen , a. schadschneider , and m. schreckenberg , e - print : cond - mat/9905216 ( 1999 ) .r. mahnke and n. pieret , phys . rev . * e * , * 56 * , 2666(1997 ) .r. mahnke and j. kaupus , phys . rev . *e * , * 59 * , 117 ( 1999 ) .
|
we discuss a phenomenological approach to the description of unstable vehicle motion on multilane highways that could explain in a simple way such observed self - organizing phenomena as the sequence of the phase transitions `` free flow synchronized motion jam '' and the hysteresis in them . we introduce a new variable called the order parameter that accounts for possible correlations in the vehicle motion in different lanes . so , it is principally due to `` many - body '' effects in the car interaction , in contrast to such variables as the mean car density and velocity , which are actually the zeroth and first moments of the `` one - particle '' distribution function . therefore , we regard the order parameter as an additional independent state variable of traffic flow and formulate the corresponding evolution equation governing the lane - changing rate . in this context we analyze the instability of homogeneous traffic flow manifesting itself in both of these phase transitions and endowing them with hysteresis . besides , the jam state is characterized by the vehicle flows in different lanes being independent of one another .
|
the investigation of open quantum systems from various different perspectives has been subject of intense research in recent years motivated by fundamental questions , and also due to their crucial role in the realization of quantum information protocols in real world situations .one interesting approach to address open quantum systems is through the information flow among constituents of composite quantum systems , or in particular , to explore the exchange of information between the system of interest and its surrounding environment . from the point of view of memory effects, the dynamical quantum maps are usually divided in two groups , namely , markovian and non - markovian maps .memoryless processes are often recognized as markovian , where the information is expected to monotonically flow from the system to the environment . on the other hand ,it is rather natural to assume that the backflow of information from the environment to the system is connected to the presence of memory effects , because in these cases the future states of the system may depend on its past states as a result of the inverse exchange of information .quantum markovian maps are traditionally defined as the ones obtained from the solutions of lindblad type master equations , which can be described by quantum dynamical semigroups .thus , manifestation of memory effects in the form of recoherence and blackflow of information has been associated to the violation of the semigroup property . however , for such memory effects to emerge , failure to satisfy the semigroup property is not sufficient .the quantum map should also violate another property called divisibility .recently , there has been an ever increasing interest in the non - markovian nature of quantum processes and quantifying their degree of non - markovianity using several distinct criteria .whereas some authors have directly adopted the property of divisibility as the defining feature of quantum markovian processes , others have employed different means to identify memory effects , which are not exactly equivalent but closely related to divisibility approach .in fact , unlike its classical counterpart , there is no universal definition of non - markovianity in the quantum domain , and different measures do not coincide in general .yet , it is reasonable to believe that conceptually different measures capture complementary aspects of the same phenomenon .one of the most widely studied and significant quantifiers of the degree of non - markovianity has been proposed by breuer , laine and piilo ( blp ) . rather than defining non - markovianity based on the violation of divisibility , the blp measure intends to determine the amount of non - markovianity of a quantum process by checking the trace distance between two arbitrary states of the open system during the dynamics , which in fact quantifies the probability of successfully distinguishing these two states . 
considering that the ability of distinguishing two objects is in a sense related to how much information we have about them, it is claimed that the monotonic reduction of distinguishability can be directly interpreted as a one - way flow of information from the system to the environment , which defines a markovian quantum process .in contrast , if there is a temporary increase of trace distance throughout the time evolution of the system , then the quantum map is said to be non - markovian due to the backflow of information from the environment to the system .another popular approach to quantify the degree of non - markovianity is based on a well known property of local completely positive trace - preserving ( cptp ) maps , that is , on their inability to increase entanglement between a system and an isolated ancilla .rivas , huelga and plenio ( rhp ) have introduced a witness for non - divisibility of quantum maps by making use of the monotonic behavior of entanglement measures under cptp maps .although this quantity does not provide a necessary and sufficient condition for divisibility , it can be adopted as a measure of non - markovianity on its own since it has been shown that it encapsulates the information exchange between the system and environment through the concept of accessible information . in this case , a non - markovian process is characterized by a temporary increase of entanglement between the system and the isolated ancilla , which is an indicator of the backflow of information from the environment to the system . in the same spirit , luo , fu , and song ( lfs ) have proposed a similar quantity that relies on the mutual information between a system and an arbitrary ancilla instead of entanglement . despite being easier to manage than entanglement - based measure mathematically , especially for high dimensional systems, this quantity does not yet have an interpretation directly related to the flow of information between the system and the environment . in this work ,our aim is twofold .first , using the language of the decoherence program , where a system is coupled to a measurement apparatus , which in turn interacts with an environment , we introduce a simple scheme to demonstrate how quantum loss can be utilized to describe the backflow of information from the environment to the system .this approach is shown to be exactly equivalent to the lfs measure of non - markovianity and thus gives it an interpretation in terms of information exchange between the system and the environment .furthermore , we reveal how the entanglement and the mutual information based measures of non - markovianity are conceptually related to each other through the connection between quantum loss and accessible information .second , we extend the results of ref . 
in several new directions .in particular , we investigate the role of both accessible information and quantum loss in quantifying non - markovian behavior , via conceptually different means of information flow , in two paradigmatic models .we find out that , unlike the blp measure , both lfs and rhp measures can capture the dynamical information in the non - unital aspect of the dynamics and thus can successfully identify non - markovian behavior in corresponding models .lastly , for a two - level system undergoing relaxation at zero temperature , we experimentally demonstrate the connection between quantum loss and quantum mutual information performing a quantum simulation with an all optical setup that allows full access to the environmental degrees of freedom . this paper is organized as follows . in sec.iiwe introduce the definitions of the considered non - markovianity measures , and discuss how they are related to the flow of information between the system and the environment .we also present a clear conceptual connection between the quantum loss and the accessible information in quantifying information exchange . in sec .iii , using the rhp , lfs and blp measures , we examine two examples of paradigmatic quantum channels theoretically , and present the experiment and its results .section iv includes the discussion and summary of our findings .let us first define the type of quantum processes that we consider .we assume that a dynamical quantum map is described by a time - local master equation of the lindblad form with the lindbladian super - operator given as + \sum_{i}\gamma_{i}\left[a_i\rho a_i^\dagger-\frac{1}{2}\left\{a_i^\dagger a_i,\rho\right\}\right],\nonumber\end{aligned}\ ] ] where is the hamiltonian of the system , s are the decay rates , and s are the lindblad operators describing the type of noise affecting the system. provided s and s are time independent , and also all s are positive , eq . ( [ master ] ) leads to a dynamical semigroup of cptp maps ] , which takes the state at time to the state at time .such markovian maps have a fundamental property that they satisfy the condition of divisibility .in particular , a cptp map can be expressed as a composition of two other cptp maps as with ] .hence , the trace distance can in fact be thought as a quantifier of the distinguishability of two states , variation of which during the evolution can be interpreted as an information exchange between the system and the environment . in particular , a monotonic loss of distinguishability between and during the dynamics , i.e. , indicates that information flows from the system to the environment at all times and thus the process is markovian . on the other hand, means that there exists a backlow of information from the environment to the system , giving rise to a non - markovian process .based on this criterion , the blp measure is defined as where the maximum is taken over all possible pairs of initial states and .we should also note that the above equation can also be equivalently expressed as ,\label{sblpm}\ ] ] where time intervals correspond to the regions where , and maximization is done over all pairs of initial states . 
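as a concrete illustration of the quantity being maximized , the sketch below evolves one fixed pair of single - qubit states through an amplitude damping channel with a non - monotonic damping strength and sums the positive increments of their trace distance ; the chosen pair , the channel and its time dependence are assumptions made for the example and are not the optimal pair or the specific models studied later in the paper .

```python
# illustrative sketch (assumed model): trace distance D = (1/2)||rho1 - rho2||_1 for a
# pair of qubit states sent through an amplitude damping channel whose damping strength
# p(t) is non-monotonic; summing the positive increments of D mimics the structure of
# the BLP quantity for this single (non-optimized) pair of initial states.
import numpy as np

def trace_distance(r1, r2):
    ev = np.linalg.eigvalsh(r1 - r2)
    return 0.5 * np.sum(np.abs(ev))

def amplitude_damping(rho, p):
    k0 = np.array([[1, 0], [0, np.sqrt(1 - p)]], dtype=complex)
    k1 = np.array([[0, np.sqrt(p)], [0, 0]], dtype=complex)
    return k0 @ rho @ k0.conj().T + k1 @ rho @ k1.conj().T

rho1 = np.array([[1, 0], [0, 0]], dtype=complex)            # |0><0|
rho2 = 0.5 * np.array([[1, 1], [1, 1]], dtype=complex)      # |+><+|

times = np.linspace(0, 10, 2001)
p = 0.5 * (1 - np.cos(times) * np.exp(-0.2 * times))        # assumed non-monotonic p(t)
D = np.array([trace_distance(amplitude_damping(rho1, pt),
                             amplitude_damping(rho2, pt)) for pt in p])

dD = np.diff(D)
blp_like = dD[dD > 0].sum()   # > 0 signals revivals of distinguishability for this pair
print(f"sum of positive trace-distance increments: {blp_like:.4f}")
```

the full blp measure would additionally maximize this quantity over all pairs of initial states , which is the optimization referred to next .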
even though this optimization is a considerably hard task , it is possible to simplify the procedure in several ways by reducing the number of possible optimizing pairs .furthermore , considering the fact that cptp maps are contractions for the trace distance , one can show that the distinguishability between and is guaranteed to monotonically decrease for all divisible processes . thus , according to the blp measure , all divisible dynamical maps define markovian processes .nonetheless , the inverse statement is not necessarily true , that is , there exist non - divisible maps for which the trace distance does not show any temporary revival at all .in fact , the trace distance is actually a witness for non - divisibility .in addition , it has been recently shown that the trace distance is not able to capture the dynamical information in the non - unital aspect of quantum dynamics .consequently , it fails to identify the non - markovianity originated from the non - unital part of the transformation . in this section ,we introduce a new method of quantifying non - markovianity through the flow of information between the system and the environment , using a conceptually different approach than the blp measure .our discussion relies on the decoherence program , where a quantum system is coupled to a measurement apparatus , which in turn directly interacts with an environment .let us first consider a quantum system that is initially correlated with the apparatus .we assume that the bipartite system starts as a pure state , and the environment only affects the state of the apparatus . as a result of the interaction ,there emerges an amount of correlation among the individual parts of the closed tripartite system , and thus the environment acquires information about the system by means of the interaction with the apparatus .this setting is graphically sketched in fig .[ fig1 ] , where the system evolves trivially while the apparatus is in a direct unitary interaction with the environment .the final state of the composite tripartite system is given by where tilde denotes the state of the subsystems after the time evolution .the resulting state of each part of the composite system can be obtained by tracing over the remaining parts .particularly , if we discard the environment , we obtain the bipartite state of the system and the apparatus as which corresponds to applying a general cptp map to the apparatus while leaving the state of the system untouched . , and an entangled pure state .as the system evolves free of any direct interaction , the apparatus is interacting with the environment .,scaledwidth=47.0% ] let us now introduce some preliminary concepts that will be relevant to our treatment of the information exchange between the system and the environment . for a bipartite system , while the conditional quantum entropy is defined as the quantum mutual information is given by where denotes the von - neumann entropy of the considered systems , characterizing the uncertainty about them .provided that we have a quantum system in a pure state , we obtain and , as a result , .moving to tripartite system , we can define the conditional quantum entropy of and conditionally on z as whereas the quantum ternary mutual information reads it is important to note that , and the quantum ternary mutual information vanishes for a pure tripartite state , i.e. , . in the following , we adopt the terminology introduced in ref . . 
making use of the analogy with the classical information theory, we can make an entropy diagram for the composite system of to show how each part of the tripartite system share and exchange information among its subsystems .for this purpose , we define three quantities that will be very useful to describe the information dynamics of the tripartite system , namely , the quantum mutual information , the quantum loss , and the quantum noise : in fig .[ fig2 ] , we display the entropy diagram for using the above quantities . the quantum mutual information quantifies the amount of residual mutual entropy between the system and the apparatus after the decoherence occurs . on the other hand ,the quantum loss represents the amount of information that is getting lost in the environment .actually , among these three quantities , only and are relevant to us , since information exchange between the system and the environment is characterized by the balance between them .it is very important to emphasize that the equality holds at all times during the dynamics .that is , twice the initial entropy of the system will be redistributed to the apparatus and the environment as decoherence takes place . in other words , the total amount of information inside the closed thick red line in fig .[ fig2 ] will remain invariant .indeed , as the composite tripartite system evolves in time , the environment will learn about the system , and the quantum mutual information will start to decrease as a result of its monotonicity under local cptp maps .this will be directly reflected as an increase in the quantum loss , which is naturally zero initially , as can be observed from eq .( [ il ] ) .however , it is also possible that might temporarily revive during dynamics , which will give rise to a temporary decrease in . , the apparatus , and the environment , before and after the interaction .the amount of information will stay the same inside the area enclosed by thick red curves , i.e. , , where is the mutual information and is the quantum loss.,scaledwidth=48.0% ] regarding non - markovianity as a phenomenon that is intrinsically related to the backflow of information from the environment to the system , it is reasonable to expect the quantum loss to monotonically increase for markovian dynamics , since it is an entropic measure of information that the environment acquires about the system .therefore , one can define non - markovian processes as the ones for which there is a temporary loss of as the system evolves in time , i.e. , since this is an indication that the information flows back to the system from the environment .we should clarify that when we say information flow from the system to the environment or vice versa , we do not actually mean that total information content of the system changes , since it is constant at all times due to the fact that the system does not directly interact with the environment .rather , we mean that information is being redistributed in the tripartite composite system in such a way that the amount of information that the system shares with the environment increases or decreases , as depicted in fig .[ fig2 ] . 
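this balance can be checked directly : for a pure system - apparatus state whose apparatus half passes through a channel , the quantum loss is the drop of their mutual information from its initial value of twice the system entropy . the sketch below does this for a bell state with amplitude damping of strength p acting on the apparatus ; the initial state and the channel are assumptions chosen for the example .

```python
# illustrative sketch (assumed model): quantum loss L = I_{S:A}(0) - I_{S:A}(t) for a
# Bell state of system+apparatus, with amplitude damping of strength p applied to the
# apparatus qubit only; I_{S:A} = S(rho_S) + S(rho_A) - S(rho_SA), entropies in bits.
import numpy as np

def vn_entropy(rho):
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log2(ev)))

def partial_trace(rho, keep):          # two qubits; keep = 0 (system) or 1 (apparatus)
    r = rho.reshape(2, 2, 2, 2)
    return np.trace(r, axis1=1, axis2=3) if keep == 0 else np.trace(r, axis1=0, axis2=2)

def damp_apparatus(rho, p):
    k0 = np.array([[1, 0], [0, np.sqrt(1 - p)]], dtype=complex)
    k1 = np.array([[0, np.sqrt(p)], [0, 0]], dtype=complex)
    out = np.zeros_like(rho)
    for k in (k0, k1):
        op = np.kron(np.eye(2), k)     # the channel acts on the apparatus factor only
        out += op @ rho @ op.conj().T
    return out

def mutual_info(rho):
    return vn_entropy(partial_trace(rho, 0)) + vn_entropy(partial_trace(rho, 1)) - vn_entropy(rho)

bell = np.array([1, 0, 0, 1], dtype=complex).reshape(4, 1) / np.sqrt(2)
rho0 = bell @ bell.conj().T
I0 = mutual_info(rho0)                 # equals 2 bits for a Bell state, i.e. twice S(rho_S)
for p in (0.0, 0.25, 0.5, 0.75, 1.0):
    It = mutual_info(damp_apparatus(rho0, p))
    print(f"p = {p:.2f}:  I(S:A) = {It:.3f},  quantum loss L = {I0 - It:.3f}")
```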
at first sight, one might think that evaluation of the quantum loss requires the knowledge of the state of the environment , which typically consists of infinite number of degrees of freedom , and is virtually impossible to access in real world situations .however , we actually do not need to directly access the environment to be able to calculate .it can be explicitly written without dependence on the environment , an interesting point is that the quantum loss can also be rewritten as a difference of the initial and final mutual information shared by the system and the apparatus , that is , as can be easily seen from eq .( [ il ] ) , since the initial mutual information is twice the initial entropy of the system .taking the time derivative of this simple equation , we find that recalling that the lfs measure of non - markovianity is based on the rate of change of the quantum mutual information shared by the system and the apparatus , we immediately realize that in fact the quantum loss approach to non - markovianity is exactly equivalent to the formulation of the lfs measure .in particular , the lfs measure captures the non - markovian behavior through a temporary increase of the mutual information of the bipartite system .mathematically , the lfs measure can be written as where the maximization is evaluated over all possible pure initial states of the bipartite system .thus , the quantum loss gives an interpretation to the lfs measure in terms of information exchange between the system and the environment , since any temporary loss of will be observed as a temporary revival of by the same amount . note that it directly follows from the composition law of divisibility given in eq .( [ div ] ) that the lfs measure vanishes for all divisible quantum processes due to the monotonicity of the mutual information under cptp maps .we should still keep in mind that the inverse statement is not always true .some non - divisible maps do not increase the mutual information , or equivalently , decrease the quantum loss at all . and the inaccessible information in the entropy diagram of the tripartite system after the interaction of the apparatus with the environment .,scaledwidth=30.0% ] next , we introduce another quantity known as the accessible information , which quantifies the maximum amount of classical information that can be extracted about the system by locally observing the environment , , \label{ai}\ ] ] where defines a complete positive operator valued measure ( povm ) acting on the state of the environment , and is the remaining state of subsystem after obtaining the outcome with the probability . considering the fact that the quantum loss is nothing but the quantum mutual information between the system and the final state of the environment after decoherence , , it is possible to express it as where ( known as the quantum discord in literature as a genuine measure of non - classicality ) quantifies the part of the quantum mutual information that the environment can not access about the system locally during the decoherence process .in other words , despite the system and the environment have the information in common , we can only access a fraction of it , namely , by just observing the state of the environment . in fig .[ fig3 ] , we display the accessible and and inaccessible information in the entropy diagram of after the interaction starts to take place . 
returning to the discussion of non - markovianity , one might argue that it is also quite reasonable to define non - markovianity in terms of information flow using the accessible information instead of the quantum loss .the reason is that the accessible information measures the fraction of information that the environment can actually access about the system , rather than the total amount of information they share as quantified by the quantum loss .similarly to the case of , we do not need any information about the state of the environment to be able to evaluate the accessible information .remembering that the environment is initially in a pure state , that is , we consider a zero temperature reservoir , and also that the tripartite state stays pure at all times , the koashi - winter relation implies that where denotes the entanglement of formation shared by the system and the apparatus , which is a resource - based measure quantifying the cost of generating a given state by means of maximally entangled resources .it is given by where with being the eigenvalues of the product matrix in decreasing order .here , , is the pauli spin operator in y - direction , and is obtained from via complex conjugation . since the system does not interact directly with the environment , we know that its state is invariant in time throughout the dynamics , then the time derivative of the koashi - winter relation given in eq .( [ kw ] ) leads to a simple relation between the rate of changes of the entanglement of formation and the accessible information , this relation immediately implies that any temporary decrease in will be reflected as a temporary increase of .thus , the non - markovianity measure based on the rate of change of the accessible information can also be expressed in terms of the rate of change of the entanglement of formation between the system and the apparatus . at this point, we recall that the basis of the entanglement - based rhp measure of non - markovianity is the monotonic behavior of entanglement measures under local cptp maps . in particular , according to the rhp criterion , any temporary revival of entanglement is an indication of the non - markovian nature of a quantum process .the rhp measure depends on the rate of change of the entanglement shared by the system and the apparatus , and thus can be written as where the maximization is evaluated over all possible pure initial states of the bipartite system . with the help of eq .( [ ej ] ) , it is now straightforward to observe that the entanglement - based rhp measure of non - markovianity is indeed exactly equivalent to the accessible information approach . in other words ,when entanglement of formation is chosen as a measure of entanglement , the rhp measure quantifies the total amount of decrease in the information that the environment can access about the system .the composition law of divisibility given in eq .( [ div ] ) implies that the rhp measure vanishes for all divisible quantum processes , just as in the case of the lfs measure , and again the inverse statement might not be always true since it is possible for some non - divisible maps not to increase entanglement .it becomes clear with our interpretation that the lfs and rhp measures of non - markovianity , despite being based on different physical quantities , are closely related to each other conceptually when the flow of information between the system and the environment is considered . 
from this angle ,the only difference between them is the local accessibility of the information , that environment and the system have in common , by observing the environment .especially , investigating the problem of information exchange from the point of view of the decoherence program , we demonstrate that both the mutual information and entanglement are relevant quantities for quantifying non - markovianity as a backflow of information from the environment to the system . .( c ) implementation of the amplitude damping channel for the photon polarization , condition .,scaledwidth=43.0% ] getting back to the optimization of the lfs and the rhp measures , we should emphasize that it is in fact not necessary to perform the optimization over all variables appearing in the pure bipartite density matrix of .we can actually simplify the optimization procedure for both lfs and rhp measures without loss of any generality as follows .for instance , in case of a pure two - qubit system , one can consider a general mixed single qubit density matrix for the apparatus , having only three real parameters , and then purify it to obtain the two - qubit pure state of the bipartite system .it is known that all purifications of the apparatus can be obtained by applying unitary operations locally on the system .also note that the entanglement and quantum mutual information of remain invariant under these operations .additionally , taking into account that the system does not directly interact with the environment , the simplification is justified and three real variables are sufficient to perform the optimization . besides , note that we have assumed the environment to be initially in a pure state .this assumption does not hold in general , in particular , when we consider a finite temperature environment . in this case , the initial state of the environment is mixed , and the koashi - winter relation given in eq .( [ kw ] ) becomes an inequality .however , we can purify the state of the environment by extending the hilbert space with a complementary subsystem , without loss of generality .consequently , we can again use the koashi - winter relation , which gives .we now see that the entanglement between the system and the apparatus is still connected to the information that the bipartite system can access about the system .it is rather straightforward to see that a similar treatment can be done for the case of the quantum loss and the lfs measure via the extension of the environment with an extra purifying system .in this section , we discuss the similarities and differences between the distinct ways of quantifying non - markovianity based on information exchange between the system and the environment , considering two relaxation models for open quantum systems .first , we examine the zero temperature relaxation channel , for which we present an all optical experimental simulation that realizes the required scenario to investigate the information flow in terms of the quantum loss and the accessible information .second , we theoretically examine the generalized amplitude damping channel .we show that in this context there exist differences between the accessible information and quantum loss approaches . 
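before turning to the explicit models , the wootters construction quoted above can be written out directly for two qubits ; the sketch below evaluates the entanglement of formation of a bell state of system and apparatus after amplitude damping of the apparatus , whose decrease , via the koashi - winter relation , mirrors an increase of the information accessible to the environment . the state and channel are again illustrative assumptions .

```python
# illustrative sketch: Wootters concurrence and entanglement of formation for a
# two-qubit state, applied here to a Bell state whose second (apparatus) qubit has
# passed through an amplitude damping channel of strength p (assumed example model).
import numpy as np

def binary_entropy(x):
    if x <= 0.0 or x >= 1.0:
        return 0.0
    return -x * np.log2(x) - (1 - x) * np.log2(1 - x)

def concurrence(rho):
    sy = np.array([[0, -1j], [1j, 0]])
    R = rho @ np.kron(sy, sy) @ rho.conj() @ np.kron(sy, sy)
    lam = np.sort(np.sqrt(np.abs(np.linalg.eigvals(R))))[::-1]   # abs() guards tiny negatives
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

def entanglement_of_formation(rho):
    c = concurrence(rho)
    return binary_entropy(0.5 * (1.0 + np.sqrt(max(0.0, 1.0 - c**2))))

def damp_apparatus(rho, p):
    k0 = np.array([[1, 0], [0, np.sqrt(1 - p)]], dtype=complex)
    k1 = np.array([[0, np.sqrt(p)], [0, 0]], dtype=complex)
    return sum(np.kron(np.eye(2), k) @ rho @ np.kron(np.eye(2), k).conj().T for k in (k0, k1))

bell = np.array([1, 0, 0, 1], dtype=complex).reshape(4, 1) / np.sqrt(2)
rho0 = bell @ bell.conj().T
for p in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"p = {p:.2f}:  E_F(S:A) = {entanglement_of_formation(damp_apparatus(rho0, p)):.3f}")
```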
herewe treat the apparatus as a two - level quantum system interacting with a zero temperature relaxation environment described by a collection of bosonic oscillators .the corresponding interaction hamiltonian is given by where denote the raising and lowering operators of the apparatus having the transition frequency , and .the annihilation and creation operators of the environment are represented by and , respectively , with the frequencies .we assume that the environment has an effective spectral density of the form ] and $ ] , and the quantum map represented by the above set of operators is unital , that is , if and only if or .similarly to what was done in , to construct a quantum process , we choose the parameters as and , where is a real number . before comparing the different approaches to monitor the information exchange between and ,let us first introduce another quantity which has been introduced in , based on the choi - jamiolkowski isomorphism , where is the maximally entangled state of the system and the apparatus in the considered dimension . in fact , the condition is a necessary and sufficient criterion for non - divisibility of dynamical quantum maps .moreover , one can also define a measure of non - divisibility using by summing it over time during the time evolution of the open system .therefore , with the help of eq .( [ gdiv ] ) , we can investigate relation of information exchange , quantified through the quantum loss and the accessible information , to the regions of non - divisibility where the intermediate maps are not cptp . for the generalized amplitude damping channel , it turns out that ,\ ] ] where . while we show the graphs of the quantum loss ( solid blue line ) and the quantum mutual information ( dashed red line ) in fig .[ fig7]a , we display the accessible information ( solid blue line ) and the entanglement of formation ( dashed red line ) in fig .[ fig7]b . 
in all the plots, the initial state is taken as a maximally entangled one , and we set .the regions of non - divisibility are displayed by the intervals where is positive in fig .we note that , as expected due to eq .( [ dli ] ) and eq .( [ ej ] ) , and , and and behave in an exact opposite manner .it is remarkable that , unlike the trace distance , both approaches based on information flow through entropic quantities reveal an exchange of information between the system and the environment .however , comparing the regions with to the intervals where and temporarily decrease , we see that they do always not coincide , which is in contrast to the zero temperature relaxation model in the previous section .this model demonstrates an explicit example of how the occurrence of non - divisibility throughout the dynamics of the open system might not always imply flow of information from the environment back to the system , even when the information exchange is measured via the quantum loss and the accessible information .furthermore , another interesting observation is that although the quantum loss monotonically increases until in fig .[ fig7]a , the accessible information decreases temporarily starting from in fig .this clearly demonstrates that , despite their conceptual similarities , and do not have to agree on the backflow of information from the environment to the system , and can grow or decay independent of each other .nonetheless , we note for the considered model that the accessible information diminishes for some time in all intervals where becomes positive .we have presented a detailed investigation of the relation between the non - markovianity in quantum mechanics and the flow of information between the system and the environment .our treatment is based on the approach of assisted knowledge where we consider a principal system that is initially correlated with its measurement apparatus .although there is no direct interaction between the system and the environment , there still exists an exchange of information among the constituents of the tripartite system , due to the fact that the apparatus interacts with the environment .centered on this scenario , we have introduced a new way of understanding the information exchange between the system and the environment through the quantum loss , which quantifies the amount of residual information that the environment and the system have in common after the interaction .we have also shown how measuring the information flow and thus non - markovianity via quantum loss is in fact equivalent to utilizing the lfs measure of non - markovianity .this equivalence gives a straightforward information theoretic interpretation to the lfs measure .moreover , recognizing that using the entanglement - based rhp measure is equivalent to the accessible information approach , we have provided an alternative way of quantifying the exchange of information between the system and the environment .more important , we have also revealed a clear connection between two apparently unrelated measures of non - markovianity , namely the lfs and the rhp measures , by making use of the link between the quantum loss and the accessible information . 
in particular , the only conceptual difference between these two quantities lies on the local accessibility of the information shared between the system and the environment , when local observations are performed on the environment .we have studied the information exchange in terms of the quantum loss and the accessible information in two paradigmatic models , namely for the zero and finite temperature relaxation processes .for the zero temperature case , we have demonstrated that both the quantum loss and the accessible information are able to capture the flow of information in a similar way .moreover , we have provided an experimental simulation of this process using an all optical setup that allows full access to the environment .our experimental results are shown to be in good agreement with the theoretical predictions .for the finite temperature relaxation model , we have explored the similarities and the differences of measuring information flow in terms of the trace distance , the quantum loss and the accessible information .specifically , we have shown that , while the trace distance fails to capture the inverse flow of information originated from the non - unital part of the dynamical quantum map , both the quantum loss and the accessible information can successfully identify the exchange of information between the system and the environment in this case . on the other hand , we have also found that , despite their conceptual similarities , it is possible for the quantum loss ( the lfs measure ) and the accessible information ( the rhp measure ) to disagree on the flow of information in certain time intervals during the time evolution .breuer and f. petruccione , _ the theory of open quantum systems _( oxford university press , oxford , 2007 ) r. alicki and k. lendi , _ quantum dynamical semigroups and applications _ ( springer , berlin , 2007 ) .rivas and s. f. huelga , _ open quantum systems , an intorduction _( springer , heidelberg , 2012 ) .j. piilo , s. maniscalco , k. harkonen , and k .- a .suominen , phys .. lett . * 100 * , 180402 ( 2008 ) .r. vasile , s. olivares , m. g. a. paris , and s. maniscalco , phys .a * 83 * , 042321 ( 2011 ) ; s. f. huelga , .rivas , and m. b. plenio , phys .lett . * 108 * , 160402 ( 2012 ) ; a. w. chin , s. f. huelga , and m. b. plenio , phys .rev . lett . * 109 * , 233601 ( 2012 ) ; b. bylicka , d. chruciski , s. maniscalco , sci .textbf4 , 5720 ( 2014 ) ; e .-laine , h .-breuer , and j. piilo , sci .rep . * 4 * , 4620 ( 2014 ) .b. bellomo , r. lo franco , and g. compagno , phys .99 * , 160502 ( 2007 ) ; m.m .wolf , j. eisert , t.s .cubitt , j.i .cirac , phys .* 101 * , 150402 ( 2008 ) ; d. chruciski and a. kossakowski , phys .lett . * 104 * , 070406 ( 2010 ) ; b .- h .et al . _ , nat* 7 * , 931 ( 2011 ) ; d. chruciski , a. kossakowski , .rivas , phys .a * 83 * , 052128 ( 2011 ) ; b. vacchini _et al . _ , new j. phys . * 13 * , 093004 ( 2011 ) ; j .- s .et al . _ ,. lett . * 97 * , 10002 ( 2012 ) ; a. chiuri __ , sci . rep .* 2 * , 968 ( 2012 ) ; f. benatti , r. floreanini , and s. olivares , phys . lett .a * 376 * , 2951 ( 2012 ) ; a.m. souza _ et al . _ ,arxiv:1308.5761 ; j .- s .et al . _ , nat. commun . * 4 * , 2851 ( 2013 ) ; f. f. fanchini , g. karpat , l. k. castelano , and d. z. rossatto , phys .a * 88 * , 012105 ( 2013 ) ; r. lo franco , b. bellomo , s. maniscalco , and g. compagno , int .b * 27 * , 1345053 ( 2013 ) ; a. darrigo , g. benenti , r. lo franco , g. falci , and e. paladino , arxiv:1402.1948 ; d. chruciski and s. 
maniscalco , phys .lett . * 112 * , 120404 ( 2014 ) ; d. chruciski and f. a. wudarski , arxiv : 1408.1792 ; c. addis , b. bylicka , d. chruciski , and s. maniscalco , arxiv : 1402.4975 ; .rivas , s. f. huelga , and m. b. plenio , rep .. phys . * 77 * , 094001 ( 2014 ) ; p. haikka and s. maniscalco , arxiv : 1403.2156 .rivas , s. f. huelga , m. b. plenio , phys .lett . * 105 * , 050403 ( 2010 ) .s. c. hou , x. x. yi , s. x. yu , and c. h. oh , phys . rev .a * 83 * , 062112 ( 2011 ) .breuer , e .-laine , j. piilo , phys .rev . lett . * 103 * , 210401 ( 2009 ) .s. luo , s. fu , and h. song , phys .a * 86 * , 044101 ( 2012 ) .a. k. rajagopal , a. r. usha devi , and r. w. rendell , phys .a * 82 * , 042107 ( 2010 ) ; x .- m .lu , x. wang , c. p. sun , phys .a * 82 * , 042103 ( 2010 ) ; s. lorenzo , f. plastina , and m. paternostro , phys . rev . a * 88 * , 020102(r ) ( 2013 ) ; b. bylicka , d. chruciski , s. maniscalco , sci . rep . * 4 * , 5720 ( 2014 ) . m. b. plenio and s. virmani , quant .inf . comp .* 7 * , 1 ( 2007 ) .f. f. fanchini , g. karpat , b. akmak , l. k. castelano , g. h. aguilar , o. jimnez faras , s. p. walborn , p. h. souto riberio , and m. c. de oliveira .* 112 * , 210402 ( 2014 ) .w. h. zurek , phys .d * 24 * , 1516 ( 1981 ) ; w. h. zurek , _ ibid _ * 26 * , 1862 ( 1982 ) ; w. h. zurek , rev . mod* 75 * , 715 ( 2003 ) .n. j. cerf and c. adami , phys . rev .lett . * 79 * , 5194 ( 1994 ). o. j. faras _et al . _ ,* 109 * , 150403 ( 2012 ) . v. gorini , a. kossakowski , and e. c. g. sudarshan , j. math .* 17 * , 821 ( 1976 ) ; g. lindblad , commun .* 48 * , 119 ( 1976 ) .laine , j. piilo , and h .-breuer , phys .a * 81 * , 062115 ( 2010 ) ; h .-breuer , j. phys .b : at . mol .. phys . * 45 * , 154001 ( 2012 ) .j. liu , x .-lu , and x. wang , phys .a * 87 * , 042103 ( 2013 ) .l. henderson and v. vedral , j. phys .a * 34 * , 6899 ( 2001 ) .h. ollivier and w. h. zurek , phys .* 88 * , 017901 ( 2001 ) .m. koashi and a. winter , phys .a * 69 * , 022309 ( 2004 ) . c. h. bennett , d. p. divincenzo , j. a. smolin , and w. k. wootters , physa * 54 * , 3824 ( 1996 ) .p. g. kwiat , e. waks , a. g. white , i. applebaum , and p. h. eberhard , phys .a * 60 * , r773(r ) ( 1999 ) .
|
exchange of information between a quantum system and its surrounding environment plays a fundamental role in the study of the dynamics of open quantum systems . here we discuss the role of the information exchange in the non - markovian behavior of dynamical quantum processes following the decoherence approach , where we consider a quantum system that is initially correlated with its measurement apparatus , which in turn interacts with the environment . we introduce a new way of looking at the information exchange between the system and environment using the quantum loss , which is shown to be closely related to the measure of non - markovianity based on the quantum mutual information . we also extend the results of [ phys . rev . lett . 112 , 210402 ( 2014 ) ] by fanchini et al . in several directions , providing a more detailed investigation of the use of the accessible information for quantifying the backflow of information from the environment to the system . moreover , we reveal a clear conceptual relation between the entanglement and mutual information based measures of non - markovianity in terms of the quantum loss and accessible information . we compare different ways of studying the information flow in two theoretical examples . we also present experimental results on the investigation of the quantum loss and accessible information for a two - level system undergoing a zero temperature amplitude damping process . we use an optical approach that allows full access to the state of the environment .
|
nature is full of randomness , from the nucleus to whole galaxies , from inorganic matter to organisms , and from the domain of science and technology to human society . although the mechanisms of randomness differ from one field to another , the ways to describe them are similar . the stochastic differential equation ( sde ) is a good approach to describe such randomness . the earliest work on sdes was done to describe brownian motion in einstein 's famous paper and at the same time by smoluchowski . later , ito and stratonovich put sdes on a more solid theoretical foundation . in 1908 , a french physicist , paul langevin , developed an equation , thereafter called the langevin equation ( le ) , which incorporated a random force into the newton equation to describe brownian motion . the langevin equation is an approach to mechanics that uses simplified models and sdes to account for omitted degrees of freedom . many branches with rich content have been derived from it in the last 100 years . for example , reaction kinetics in chemistry , molecular motors and protein folding in biology , intracellular and intercellular calcium signaling , quantum brownian motion and stochastic quantization in physics , and even stock market fluctuations and crashes are all related to the langevin equation . the langevin equation plays an important role in modern science ; however , only a few of these equations can be solved analytically , thus it is necessary to develop a numerical algorithm which combines computational efficiency and accuracy . the general structure of the stochastic differential equation is where , are the deterministic part of the equations of motion , are the diffusion coefficients and are a set of independent gaussian random variables with correlation function to get a certain order algorithm for the sde , we can directly do the stochastic taylor expansion of eq.(1 ) to our desired accuracy . this method is very explicit and suits many cases of sdes ; however , since this expansion is too laborious to carry to high orders , we need to find a systematic pattern to overcome this difficulty . in this paper , we use the bicolour rooted tree method ( brt ) based on the stochastic taylor expansion to obtain a high order algorithm for sdes systematically . in the field of numerical methods for solving ordinary differential equations , j. c. butcher developed a rooted tree method which relates each term in the ordinary taylor expansion to a rooted tree . his method makes the laborious ordinary taylor expansion systematic in a heuristic way . then k. burrage and p. m. burrage extended the rooted - tree method to the bicolour rooted tree method , which relates each term in the stochastic taylor expansion to a bicolour rooted tree , for the sake of solving sdes . they give an explicit runge - kutta method of order 2.5 in their paper for the sde . in this paper , we further develop their works , propose a new type of bicolour rooted tree method , and apply it to the le case . because of the intricacy of numerical methods for general sdes , the order achieved heretofore is not greater than 2.5 . but for some special kinds of sde , for example the langevin equation , a high order algorithm can be obtained . hershkovitz has developed a fourth order algorithm for the le , which is based on the stochastic taylor expansion . in this paper , we use the brt method to improve the accuracy to order 7 in the deterministic part and order 4.5 in the stochastic part ( _ o_(7,4.5 ) ) .
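since the sdes in this paper are understood in the stratonovich sense , a convenient low - order reference scheme is the stochastic heun predictor - corrector method , sketched below for a scalar test equation ; the drift , diffusion and parameters are placeholder assumptions , and the scheme is far below the accuracy the brt construction aims at .

```python
# illustrative sketch: stochastic Heun scheme for the scalar Stratonovich SDE
#   dx = f(x) dt + g(x) o dW,
# used only as a low-order baseline; f, g and all parameters are assumed examples.
import numpy as np

def heun_sde(f, g, x0, dt, n_steps, rng):
    x = np.empty(n_steps + 1)
    x[0] = x0
    for n in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt))
        xp = x[n] + f(x[n]) * dt + g(x[n]) * dw                  # predictor (Euler)
        x[n + 1] = x[n] + 0.5 * (f(x[n]) + f(xp)) * dt \
                        + 0.5 * (g(x[n]) + g(xp)) * dw           # corrector
    return x

# example: Ornstein-Uhlenbeck-like process dx = -k x dt + s o dW (additive noise, so the
# Stratonovich and Ito forms coincide; the stationary variance should approach s^2/(2k))
k, s = 1.0, 0.5
rng = np.random.default_rng(0)
traj = heun_sde(lambda x: -k * x, lambda x: s, x0=1.0, dt=1e-3, n_steps=200_000, rng=rng)
print(f"empirical late-time variance: {traj[100_000:].var():.4f}  (theory {s**2 / (2*k):.4f})")
```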
in section 2, we briefly introduce the brt method and explore the relation between the terms in the stochastic taylor expansion and the bicolour rooted trees .we find that the stochastic taylor expansion is just equal to the sum of all the non - isomorphic bicolour rooted trees . in section 3 , due to the structure of le , we can use the brt method to obtain our algorithm _o_(7,4.5 ) for the le . in section 4 ,we use two examples to verify the validity and demonstrate the versatility of our algorithm .the first one is the energy relaxation in the double well .we compare our results with the previous results obtained by other algorithms and show the convergence of these different algorithms .the second one we present an algorithm for the time - dependent langevin equation with the ornstein - uhlenbeck noise , and our results are readily agreed with the previous ones .to cope with the intricacy of the taylor expansion of sde , a method which is called bicolour rooted tree method ( brt ) based on the rooted tree method developed by j. c. butcher is introduced to conveniently do the stochastic taylor expansion of sde .let us firstly transform the eq.(1 ) into the following equation , where is the wiener process , and the symbol implies that the sde considered in this paper is in the stratonovich sense , for the stratonovich integral satisfies the usual rules of calculus .one can therefore integrate eq.(3 ) from 0 to h , taylor expansion of the functions gives , now taking the last two equations into eq.(4 ) , one can easily get where and the repeated indices except ( the number of the equations ) imply the einstein s summation convention throughout the paper . then the terms with 0th derivative in eq.(7 )are , where and , so , substituting it for eq.(7 ) gives the 1st derivative terms , where is the stratonovich multiple integral , and the integration is with respect to ds if or if , for example , replacing eq.(7 ) by , one can get the 2nd derivative terms and performing this procedure recursively will generate all the derivative terms in principle .however , close calculation of these derivative terms reveals that the complexity will increase drastically as the order rises .for this reason , we adopt the brt method developed by j. c. butcher and p. m. burrage to express each derivative term systematically and graphically .we will first introduce some useful notations .take the bicolour rooted tree _ t _ in fig.1 as an example , the tree has 8 vertices , each vertex can be colored by white node ( ) or black node ( ) which is the representative of stochastic node ( ) or deterministic node ( ) . if are bicolour rooted trees , then ] . to conveniently calculate the weight of this tree ,we define the following terms : the _ degree _ of the vertex in the brt is equivalent to the degree of vertex in the graph theory except the root 1 with .s is the _ symmetry factor _ of the tree _t _ , for example , the trees interchanged the branches joint to vertex 1 or 3 or 5 are regarded as identical with tree _ t _ ,therefore the symmetry factor of tree _t _ is . 
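Since the expansion above is written in the Stratonovich sense, a convenient low-order reference scheme is the Heun predictor-corrector method, which converges to the Stratonovich solution and is used later in the paper as a comparison algorithm. The sketch below assumes the same generic drift/diffusion notation as the previous snippet and is not the authors' implementation; the higher-order BRT construction continues in the following paragraphs.

```python
import numpy as np

def heun_stratonovich(F, g, x0, h, n_steps, rng=None):
    """Heun (predictor-corrector) scheme; its limit is the Stratonovich solution."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    out = [x.copy()]
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(h), size=g(x).shape[1])
        x_pred = x + F(x) * h + g(x) @ dW                    # Euler predictor
        x = x + 0.5 * h * (F(x) + F(x_pred)) \
              + 0.5 * (g(x) + g(x_pred)) @ dW                # trapezoidal corrector
        out.append(x.copy())
    return np.array(out)
```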
then tracing the stochastic taylor expansion of the eq.(7 ) , we find that the _ elementary weight _ of the tree , which is also the coefficient of each term in the expansion , is where n is the total number of vertex in the tree and vertex .now we introduce the _ elementary derivative _ and _ elementary integral _ here .an elementary derivative can be associated with a brt such that , t=[t_{1},\cdots , t_{m}]\\ g^{(m)}[f(t_{1}),\cdots , f(t_{m } ) ] , t=\{t_{1},\cdots , t_{m}\ } , \end{cases } \end{split}\ ] ] and the elementary integral can be written as \\ \int_{0}^{s}\circ dw_{i}(u)(\prod\limits_{j=1}^{m}\theta_{u}(t_{j})),t=\{t_{1},\cdots , t_{m}\}. \end{cases } \end{split}\ ] ] fig.1(a ) illustrates the elementary weight , derivative and integral graphically . , derivative and integral , respectively .fig.1(b ) shows the 0th and 1st derivative terms of stochastic taylor expansion of by the brt method.,width=377 ] therefore the stochastic taylor expansion is given by where t is the set of non - isomorphic bicolour rooted trees .fig.1(b ) illustrates how to use this formula to express .due to the complexity of stochastic taylor expansion , we only consider langevin equation ( le ) which plays an important part in the fields involving randomness in this paper .an n dimensional set of coupled les has the form where is the external potential , is a set of friction parameters , is random noise with zero mean , the correlation relation is and the hamilton canonical equations are the form of eq.(17 ) where only every second equation has a noise term with constant diffusion coefficient , as well as the potential is unrelated to , makes it possible to sharply decrease the complexity of eq.(14 ) so as to obtain a high order algorithm for le . for the eq.(17 ) , we can translate it into the form where is equal to for odd _i _ and to for even _, , is a set of constants with if _i _ is odd number . because of the property of , one can find that only if _ j _ and _ k _ are both odd numbers . from above properties ,a key property for the simplification of the stochastic taylor expansion , that is , , can be found .we can rewrite it in the compact form as follow : ,\cdots=0,\ ] ] so if a bicolour rooted tree has this structure , it should have no contribution to the stochastic taylor expansion . from the analysis above , one can obtain a general method for solving the langevin equation numerically . if we want to get a numerical method to the order _o(m , n ) _ , we should : ( a)for the deterministic part : solve it by the standard runge - kutta method of order m. ( b)for the stochastic part : ( i)write down all the non - isomorphic bicolour rooted trees that can avoid the structure ( 19 ) ; ( ii)attach each vertex with white or black so as to make the tree have order n. ( c)add up the results of ( a ) and ( b ) . using these three criteria , all the terms up to order _o_(7,4.5 ) are , +[[\sigma]]+[[[\sigma]]]+[\tau,[\sigma]]\\ & + [ [ [ [ \sigma]]]]+[\tau,[[\sigma]]]+[[\tau],[\sigma]]\\ & + [ \tau,\tau,[\sigma]]+[[\tau,[\sigma]]]+[[\sigma],[\sigma]],\\ \delta x_{i}(h)=&\delta x_{idet}(h)+\delta x_{iran}(h).\\ \end{split}\ ] ] where the are the brts with deterministic nodes only which are identical to the terms of standard runge - kutta method for odes .these terms compose the deterministic part of our algorithm , and we can use runge - kutta method to solve the deterministic part numerically . 
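Before the stochastic part of Eq. (20) is worked out in detail below, the overall splitting prescribed by steps (a)-(c), a deterministic Runge-Kutta flow over one step plus an additive stochastic correction, can be sketched schematically. In the sketch a classical fourth-order Runge-Kutta step stands in for the seventh-order integrator actually used, and only the leading-order noise kick on the momentum is kept, so this is an illustration of the splitting structure rather than a faithful O(7,4.5) implementation; the potential and parameter values are placeholders.

```python
import numpy as np

def rk4_step(f, y, h):
    """One classical 4th-order Runge-Kutta step for dy/dt = f(y) (step (a))."""
    k1 = f(y)
    k2 = f(y + 0.5 * h * k1)
    k3 = f(y + 0.5 * h * k2)
    k4 = f(y + h * k3)
    return y + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def langevin_step(q, p, h, gamma, T, dV, rng):
    """x(t+h) = x_det(h) + x_ran(h) for dq/dt = p, dp/dt = -V'(q) - gamma*p + noise."""
    f = lambda y: np.array([y[1], -dV(y[0]) - gamma * y[1]])
    q_det, p_det = rk4_step(f, np.array([q, p], dtype=float), h)
    # Stochastic part truncated to its leading term: a momentum kick of
    # variance 2*gamma*T*h (the full scheme adds higher powers of h).
    kick = np.sqrt(2.0 * gamma * T) * rng.normal(0.0, np.sqrt(h))
    return q_det, p_det + kick

# Placeholder quartic double-well force, dV(q) = q**3 - q.
rng = np.random.default_rng(0)
q, p = 0.0, 0.0
for _ in range(1000):
    q, p = langevin_step(q, p, h=0.01, gamma=1.0, T=0.2,
                         dV=lambda q: q**3 - q, rng=rng)
print(q, p)
```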
then we try to find a way to calculate the stochastic part in eq.(20 ) .we here introduce a method to approximate the elementary integral which is developed by p. e. kloeden and e. platen .they showed that if is an n - dimensional wiener process on the time interval $ ] , the componentwise fourier expansion of the brownian bridge process is where are distributed and pairwise independent , then setting in the equation ( 21 ) gives .now , we begin to calculate the stochastic part of eq.(20 ) .firstly , let us set the , then use equation ( 21 ) , we can easily find that &=f_{j}^{i}g^{j}j_{j0,h}=f_{j}^{i}g^{j}\int_{0}^{h}ds_{1}\int_{0}^{s_{1}}\circ dw_{j}(s_{2})\\ & = f_{j}^{i}g^{j}\int_{0}^{h}w_{j}(s_{1})ds_{1}\\ & = hf_{j}^{i}g^{j}(\frac{w_{j}(h)}{2}+\frac{a_{j}^{0}}{2})\\ & \equiv hf_{j}^{i}g^{j}\omega_{j}^{2},\\ \end{split}\ ] ] where is a set of independent gaussian random variables sampled from .similarly calculation gives &=h^{2}f_{j}^{i}f_{k}^{j}g^{k}\omega_{k}^{3}\\ [ [ [ \sigma]]]&=h^{3}f_{j}^{i}f_{k}^{j}f_{l}^{k}g^{l}\omega_{l}^{4}\\ [ \tau,[\sigma]]&=h^{3}f_{jk}^{i}f^{j}f_{l}^{k}g^{l}\omega_{l}^{5}\\ [ [ [ [ \sigma]]]]&=h^{4}f_{j}^{i}f_{k}^{j}f_{l}^{k}f_{m}^{l}g^{m}\omega_{m}^{6}\\ [ \tau,[[\sigma]]]&=h^{4}f_{jk}^{i}f^{j}f_{l}^{k}f_{m}^{l}g^{m}\omega_{m}^{7}\\ [ [ \tau],[\sigma]]&=h^{4}f_{jk}^{i}f_{l}^{j}f^{l}f_{m}^{k}g^{m}\omega_{m}^{8}\\ [ \tau,\tau,[\sigma]]&=\frac{h^{4}}{2}f_{jkl}^{i}f^{j}f^{k}f_{m}^{l}g^{m}\omega_{m}^{9}\\ [ [ \tau,[\sigma]]]&=h^{4}f_{j}^{i}f_{kl}^{j}f^{k}f_{m}^{l}g^{m}\omega_{m}^{10}\\ [ [ \sigma],[\sigma]]&=\frac{h^{3}}{2}f_{jk}^{i}f_{l}^{j}g^{l}f_{m}^{k}g^{m}\omega_{lm}^{1}\\ \end{split}\ ] ] where and the first non - gaussian random variable is we can find that there are only 5 independent variables among .let s choose as the independent variables , then now , the last procedure we should do is to determine the five gaussian random variables and the non - gaussian random variable .we truncate to the first term , that is , .since are independent and , we can see that let to be seven independent standard gaussian random variables , then use eq.(27 ) , we can get the brt method gives an algorithm for the langevin equation so long as we determine the deterministic and the stochastic part of eq.(20 ) respectively and add up each other .the deterministic part can be solved by the standard runge - kutta algorithm , and the stochastic part can be solved by the eqs.(22)-(23 ) .to verify the validity of our algorithm , the kramers equation will be considered as the severe test for our algorithm .the form is as follow : and is the gaussian random force obeying the fluctuation dissipation theorem our method implies that the algorithm for eqs.(29)-(30 ) is , where and are the results of evolving the equations in the period _ 0-h _ by the seventh - order runge - kutta algorithm which is used in the ode , and the stochastic part of eq.(31 ) is , ,\\ p_{ran}(q(t),p(t),h)=&\sqrt{2\gamma t}[(\omega_{2}^{1}-h\gamma\omega_{2}^{2}+h^{2}\gamma^{2}\omega_{2}^{3}-h^{3}\gamma^{3}\omega_{2}^{4}\\ + & h^{4}\gamma^{4}\omega_{2}^{6})+(-h^{2}\omega_{2}^{3}+2h^{3}\gamma\omega_{2}^{4}-3h^{4}\gamma^{2}\omega_{2}^{6})v^{''}\\ + & ( -h^{3}p(t)\omega_{2}^{5}+h^{4}\gamma p(t)\omega_{2}^{7}+h^{4}\gamma p(t)\omega_{2}^{8}\\ + & h^{4}\gamma p(t)\omega_{2}^{10})v^{'''}+h^{4}v^{''}v^{''}\omega_{2}^{6}+h^{4}v^{'}v^{'''}\omega_{2}^{8}\\ -&\frac{h^{4}}{2}v^{''''}p^{2}(t)\omega_{2}^{9}]-h^{3}\gamma tv^{'''}\omega_{22}^{1 } , \end{split}\ ] ] where and have been defined in the previous section .the 
double well potential in this example has two minima separated by a potential barrier between the two wells. The friction coefficient is set to 1. The initial condition is chosen on top of the barrier. The average is taken over 5000 realizations of the Gaussian random force during the trajectory. Fig. 2 shows the result, which is compared with the Euler method and the Heun method. We run the three methods at T = 0.05 and T = 0.2, respectively, and find that their results agree well. Nevertheless, the step sizes of our method, the Heun method and the Euler method are 0.1, 0.001 and 0.0001, respectively. The Kramers equation has been simulated extensively by many authors (see the references cited therein). As for the convergence, we compare our algorithm with previous algorithms here. Fig. 3 shows the convergence of the three algorithms for solving the Kramers equation. It is evident that our algorithm diverges more slowly than the other algorithms as the step size increases.
[Figure 2 caption: the friction coefficient is equal to 1. The average is taken over 5000 realizations for all the algorithms. The solid line is our method with step size 0.1, the dashed line is the Heun method with step size 0.001, and the short-dashed line is the Euler method with step size 0.0001. Panel (a) shows the relaxation at high temperature T = 0.2 and panel (b) the relaxation at low temperature T = 0.05.]
[Figure 3 caption: the friction coefficient is equal to 1. E is the average energy at a given temperature and h is the step size. Panel (a) shows the convergence at high temperature T = 0.2 and panel (b) the convergence at low temperature T = 0.05.]
Stochastic resonance, which was originally developed to explain the ice ages, has spread well beyond physics and left its fingerprints in many other research areas, such as complex networks, biological systems, neuroscience and quantum computing. The governing equations in these very different fields are essentially the Langevin equation or its generalizations. We present an example of stochastic resonance in neuroscience to demonstrate our algorithm in the case of a time-dependent Langevin equation with Ornstein-Uhlenbeck noise. An enlightening model in neuronal dynamics is the noise-driven bistable system described by Eq. (34), where the noise term is Ornstein-Uhlenbeck noise with intensity D and a given inverse characteristic time, and the system is driven by an external periodic force with amplitude A and a given frequency. To use our algorithm to solve Eq. (34) numerically, we first transform it into Eq. (35) and, with a suitable change of variables, further simplify Eq. (35) into a compact form, for which one easily finds that property (19) holds again.
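Before writing out the scheme for Eqs. (36)-(37), note that the Ornstein-Uhlenbeck noise itself can be generated exactly on a time grid, which is useful when testing the algorithm. The snippet below assumes the common convention in which the stationary correlation is D*lambda*exp(-lambda*|t-s|), so that the noise approaches white noise of intensity D for a large inverse correlation time lambda; if the paper's normalization differs by a constant factor, only the variance line changes.

```python
import numpy as np

def ou_path(D, lam, h, n_steps, rng=None):
    """Exact sampling of an Ornstein-Uhlenbeck process on a uniform grid.

    Assumes the stationary correlation <eps(t) eps(s)> = D * lam * exp(-lam |t - s|).
    """
    rng = np.random.default_rng() if rng is None else rng
    var = D * lam                      # stationary variance under this convention
    decay = np.exp(-lam * h)
    eps = np.empty(n_steps + 1)
    eps[0] = rng.normal(0.0, np.sqrt(var))   # start in the stationary state
    for n in range(n_steps):
        # Exact conditional update: the mean decays, the variance refills to 'var'.
        eps[n + 1] = decay * eps[n] + np.sqrt(var * (1.0 - decay**2)) * rng.normal()
    return eps

noise = ou_path(D=0.1, lam=1.0, h=0.1, n_steps=10_000)
print(noise.var())   # should be close to D * lam = 0.1
```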
accordingly , the numerical method of eqs.(36)-(37 ) can be written as follow : where the deterministic part of eq.(38 ) accords with the ronge - kutta algorithm for the odes , and the stochastic part of eq.(38 ) is ,\\ x_{3ran}&(x(t),h)=\lambda\sqrt{2d}[(h\omega_{4}^{2}-h^{2}(\gamma+\lambda)\omega_{4}^{3}\\ & + h^{3}(\gamma^{2}+\gamma\lambda+\lambda^{2})\omega_{4}^{4}-h^{4}(\gamma^{3}+\gamma^{2}\lambda\\ & + \gamma\lambda^{2}+\lambda^{3})\omega_{4}^{6})+(-h^{3}\omega_{4}^{4}+h^{4}(2\gamma+\lambda)\omega_{4}^{6})v^{''}\\ & -h^{4}x_{3}(t)\omega_{4}^{7}v^{'''}],\\ x_{4ran}&(x(t),h)=\lambda\sqrt{2d}[\omega_{4}^{1}-h\lambda\omega_{4}^{2}+h^{2}\lambda^{2}\omega_{4}^{3}\\ & -h^{3}\lambda^{3}\omega_{4}^{4}+h^{4}\lambda^{4}\omega_{4}^{6}].\\ \end{split}\ ] ] the double well potential in this example is , and the periodic driven force s amplitude a and frequency are 0.03 and 0.01 respectively .we consider the relation between the amplitude of output of the system and the noise intensity d. the average is taken over realizations and the heun method is used as a comparison .we then compare our results with the model mentioned in : the theoretical result of in this model is , where .are 0.03 and 0.01 , respectively .the friction coefficient is equal to 1 .the amplitude of output is averaged over trajectories .the black line is our method with step size 0.1 and the red line is the heun method with step size 0.01 .both the two lines have the characteristic time 1 .the green line is our method with step size 0.1 and characteristic time 0.1 .the blue line is the theoretical result of eq.(41).,width=377 ] fig.(4 ) shows the results of our simulations .the black line and the red line are the simulations of our method with step size 0.1 and the heun method with step size 0.01 respectively , with the parameters and equal to 1 .we now see that the results of our method and the heun method are almost the same , however , the step size of our method is larger than the one used in heun method .the parameters of the green line is the same as the black line except , that is , the characteristic time is shorter , and in this condition , the ornstein - uhlenbeck noise is closer to the gaussian noise .we can find that the _ resonant noisy intensity _( the maximum of the line ) shifts left when we shorten the characteristic time .in other words , lengthening the characteristic time can enhance the noise resistance of the system .the blue line is the theoretical result of eq.(41 ) . since the influence of the inertia term , we can see that the amplitude of output of the theoretical result is greater than our numerical result as shown in the green line .we have proposed the bicolour rooted tree method to do the stochastic taylor expansion systematically .this method can be used to solve the stochastic differential equation numerically . in this paper, we focus on the langevin equation which is widely used in modern science . a high order algorithm _o_(7,4.5 ) is derived in this paper . comparing with other usual algorithms ,our method is advantageous in computational efficiency and accuracy .we present our method in the two examples . in the first example of energy relaxation in the double well, we show our method gives the same results as presented in other papers , and the convergence is better than the other algorithms . 
In the second example we propose an algorithm for the time-dependent Langevin equation with Ornstein-Uhlenbeck noise, and the results of our algorithm agree with those obtained by the Heun method. This shows that our algorithm is suitable for the Langevin equation regardless of whether the equation is time dependent. However, readers should note that we only provide the algorithm for Eq. (17), which satisfies property (19), since this property drastically reduces the complexity of Eq. (14). For other types of SDE for which property (19) does not hold, such as the Hodgkin-Huxley model in neuroscience, interested readers can design their own algorithms based on Eq. (14). To apply our method in more difficult situations, one can also consider the case of variable diffusion coefficients. All in all, we have provided a systematic scheme for constructing high-order algorithms for SDEs and find that the complexity reduces drastically when dealing with the Langevin equation. The authors thank Prof. Yong Zhang for useful discussions and Dr. Jigger Cheh and Shaoqiang Yu for helping with the preparation of the paper. We also thank the anonymous referees for their helpful advice. This research was partly supported by the National Basic Research Program of China (973 Program) (contract no. 2007CB814800) and the National Natural Science Foundation of China (contract no. 10475067). Our research group has an integrator using the 7th-order Runge-Kutta method; readers can refer to these two papers for more details: J. R. Dormand, P. J. Prince, Celest. Mech. 18 (1978) 223; A. Huta, V. Penjak, Appl. Math. 29 (1984) 411.
|
Stochastic differential equations, especially the Langevin equation, play an important role in many fields of modern science. In this paper we use the bicolour rooted tree method, which is based on the stochastic Taylor expansion, to obtain a systematic pattern for high-order algorithms for the Langevin equation. We use a popular test problem, the energy relaxation in a double well, to test the validity of our algorithm and compare it with other commonly used algorithms in simulations. We also consider the time-dependent Langevin equation with Ornstein-Uhlenbeck noise as a second example to demonstrate the versatility of our method. Keywords: stochastic processes, Langevin equation, numerical simulation. PACS: 02.60.Cb, 02.50.Ey, 05.10.Gg
|
The video depicts our experimental work on elucidating feeding mechanisms in insects. The first part of the video shows feeding mechanics in the common house fly _(Musca domestica)_ using standard video microscopy. Several peculiar observations are made, including extensive gut movement and labrum pulsations. Using a fluid droplet (0.1 M sucrose in water) as a lens, we are able to image the entrance point of the labrum, which is usually closed by filtering flaps. Although the static morphology of the insect feeding apparatus is well known, the dynamics of how the dilator muscles generate negative pressure is largely unknown. We employ absorption x-ray microscopy for in-vivo imaging of the internal dynamics of several insect species, including fruit flies, common house flies and bumblebees, to compare the dynamics of feeding and their dependence on various control parameters. X-ray imaging protocols were developed to achieve good contrast in absorption mode, which allows low-end x-ray sources to be used for this task. The second part of the video employs x-ray tomography to accurately measure the 3D morphology of the feeding apparatus (and the entire head) with an imaging resolution of 10 microns. The third part depicts in-vivo feeding of a live common house fly _(Musca domestica)_ over long durations. The pulsatile nature of the cibarial pump is apparent in live video microscopy. We employ microsurgery techniques from the neurophysiology literature to build probes that perturb the function of the pump. This is done via a pulled glass capillary inserted through the antennal plate, with the flies stabilized in PBS solution. Experiments can be conducted for a period of 3 hours with this adaptation. The long-term feeding rate and its dependence on viscosity are also measured. Finally, we compare the pumping mechanisms of various species including fruit flies, blow flies, house flies and bumblebees. The experimental tools developed for in-vivo imaging of insect feeding will also find applications in other insect physiology problems.
|
A large number of insect species feed primarily on a fluid diet. To do so, they must overcome the numerous challenges that arise in the design of high-efficiency, miniature pumps. Although the morphology of insect feeding structures has been described for decades, their dynamics remain largely unknown even in the best-studied species (e.g. the fruit fly). Here, in this fluid dynamics video, we demonstrate in-vivo imaging and microsurgery to elucidate the design principles of the feeding structures of the common house fly. Using high-resolution x-ray absorption microscopy, we record the in-vivo flow of sucrose solutions through the body over many hours of fly feeding. Borrowing from microsurgery techniques common in neurophysiology, we are able to perturb the pump to a stall position and thus evaluate its function under load conditions. Furthermore, fluid viscosity-dependent feedback is observed for optimal pump performance. As the gut of the fly starts to fill up, feedback from the stretch receptors in the cuticle dictates the effective flow rate. Finally, via a comparative analysis of the house fly, blow fly, fruit fly and bumblebee, we highlight the common design principles and the role of interfacial phenomena in feeding.
|
the structure of a given graph can range in complexity from being quite simple , having some regular features or small edge and vertex sets to being extremely complicated where basic characteristics of the graph are hard to obtain or estimate .such complicated structure is typical if for instance the graph represents some real network .an important problem therefore is whether it is possible to simplify or reduce a graph while maintaining its basic graph structure as well as some characteristic(s ) of the original graph .a related key question then is which characteristic(s ) to conserve while reducing a graph .studies of dynamical networks ( i.e. networks of interacting dynamical systems which could be cells , power stations , etc . ) reveal that an important characteristic of a network s structure is the spectrum of the network s adjacency matrix .with this in mind we present an approach which allows for the reduction of a general weighted digraph in such a way that the spectrum of the graph s ( weighted ) adjacency matrix is maintained up to some well defined and known set .we denote the class of graphs for which reductions are possible by which consists of the set of all finite weighted digraphs without parallel edges but possibly with loops having weights in the set of complex rational functions .a graph can therefore be written as the triple where and are the vertex and edge sets of respectively and .each such graph has an adjacency matrix with a well defined spectrum which we denote by . with this in place a graph reduction of be described as follows .given a specific subset which we call a _structural sets _ of , an _ isospectral reduction _ of over the vertex set is a weighted digraph ( see section 3 for the exact definitions ) .the main result of the paper is the following theorem ( see theorem [ theorem1 ] ) . + * theorem : * let and be a structural set of .then and differ at most by .+ the set is a finite set of complex numbers which is known and is the largest set by which and can differ . as a typical graph has many different structural sets it is possible to consider different isospectral reductions of the same graph .moreover , since a reduced graph is again a weighted digraph it is possible to consider sequences of such reductions .the flexibility of this process is reflected in the fact that for a typical graph it is possible to reduce to a graph on any nonempty subset of its original vertex set .that is , we may simplify the structure of to whatever degree we desire . from thisit follows that if is any nonempty subset of the vertices of then there are typically multiple ways to sequentially reduce to a graph on . as it turns out each such reduction results in the same graph independent of the particular sequence .this uniqueness result can be interpreted as the property that sequential reductions on are commutative .furthermore , the class of graphs which can be reduced via this method is very general .specifically , we may reduce those graphs in which consist of weighted digraphs without parallel edges .since undirected and unweighted graphs or graphs with parallel edges can be considered as weighted digraphs via some standard conventions then such graphs are automatically included as special cases of graphs that may be reduced by this procedure . 
because of the flexibility in reducing a graph the relation of having the same branch reduction is not an equivalence relation on the set as this relation is not transitive .however , it is possible to construct specific types of structural sets as well as rules for sequential reductions which do induce equivalence relations on the graphs in which we give examples of .also we note that generally , the tradeoff in reducing a graph is that although the graph structure becomes simpler the weights of edges become more complicated .therefore , we also consider graph reductions over fixed weight sets in which the weight set of the graph is maintained under this reduction while the vertex set is reduced .the structure of the paper is as follows . in sect .2 we present notation and some general definitions . sect .3 contains the description of the reduction procedure as well as some results on sequences of such reductions .some of the main results and techniques of the paper are then stated in sect .proofs of these statements are given in sect.5 . in sect .6 we study the relations between the strongly connected components of the graph and its reductions . sect . 7 considers reductions over fixed weight sets and sect .8 contains some concluding remarks .in what follows we formally consider the class of digraphs consisting of all finite weighted digraphs with or without loops having edge weights in the set of complex rational functions described below .we denote this class of graphs by .as previously mentioned , graphs or which are either undirected , unweighted , or have parallel edges can be considered as graphs in .this is done by making an undirected graph into a directed graph by orienting each of its edges in both directions .similarly , if is unweighted then it can be made weighted by giving each edge unit weight .also multiple edges between two vertices of may be considered as a single edge by adding the weights of the multiple edges and setting this to be the weight of this single equivalent edge .we will typically assume that the graph or use these conventions to make it so . by way of notation we let the digraph , possibly with loops , be the pair where and are the finite sets denoting the _ vertices _ and _ edges _ of respectively , the edges corresponding to ordered pairs for .furthermore , if is a _ weighted digraph _ with weights in then together with a function where is the _ weight _ of the edge for .we use the convention that if and only if . importantly , if then similar to digraphs we will denote this by writing . in order to describe the set of weights let ] such that and have no common factors and is nonzero .the set is then a field under addition and multiplication with the convention that common factors are removed when two elements are combined .that is , if then where the common factors of and are removed .similarly , in the product of and the common factors of and are removed .however , we may at times leave sums and the products of sums of elements in uncombined and therefore possibly unreduced but this is purely cosmetic since there is one reduced form for any element in . 
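The arithmetic in this weight set of rational functions is easy to experiment with in a computer algebra system; the short sketch below only illustrates the reduction-to-lowest-terms convention described above, and the two weights are made up.

```python
import sympy as sp

lam = sp.symbols('lambda')

# Two made-up edge weights in the field of complex rational functions.
w1 = (lam**2 - 1) / (lam + 1)        # common factor (lambda + 1) to be removed
w2 = 1 / (lam - 1)

# Sums and products are taken with common factors cancelled.
print(sp.cancel(w1))                 # lambda - 1
print(sp.cancel(w1 + w2))            # (lambda**2 - 2*lambda + 2)/(lambda - 1)
print(sp.cancel(w1 * w2))            # 1
```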
to introduce the spectrum associated to a graph having weights in we will use the following notation .if the vertex set of the graph is labeled then we denote the edge by .the matrix defined entrywise by is the _ weighted adjacency matrix _ of .we let the _ spectrum _ of a matrix with entries in be the solutions including multiplicities of the equation and for the graph we let denote the spectrum of .the spectrum of a matrix with entries in is therefore a generalization of the spectrum of a matrix with complex entries .moreover , the spectrum is a _ list _ of numbers .that is , where is the multiplicity of the solutions to equation ( [ eq1 ] ) , the number of distinct solutions , and the elements in the list . in what followswe may write a list as a set with multiplicities if this is more convenient .in this section we describe the main results of the paper .that is , we present a method which allows for the reduction a graph while maintaining the graph s spectrum up to some known set .we also give specific examples of this process notably using this method to reduce graphs associated with the laplacian matrix of a graph .some natural consequences and extensions of this process are also mentioned .here we first introduce some definitions as well as some terminology that allow us to be precise in our formulation of an isospectral reduction . in the following if where is the vertex set of a digraph let denote the complement of in .also , as is standard , a _ path _ in a digraph is a sequence of distinct vertices such that for and in the case that the vetices are distinct , except that , is a _cycle_. moreover , let the vertices of be the _ interior _ vertices of .[ def1 ] for let be the digraph with all loops removed .we say the nonempty vertex set is a _structural set _ of if induces no cycles in and for each , .we denote by the set of all structural sets of .eigen3.eps ( 25.5,-4) ( 4,10.5) ( 14,10.5) ( 75,10.5) ( 28.5,11.5) ( 28.5,2.5) ( 42,10.5) ( 93,10.5) ( 53,10.5) ( 83,12) ( 85,1.5) ( 65,7) ( 100.5,7) ( 82,-4) for with let be the set of paths or cycles from to in having no interior vertices in . furthermore , let we call the set the set of all _ branches _ of with respect to .[ branchprod ] let and for some .if for we define as the _ branch product _ of . if we define .[ reductiondef ] let with structural set .define to be the digraph such that if and we call the _ isospectral reduction _ of over . the graph since implying is as well .figure 1 gives an example of a reduction of the graph .we note here that all figures in this paper follow the aforementioned conventions that undirected and unweighted edges are assumed to be oriented in both directions and have unit weight . as another example of a graph reduction consider the complete undirected unweighted graph without loops on vertices .if then has structural sets each given by , . for each the graph has an adjacency matrix where for all . for the complete bipartite graph where is partitioned into the sets and having and verticesrespectively it follows that both .moreover , is the digraph with all possible edges including loops on vertices each having weight ( see figure 2 ) . in order to understand the extent to which the spectrum of a graph is maintained under different reductions we introduce the following .if is a structural set of the graph where let that is , is the set of for which there is some vertex off the structural set where or , as , the values of at which . as an example for the graph and structural set in figure 1 . 
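Definition [reductiondef] can be checked on small examples with a computer algebra system. The sketch below treats the complete bipartite graph example above (Figure 2), reducing it over the two-vertex part of the bipartition. It uses the matrix form R_S(lambda) = A_SS + A_SC (lambda*I - A_CC)^{-1} A_CS, where C denotes the complement of S; we take this to be equivalent to the branch-product definition for a structural set (here every branch has a single interior vertex, so the equivalence is easy to verify by hand).

```python
import sympy as sp

lam = sp.symbols('lambda')

# Complete bipartite graph K_{2,3}: vertices 0,1 in one part, 2,3,4 in the other.
A = sp.zeros(5, 5)
for i in (0, 1):
    for j in (2, 3, 4):
        A[i, j] = 1
        A[j, i] = 1

S, C = [0, 1], [2, 3, 4]     # S is a structural set: C induces no cycles

# Matrix form of the isospectral reduction over S (assumed equivalent to the
# branch-product definition in the text).
R = A.extract(S, S) + A.extract(S, C) \
    * (lam * sp.eye(3) - A.extract(C, C)).inv() * A.extract(C, S)
R = sp.simplify(R)
print(R)                                     # every entry equals 3/lambda

char_poly = sp.cancel(sp.det(R - lam * sp.eye(2)))
print(sp.roots(char_poly, lam))              # {sqrt(6): 1, -sqrt(6): 1}
print(A.eigenvals())                         # {sqrt(6): 1, -sqrt(6): 1, 0: 3}
# The two spectra agree up to the eigenvalue 0, where the reduced weights
# 3/lambda are undefined, i.e. a value in the excluded set.
```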
bipartite.eps ( 12,-5.5) ( 1.5,4) ( 28,-4) ( 82,-5.5) ( 80,17) ( 101,17) ( 88.5,0) ( 88.5,35) if and then let be the list given by moreover , if it happens that then we say and differ at most by . the main result of this papercan then be phrased as follows .[ theorem1 ] let with .then and differ at most by .that is , the spectrum of and the spectrum of its reduction differ at most by elements of which justifies our use of the terminology isospectral reduction as the spectrum is preserved up to some known set .we note that two weighted digraphs , and are _ isomorphic _ if there is a bijection such that there is an edge in from to if and only if there is an edge between and in with .if the map exists it is called an _ isomorphism _ and we write .let .we say and have a reduction in common via the structural sets and respectively if , and .the fact that isomorphic graphs have the same spectrum with theorem [ theorem1 ] together imply the following .if have a reduction in common via the structural sets and respectively then and differ at most by .figure 3 gives an example of a reduction of in which where and are the graph and structural set respectively in figure 1 .therefore , the adjacency matrices of and in figures 1 and 3 respectively have the same spectrum up to . in this case one can compute , , and fig10.eps ( 14,-5) ( 0,-.5) ( 0,27) ( 26,27) ( 26,-0.5) ( 73,-5) ( 51.5,13) ( 66,18.5) ( 76,22) ( 78,3) ( 90,18.5) ( 101,13) we note that the representation of matrices with nonnegative entries by smaller matrices with polynomial entries has been used before ( see e.g. ) . however , the reason for doing so is different from our motivation in this paper . an alternate view of the graph reductions presented in section 3.1 is to consider the reduction process one in which the matrix is reduced to the matrix and view theorem [ theorem1 ] as a theorem about matrix reductions. viewed this way , an important application of theorem [ theorem1 ] is that one may reduce not only the graph but also the graphs associated with both the combinatorial laplacian matrix and the normalized laplacian matrix of . to make this precise ,let be an unweighted undirected graph without loops , i.e. a _simple graph_. if has vertex set and is the degree of vertex then its _ combinatorial laplacian matrix _ is given by on the other hand the _ normalized laplacian matrix _ of is defined as the interest in the eigenvalues of is that gives structural information about ( see ) . on the other hand knowing is useful in determining the behavior of algorithms on the graph among other things ( see ) .as every matrix with weights in has a unique weighted digraph associated to it then let be the graph with adjacency matrix and similarly let be the graph with adjacency matrix .since both and can be considered in via our conventions then either may be reduced .we summarize this as the following theorem which is a corollary to theorem [ theorem1 ] .suppose is a simple graph with vertex set .if is such that induces no cycles in then and and differ at most by . similarly , and and differ at most by example if is the complete graph on 3 vertices then the graph , shown in figure 4 , has the structural set since induces no cycles in . 
reducing over this set yields where , as can be verified , implying .laplacian.eps ( 8,-2.5) ( 40,-2.5) ( 77,-2.5) ( -1,4) ( 17,4) ( 5,16) ( 33.5,5) ( 50.5,5) ( 38,16) ( 34,10) ( 48.5,10) ( 41,3) ( 28,1) ( 57,1) ( 46.5,17.5) ( 78,4.5) ( 93,4.5) ( 82,10) ( 66.5,7) ( 100,7) one could generalize to any where has no loops and vertices by setting for and .this generalizes and is consistent with what is done for weighted digraphs in for example . as any reduction of a graph over the structural set is again a graph in it is natural to consider sequences of reductions on a graph as well as to what degree a graph can be reduced via such reductions . in order to address thiswe need to first extend our notation to an arbitrary sequence of reductions . for suppose such that , and if this is the case then we say _ induces a sequence of reductions _ on and we write for .moreover , we let the following is an immediate corollary of theorem [ theorem1 ] .suppose induces a sequence of reductions on the graph .then and differ at most by .for ] independent of the particular sets .it is important to mention that in theorem [ theorem-1 ] is any subset of vertices of the graph . in particular, may not be a structural set of .in this section we introduce a related but alternate way of describing the branch structure of a graph where this structure is again related to the graph s spectrum .we then use this structure along with a small number of graph transformations to give a constructive method for reducing any graph over any . for a given branch in the graph the _ weight sequence _ of be the sequence given by for a structural set of a digraph let be the _ branch decomposition _ of with respect to where this set includes multiplicities .note that can be written as lists but the formulation above is more convenient .[ def3 ] let .we say and have a common branch decomposition via the structural sets and respectively if , and there is a one - to - one map such that for example consider the unweighted digraphs and in figures 1 and 3 . for and these graphs have the same branch decomposition via the map given by and note that if two graphs and have a common branch decomposition with respect to the structural sets and respectively then there is some such that for all .this implies the following corollary of theorem [ theorem1 ] .let having a common branch decomposition via and respectively .then and and differ at most by .that is , common branch decompositions imply common reductions. one of the useful distinctions between branch reductions and decompositions is that the branch decomposition of a graph contains the lengths of the individual branches whereas the reduction does not .moreover , we note that in some ways branch decompositions are a more natural graph theoretic object since graphs with a common branch decomposition share the same weight set .it will be the fact that the branch decomposition of a graph retains the weight set and branch lengths that will allow us to prove theorem [ theorem1 ] . if and then two branches are said to be _ independent _ if they have no interior vertices in common .let be graphs with a common branch decomposition both with respect to the same set .then is a _ branch expansion _ of with respect to if any two are independent and each vertex of is on a branch of .note that and its branch expansion may have all vertices in common or share only the vertices in . 
* ( branch expansions)*[lemma0 ] let for .then an expansion of with respect to exists and and differ at most by .an example of a branch expansion is seen in figures 1 and 3 , being an expansion of over the set if the vertices of are relabeled via and .let be a weighted digraph where and .suppose for where .if we replace in by the two edges and loop with associated weights as in figure 5 then we call the resulting graph the graph with _ loop bisected edge _ with intermediate vertex .graphic2.eps ( 10,4) ( 0,-3) ( 54,-3) ( 43,-3) ( 98,-3) ( 76,-3) ( 75.5,7) ( 64,2) ( 85,2) [ lemma2]*(loop bisection ) * let where such that . if is the graph with loop bisected edge and then and the spectra , differ at most by . the goal now is to combine lemmas [ lemma0 ] and [ lemma2 ] to construct the reduction which can be done as follows . the first step in the reduction of over the structural set is to in fact do the opposite .that is , we first would like to find a branch expansion of .note that we can explicitly construct an expansion of via by taking each pair of vertices in and connecting them by independent branches with weight sets and multiplicities as specified in this decomposition .since each vertex of is by construction on some branch of then this is in fact an expansion of giving a proof to the existence claim in lemma [ lemma0 ] . for the next step in this reduction we use the lemma [ lemma2 ] to shorten the lengths of the independent branches in the expanded graph , if has weight set , then by lemma [ lemma2 ] we may modify this weight set to without effecting the spectrum of the graph by more than .if this is continued until is reduced to a single edge from to then has weight . if every branch of is contracted to a single edge in this way then , after the multiple edges are made single via our convention ,the resulting graph is the graph defined in definition [ reductiondef ] .moreover , as each step in this process does not change the spectrum of the graph by more than then the proof of theorem [ theorem1 ] follows once lemmas [ lemma0 ] and [ lemma2 ] are known to hold .before we prove the main results of this paper we note the following ( see for details ) .first , a directed graph is _ strongly connected _ if there is a path ( possibly of length zero ) from each vertex of the graph to every other vertex .the _ strongly connected components _ of are its maximal strongly connected subgraphs .moreover , its vertex set can always be labeled in such a way that has the following triangular block structure \ ] ] where is a strongly connected component of and are block matrices with possibly nonzero entries . as then , since edges between strongly connected components correspond to the entries in the block matrices below the diagonal blocks , these edges may be removed or their weights changed without effecting . moreover , as an edge of is in a strongly connected component of if and only if it is on some cycle then all edges belonging to no strongly connected components of can be removed without effecting . with this in mind , for the graph where and , we say an edge is not on any branch of if do not both belong to some for all .if is not on any branch of then the claim is that it can not be on a cycle unless the cycle is a loop . 
to see this note that every cycle which is not a loop must contain a vertex of for not to induce a cycle in , every cycle is either a single branch or the union of several branches in .this implies that all edges except for loops off the branch set may be removed without effecting the graph s spectrum . on the other hand ,suppose is a loop , having weight possibly equal to 0 , on the vertex where is not a vertex on any branch of .then this vertex may also be removed from without effecting the spectrum of the graph by more than .this follows from the fact in this case that the vertex is itself a strongly connected component of since , by the discussion above , it lies on no other cycle of .hence , if are the strongly connected components of where then therefore , removing from ( which removes ) changes at most by solutions to the equation or the such that is undefined , all of which are in .we record this as the following proposition .[ prop1 ] let and where are the set of vertices and edges respectively not on any branch of . if then the spectra and differ at most by . for ease of notationwe adopt the following . for a square matrix denote by {ij} ] be the determinant when row , column are omitted then row , column and so on .we now give a proof of lemma [ lemma0 ] .let where , and for . also let , and .from we construct the graph first by switching the edge to while maintaining its edge weight .second , if make the same as such that for all . let the resulting graph be the graph ( see figure 6 ) .brnchsep.eps ( 14,-5) ( 75,-5) ( 30,16) ( 87,16) ( 18,16) ( 75,16) ( 30,0) ( 91,0) ( -5,0) ( 53,0) ( 0,25) ( 24,25) ( 58,25) ( 92,25) if and the claim is that . to see this note that {31,22}=[\tilde{m}]_{11,32} ] for all since the first two rows in the associated matrices are identical . if for all then {11}+\omega_{32}[\tilde{m}]_{31}=\\ & ( \omega_{22}-\lambda)\big((\omega_{22}-\lambda)[\tilde{m}]_{11,22}+\sum_{i=4}^n(-1)^i \omega_{i2}[\tilde{m}]_{11,i2}\big)-\omega_{32}(\omega_{22}-\lambda)[\tilde{m}]_{31,22}=\\ & ( \omega_{22}-\lambda)\big((\omega_{22}-\lambda)[\tilde{m}]_{11,22}+\sum_{i=3}^n(-1)^i \omega_{i2}[\tilde{m}]_{11,i2}\big).\end{aligned}\ ] ] since {11,i2}=[m]_{11,i2} ] and {11,2i}=[m]_{11,2i} ] the equivalence class of graphs in having a common reduction with respect to their basic structural sets which contains the graph .this will be of use in section 7.1 .it is also important to note that other criteria induce an equivalence relation on or all that is required is some rule that results in a unique reduction or sequence of reductions of any graph in or respectively .this can be summarized as follows . *( uniqueness and equivalence relations ) * suppose for any graph in that is a rule that selects a unique nonempty subset .then induces an equivalence relation on the set where if \simeq\mathcal{r}_{\tau(w)}[h] ] is uniquely defined and this process may be repeated until all vertices of the resulting graph have the same out degree . as the vertex set of the graph resulting from this sequence of reductions is unique then the relation of having an isomorphic reduction via this rule induces an equivalence relation on . unlike the weight set used in this paper , it is more typical to consider weighted digraphs having weights in some subset of .the tradeoff then for considering is that although the graph structure is simpler the weights become rational functions . in some sense the weights begin to take on the shape of the characteristic polynomial of . 
on the other hand , if we wish to reduce the size of the graph , i.e. number of vertices , while maintaining its spectrum along with a particular set of edge weights the following is possible .[ theorem5 ] let be a unital subring and suppose has weights in .if and is the length of the longest branch in for all then there exists a graph with the following properties : + ( 1 ) ] .fig1.eps ( 10,-3) ( 0,0) ( 0,20) ( 20,20) ( 20,0) ( 71,-3) ( 40,10) ( 57,14) ( 71,20) ( 71,5) ( 85,14) ( 102,10) an example of this construction is the graph in figure 8 which is constructed from the graph ( in the same figure ) over the weight set . is a reduction over the weight set of in the sense that it has fewer vertices than the graph from which it is constructed .furthermore , one can compute and .a more complicated problem involves finding the graph $ ] with the least number of vertices where both and have weights in some set .the main results of this paper are concerned with the way in which the structure of a graph influences the spectrum of the graph s adjacency matrix and to what extent this spectrum can be maintained if this structure is simplified .for the most part these results give algorithmic methods whereby a graph can be reduced in size but do not mention if such methods might be useful in determining the spectrum of a given graph . as is shown in ,the graph reductions considered here do indeed help in the estimation of a graph s spectrum .for example , it is possible to extend the eigenvalue estimates given by the classical result of gershgorin to matrices with entries in .the main results of is that eigenvalue estimates via this extension improve as the graph is reduced .analogous results and extensions also hold for the work done by brauer , brualdi and varga original results are each improvements of gershgorin s .gershgorin s original result is in fact equivalent to a nonsingularity result for diagonally dominant matrices ( see theorem 1.4 ) which can be traced back to earlier work done by lvy , desplanques , minkowski , and hadamard .importantly , we note that via our method of graph reductions we obtain better estimates than those given by all of the previous existing methods .moreover , graph reductions can be used to obtain estimates of the spectrum of a matrix with increasing precision depending on how much one is willing to reduce the associated graph .if the graph is completely reduced the corresponding eigenvalue estimates give the exact spectum of the matrix along with some finite set of points .these techniques can furthermore be used for the estimation of spectra for combinatorial and normalized laplacian matrices as well as giving bounds on the spectral radius of a given matrix .in fact it is in such applications that the flexibility of isospectral graph reductions is particularly useful .the results of the present paper demonstrate various approaches to simplifying a graph s structure while maintaining its spectrum .therefore , these techniques can be used for optimal design , in the sense of structure simplicity of dynamical networks with prescribed dynamical properties ranging from synchronizability to chaoticity .we would like to thank c. kirst and m. timme for valuable comments and discussions .this work was partially supported by the nsf grant dms-0900945 and the humboldt foundation .
|
Let G be an arbitrary finite weighted digraph with weights in the set of complex rational functions. A general procedure is proposed which allows for the reduction of G to a smaller graph with a less complicated structure having the same spectrum as G (up to a set known in advance). The proposed procedure is highly flexible and could be used, e.g., for the design of networks with prescribed spectral and dynamical properties.
|
the main drift chamber(mdc ) is the center tracking detector of the beijing spectrometer iii(besiii) which is operating at the beijing electron positron collider ii(bepcii) . in order to meet the requirements of besiii experiment ,mdc is designed to be a small - cell , low - mass drift chamber using a helium - based gas mixture .the drift chamber contains an inner chamber which consists of 8 stereo layers with the drift cell in the size of about , and an out chamber which consists of 35 layers with the drift cell in the size of about .the inner chamber was designed to be replaceable in consideration of radiation damage .because of high beam related background , the inner chamber is suffering from aging problem after it has been running five years , and the gain is dropping year by year .for the first layer , the gain decreases by 26% from 2009 to 2013 .thus , it is necessary to make preparations for the upgrade of mdc inner chamber .the silicon pixel tracker(spt ) using cmos pixel sensors(cps) which is first developed by iphc for charged particle tracking is a good candidate because of the low material budget( ) , good spatial resolution( ) and high radiation tolerance . in order to do a monte carlo study on the expected performance of applying spt in besiii , a geant4-based full simulation and a track reconstruction softwarehave been developed in the besiii offline software system ( boss) . in this paperwe introduce the study of the tracking method combining spt and mdc outer chamber . expected performances including tracking efficiency , momentum resolution and vertex resolution from monte carlo studyare also presented .[ cols="^,^,^,^",options="header " , ] [ tab : sptmateiralbudget ]the inner drift chamber of mdc is preliminarily designed to be replaced by 3 layers of silicon pixel detectors at the radius of 72.58 mm , 86.16 mm and 99.5 mm , with the length of 380 mm , 450 mm and 520 mm arranged from the inner layer to the outer layer(figure [ fig : sptdisign ] ) .every layer includes ladders which are the basic modules of spt and fixed at mdc endplates .there is an overlap region of about 10% between two ladders in direction to eliminate the dead area .each ladder , with different components connected by glue , consists of mechanical supports made of carbon fiber , readout cable made from kapton coated with aluminum and cps chips on the top .the material budget of one layer shown in table [ tab : sptmateiralbudget ] is totally 0.38% radiation length , which is mainly caused by 500 thick carbon fiber support .the description of the geometry and materials of spt has been implemented in the geant4-based full simulation package in boss according to the preliminary design , and a simple digitization model is established to simulate the charge collection in the cps chips .for the current mdc tracking , the charged tracks are reconstructed by the pattern matching method combined with the conformal transformation method .however , these reconstruction methods ca nt work in spt when the new silicon inner tracker is applied in besiii .thus , new reconstruction methods have to be developed for the spt . 
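To get a feeling for how the quoted material budget translates into multiple scattering, one can use the standard Highland parametrization of the scattering angle. The snippet below assumes normal incidence, lumps the three pixel layers together, and uses the 0.38% radiation length per layer quoted in Table [tab:sptmateiralbudget], so it is a rough order-of-magnitude estimate rather than anything extracted from the full simulation.

```python
import math

def highland_theta0(p_gev, beta, x_over_x0, charge=1.0):
    """RMS multiple-scattering angle (radians) from the Highland formula."""
    return (0.0136 / (beta * p_gev)) * charge * math.sqrt(x_over_x0) \
           * (1.0 + 0.038 * math.log(x_over_x0))

x_over_x0 = 3 * 0.0038          # three layers at 0.38% X0 each, normal incidence
theta0 = highland_theta0(p_gev=1.0, beta=1.0, x_over_x0=x_over_x0)
print(f"theta_0 ~ {theta0 * 1e3:.2f} mrad for a 1 GeV/c particle")
```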
in this study , a new method called combinatorial kalman filter(ckf) used to develop the track reconstruction software for spt .the procedure to reconstruct tracks in one event could be divided into three steps : * firstly , all possible track seeds will be found in one sub - detector * then , a set of track candidates will be built for each seed by extrapolate the seed into other sub - detectors layer by layer . * finally , the best candidate for each seed will be selected and seeds from the same track will also be merged .actually , there are two complementary sequences to reconstruct the track for the ckf method .one is the outside - in sequence which find seeds in mdc outer chamber and extrapolate the seeds into spt .it s implied that the spt is just used to update the tracks found in mdc outer chamber but not used to find tracks .thus , the track efficiency of this method will be lower than the full mdc .the other is the inside - out sequence which find seeds in spt and extrapolate the seeds into mdc outer chamber . in order to increase the track efficiency ,both of these two sequences are implemented in our software and the track candidates found by these two sequences separately will be merged by comparing the hits of the track candidates . the software is developed on the foundation of besiii track fitting algorithm , in which the basic kalman filter for mdc has been implemented .the first step in the outside - in track reconstruction is to find track seeds in the mdc outer chamber by the pattern matching method combined with the conformal transformation method .then these tracks will be extrapolated into spt to match the hits by iterating the following steps : * extrapolate all track candidates to next layer in the prediction step of kalman filter * look for compatible hits around the predicted point for each candidate * for every candidate , generate a branch for each compatible hit and update the branch with the hit .* drop bad candidates according to the number of missing hits , and so on .after all of the hits have been processed , the best candidate will be selected for each seed according to the number of hits and .figure [ fig : spt_inn_extr ] shows the iterative process of the outside - in sequence in the spt .track candidates in spt will be extended toward the interaction point layer by layer . in each layer ,the multiple - scattering and energy loss will be considered for each candidate .because there is a little bit of overlap region between two ladders in one layer , the created branch candidates will be tested whether intersect with other ladders in this layer .if another ladder intersects with the branch candidate , the iterative process will be invoked again .since the track efficiency of the outside - in sequence would be lower than the full mdc , a complementary inside - out sequence is necessary .the inside - out tracking algorithm starts from the track seeds found in spt .the iterative process of the inside - out algorithm is very similar with that of the outside - in algorithm except that the propagate direction is from spt to mdc . in order to obtain higher tracking efficiency ,the track seed coming from the colliding point consists of two compatible hits(hit pair ) at different layers in spt with a loose constraint of the beam spot . in the hit pairs finding ,all hits at outer layer with lager radius will be iterated . 
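The branching logic of the combinatorial Kalman filter described above can be illustrated with a deliberately simplified toy: the Kalman state propagation and material effects are replaced here by a straight-line least-squares refit, and only the "branch per compatible hit, plus one missing-hit branch, then prune" bookkeeping is kept. None of this code comes from BOSS; the event, window and cuts are made up.

```python
import numpy as np

def predict(hits, x_next):
    """Predict y at the next layer from the hits already on the candidate."""
    xs, ys = np.array([h[0] for h in hits]), np.array([h[1] for h in hits])
    slope, intercept = np.polyfit(xs, ys, 1)      # seeds carry at least two hits
    return slope * x_next + intercept

def follow(seed, layers, window=0.5, max_miss=1):
    """Branch a seed through the layers, keeping all compatible hit assignments.

    `layers` is a list of (x_layer, [y hits]) pairs ordered along the
    propagation direction.  Returns the best surviving candidate.
    """
    candidates = [{"hits": list(seed), "miss": 0, "chi2": 0.0}]
    for x_layer, ys in layers:
        new_candidates = []
        for cand in candidates:
            y_pred = predict(cand["hits"], x_layer)
            # One branch per compatible hit around the predicted point.
            for y in (y for y in ys if abs(y - y_pred) < window):
                new_candidates.append({"hits": cand["hits"] + [(x_layer, y)],
                                       "miss": cand["miss"],
                                       "chi2": cand["chi2"] + (y - y_pred) ** 2})
            # Also keep a missing-hit branch, dropped if it misses too often.
            if cand["miss"] + 1 <= max_miss:
                new_candidates.append({"hits": cand["hits"],
                                       "miss": cand["miss"] + 1,
                                       "chi2": cand["chi2"]})
        candidates = new_candidates
        if not candidates:
            return None
    # Best candidate: most hits first, then the smallest residual sum.
    return max(candidates, key=lambda c: (len(c["hits"]), -c["chi2"]))

# Hypothetical event: a straight track y = 0.3 x with noise, plus random hits.
rng = np.random.default_rng(1)
layers = [(x, sorted([0.3 * x + rng.normal(0, 0.05), rng.uniform(-3, 3)]))
          for x in (3.0, 4.0, 5.0, 6.0)]
seed = [(1.0, 0.31), (2.0, 0.58)]          # seed from the two innermost layers
print(follow(seed, layers)["hits"])
```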
for each outerhit , the beam spot constraint and the minimum transverse momentum cut are applied to estimate a window in direction in the inner layer with a smaller radius .hit pairs will be created for each hit in the window of the inner layer .because 3 layers are included in spt , there will be 3 kinds of combination of layers in the hit pair finding .figure [ fig : hit_pair ] shows the hit pair finding in two layers .monte carlo single muon tracks are used to study the expected tracking performances , including momentum resolution , vertex resolution and tracking efficiency .we also generate samples to check the invariance mass distribution . in this section ,the results of current mdc is labeled as mdc and the results of applying spt inner tracker is labeled as spt for convenience .the momentum resolution is defined as , which is mainly influenced by the spatial resolution and the multiple scattering .figure [ fig : resp ] , which gives the momentum resolution as a function of the track momentum , shows the improvement of the momentum resolution in high momentum region after applying spt inner tracker . for 1gev / c tracks ,the momentum resolution is improved from 0.53% to 0.46% . however , in low momentum region , the improvement is very small because more material budget in spt results to more contribution from multiple scattering effect .the information of the vertex can be obtained from the track parameter and .we define the residual as the difference between the reconstructed value and mc truth value .the vertex resolution can be represented by and , which are obtained from fitting a gaussian to the residual distribution .the vertex resolution as a function of the momentum is shown in figure [ fig : resr ] and figure [ fig : resz ] it s clear that the vertex resolution of spt is much better than that of the current mdc , especially in z direction , because of the high spatial resolution of the silicon pixel tracker .the spatial resolution of mdc is about 120 in direction and about 3 mm in z direction , however , the spatial resolution of spt is about 10 both in and z direction .the tracking efficiency is defined as where is the number of charged monte carlo tracks . is the number of good reconstructed tracks .there are two criteria to determine whether a track is good reconstructed .the first is that more than 80% of the total found hits in the track are true hits .the second is more than 25% of the total true hits of the track should be found .the tracking efficiency as a function of the transverse momentum is shown in figure [ fig : trkeff ] . the tracking efficiency after applying the spt inner tracker is similar with the current mdc in the high transverse momentum region . however , in the low transverse momentum region , the tracking efficiency will be significantly improved after applying the spt inner tracker due to the high granularity and spatial resolution of the pixel detector . 
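The efficiency definition above (a track is counted as well reconstructed if more than 80% of its found hits are true hits and more than 25% of the true hits are found) translates directly into a small bookkeeping function; the hit lists in the example below are invented.

```python
def is_good_track(found_hits, true_hits, purity_cut=0.8, found_fraction_cut=0.25):
    """Apply the two matching criteria used in the efficiency definition."""
    found, true = set(found_hits), set(true_hits)
    if not found or not true:
        return False
    purity = len(found & true) / len(found)          # found hits that are true
    found_fraction = len(found & true) / len(true)   # true hits that were found
    return purity > purity_cut and found_fraction > found_fraction_cut

def tracking_efficiency(reco_tracks, mc_tracks):
    """reco_tracks: {mc track id: found hit ids}; mc_tracks: {id: true hit ids}."""
    n_good = sum(is_good_track(hits, mc_tracks[tid])
                 for tid, hits in reco_tracks.items() if tid in mc_tracks)
    return n_good / len(mc_tracks)

# Made-up event with two Monte Carlo tracks, one reconstructed cleanly.
mc = {1: [10, 11, 12, 13, 14, 15, 16, 17], 2: [20, 21, 22, 23]}
reco = {1: [10, 11, 12, 13, 14, 99]}       # one wrong hit (99), still >80% pure
print(tracking_efficiency(reco, mc))        # 0.5
```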
the typical decay channel of , is used to further verify the performance of spt . the invariant mass distributions of and the recoil mass distributions of are shown in figure [ fig : jpsimass ] and figure [ fig : recoilmass ] . the invariant mass resolution of obtained with spt is a little better than that of the mdc , which is mainly due to the improvement of the momentum resolution in the high momentum region . however , the recoil mass distribution of spt is not better than that of the mdc , because the momentum resolution for low momentum particles is not improved much due to multiple scattering .
the inner chamber of the mdc at besiii is suffering from aging due to the high beam - related background . spt is a very promising candidate because of its high spatial resolution and radiation tolerance . in order to study the expected performance of spt , a geant4 - based simulation of spt has been implemented in boss and track reconstruction software based on the combinatorial kalman filter has been developed . the results of the monte carlo study show that the momentum resolution , vertex resolution and tracking efficiency are significantly improved if the spt inner tracker is applied . for instance , the momentum resolution for 1 gev / c tracks can be improved from 0.53% to 0.46% . the vertex resolution in the r direction can be improved by 50% , and in the z direction it can be improved from 0.16 cm to 0.013 cm at a momentum of 1 gev / c .
|
the inner drift chamber of besiii is encountering a serious aging problem after five years of running . for the first layer , the decrease in gas gain is about 26% from 2009 to 2013 . the upgrade of the inner tracking detector has therefore become an urgent issue for the besiii experiment . an inner tracker using cmos pixel sensors is an important candidate because of its great advantages in spatial resolution and radiation hardness . in order to carry out a monte carlo study of the expected performance , a geant4 - based full simulation of the silicon pixel detector has been implemented . the tracking method combining the silicon pixel inner tracker and the outer drift chamber has been studied and a preliminary reconstruction software has been developed . the monte carlo study shows that the performance , including momentum resolution , vertex resolution and tracking efficiency , is significantly improved due to the good spatial resolution and moderate material budget of the silicon pixel detector . keywords : aging , silicon pixel detector , kalman filter , maps , track reconstruction , besiii drift chamber . pacs : 29.40.cs , 29.40.gx
|
the localization of vibrational energy in discrete nonlinear arrays has attracted a huge amount of interest in the past several decades as a possible mechanism for the efficient storage and transport of energy ( for recent reviews see refs . and references therein ) .more recently , the localization and transport of vibrational energy has been invoked in a number of specific physical settings including dna , hydrocarbon structures , the creation of vibrational intrinsic localized modes in anharmonic crystals , photonic crystal waveguides , and targeted energy transfer between donors and acceptors in biomolecules .discrete nonlinear arrays in thermal equilibrium can support a variety of stationary excitations ; away from equilibrium stationarity may turn into finite longevity , and additional excitations may arise .the possible excitations include phonons associated with linear portions of the potential , solitons ( long - wavelength excitations that persist from the continuum limit upon discretization ) , periodic breathers ( spatially localized time periodic excitations that persist from the anticontinuous limit upon coupling ) , and so - called chaotic breathers ( localized excitations that evolve chaotically ) .nonlinear excitations have been observed to arise ( spontaneously or by design ) and survive for a long time in numerical experiments , and they clearly play an important role in determining the global macroscopic properties of nonlinear extended systems . of particular interest to usis the dynamics of breathers , a term that we invoke rather loosely to denote an oscillatory excitation confined to a very small number of adjacent lattice sites .since our interest lies in breathers as possible storers and carriers of energy , we have concentrated on issues of longevity , and on lattices where breathers can move most easily .breathers are known to move more easily in nonlinear lattices with no on - site interactions , and so we have focused on lattices with nonlinear interactions .even more narrowly , herein we focus on the nonequilibrium dynamics and relaxation of breathers in a typical relaxation experiment where the surface of the system is connected to a cold ( usually zero temperature ) external thermal reservoir . we mostly ( but not exclusively ) study one dimensional arrays , for which the surface simply consists of the two end sites of a finite chain .we anticipate , and later detail , the following broad - brush description of the relaxation of a breather whose energy is well above that of phonon modes that may also be present in the nonlinear array .when the array boundaries are connected to a zero - temperature heat bath , the breather will of course eventually decay since the system must reach equilibrium at .in other words , there is _ necessarily _ leakage of energy out of the breather , although this process may in some cases be extremely slow .a determinant limiter of breather longevity is the extreme sensitivity to collisions with long wavelength phonons and with other localized excitations .such collisions invariably contribute to the rapid degradation or breakup of breathers into lower energy excitations .furthermore , collisions with other excitations tend to set breathers in motion , and motion in itself also contributes to energy leakage . 
while breathers tend to decay rapidly in the presence of long wavelength phonons and of other nonlinear excitations , and are in this sense fragile , _ isolated _ breathers tend to remain stationary and to decay extremely slowly and essentially exponentially over long time regimes , indicating a single slow rate - limiting dominant contribution to the intrinsic relaxation process .however , the particular values of decay rates are strongly sensitive to particular conditions and parameter values .these statements will be made more quantitative below .the organization of this paper is as follows .the model is presented in sec .[ model ] , and a summary of the relaxation of phonon modes in harmonic lattices is presented in sec [ linear ] . in sec .[ nonlinear ] we discuss the relaxation behavior of a purely anharmonic lattice ( no harmonic interactions ) , that is , a relaxation scenario that involves _ only _ nonlinear excitations and no phonons .section [ both ] deals with breather relaxation in arrays with both linear and nonlinear interactions , that is , lattices that support phonons as well as nonlinear excitations .finally , we present a summation of our findings in sec .[ concl ] .our model system in one dimension is described by the fermi - pasta - ulam ( fpu ) -hamiltonian where is the displacement of particle from its equilibrium position , is the number of sites , is the fpu potential and and are the harmonic and anharmonic force constants , respectively .the generalization to higher dimensions is obvious .the relative values of the two constants can be shifted by rescaling space and time . in particular , by introducing new variables and , where is a scaling constant , one finds that the scaled hamiltonian in the new variables is again of the form ( [ ham1 ] ) but with coupling constants and . the results are therefore related through appropriate scaling for any choice of coupling constants _ provided neither is zero_. to cover all possible combinations of coupling constants it is thus sufficient to consider only three distinct cases : ( harmonic ) , ( purely anharmonic ) , and ( mixed ) . throughout weassume free - end boundary conditions ( , ) , and note that although boundary conditions do not strongly affect equilibrium properties , they _ do _ affect relaxation dynamics .the equations of motion associated with the hamiltonian ( [ ham1 ] ) are . \label{langzerot}\ ] ] in our subsequent discussion we consider a variety of initial conditions , and observe the relaxation of the array to zero temperature when the boundary sites are connected to a zero - temperature environment by adding dissipation terms to the equations of motion of these sites . 
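the model and relaxation protocol just described fit in a few lines of python . the sketch below implements the fpu chain with free ends , adds dissipation to the two boundary sites only , integrates with a fixed - step fourth - order runge - kutta scheme , and evaluates the symmetrized site energies and the exponential decay - time fit used in the results further on ; the time step , damping constant , chain length and initial kick are placeholders rather than the values used in the simulations reported here .

```python
import numpy as np

# minimal sketch of the relaxation setup: fpu chain with free ends, damping on
# the two boundary sites, fixed-step rk4, symmetrized site energies and an
# exponential decay-time fit.  all parameter values are placeholders.

def forces(x, k, kp):
    """forces from the fpu potential v = k/2 dx^2 + kp/4 dx^4, free ends."""
    dx = np.diff(x)                      # relative displacements
    f_bond = k * dx + kp * dx**3
    f = np.zeros_like(x)
    f[:-1] += f_bond                     # each bond pulls its left particle
    f[1:] -= f_bond                      # and pushes back on its right one
    return f

def rhs(state, n, gamma, k, kp):
    x, p = state[:n], state[n:]
    dpdt = forces(x, k, kp)
    dpdt[0] -= gamma * p[0]              # dissipation only at the two
    dpdt[-1] -= gamma * p[-1]            # boundary sites
    return np.concatenate([p, dpdt])

def rk4_step(state, dt, *args):
    k1 = rhs(state, *args)
    k2 = rhs(state + 0.5 * dt * k1, *args)
    k3 = rhs(state + 0.5 * dt * k2, *args)
    k4 = rhs(state + dt * k3, *args)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def site_energies(x, p, k, kp):
    """kinetic energy of each site plus half of each adjacent bond energy."""
    dx = np.diff(x)
    v_bond = 0.5 * k * dx**2 + 0.25 * kp * dx**4
    e = 0.5 * p**2
    e[:-1] += 0.5 * v_bond
    e[1:] += 0.5 * v_bond
    return e

def decay_time(t, e_total):
    """tau from a linear fit of log e(t), assuming exponential relaxation."""
    slope, _ = np.polyfit(t, np.log(e_total), 1)
    return -1.0 / slope

# example run: purely anharmonic chain (k = 0, kp = 1), kicked at the center
n, dt, gamma, k, kp = 31, 0.01, 1.0, 0.0, 1.0
state = np.zeros(2 * n)
state[n + n // 2] = 1.0                  # crude localized excitation
times, energies = [], []
for step in range(100000):
    state = rk4_step(state, dt, n, gamma, k, kp)
    if step % 1000 == 0:
        times.append(step * dt)
        energies.append(site_energies(state[:n], state[n:], k, kp).sum())
print("decay time estimate:", decay_time(np.array(times), np.array(energies)))
```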
in one dimensionthe boundary sites are and .the equations of motion are integrated using a fourth order runge - kutta method with time interval .further reduction leads to no significant improvement .stability of the integration was checked for isolated arrays : the energy remains constant to 10 or more significant figures for all the cases and over all time ranges reported herein .in the absence of anharmonic interactions the excitations of the system are phonons whose behavior is well known .it is useful to briefly review this behavior because phonons may be present in the nonlinear system , and their presence strongly affects the relaxation behavior of nonlinear excitations .there are two informative measures to characterize the relaxation behavior of an array initially thermalized at temperature and then allowed to relax through the array boundaries into a zero temperature heat bath .one is the total array energy as a function of time , and the other is the time dependent spectrum .the total energy is defined as the sum over symmetrized site energies , e.g. in one dimension the time dependent spectrum is the fourier transform of the time dependent correlation function , where and is chosen for a desired frequency resolution .the choice ( corresponding to ) , turns out to be numerically convenient . in one dimensionthe time dependent correlation function is where is the relative displacement and .the correlation function is thus an average over the interval ] and $ ] vs to be straight lines over appropriately long time intervals . in fig .[ figure8 ] we clearly see this behavior , which extends over the entire simulation time interval for the higher amplitude excitation .the slope for the curve leads to a decay time of , a specific number reported here principally to stress its enormous magnitude compared to phonon relaxation times .the change in slope of the curve associated with the lower amplitude breather captures the slow change in the decay rate as the breather frequency edges toward the phonon band .here we also see clearly that the more energetic breather relaxes more slowly .a breather of a given amplitude has a characteristic predominant frequency . in fig .[ figure9 ] we show this frequency in relation to the phonon band edge as a function of time for various cases . for 31-site chains ,the frequency of the breather of initial amplitude decreases very little over the entire simulation , while that of initial amplitude decreases more markedly .consistent with the fact that the breather does not disappear entirely in the time range shown , its frequency never reaches the phonon band edge .if the initial amplitude of the excitation is sufficiently low , or the simulation time sufficiently long , or the chain sufficiently short , the breather is seen to disappear .this last case is illustrated in the figure for a breather of initial amplitude in a 21-site chain .the breather disintegrates entirely when its frequency reaches the phonon band edge .the inset shows , the ratio of the energy of the five sites centered on the breather to the total energy . 
is of order unity when most of the energy is localized on a small number of sites .note that the lifetime of this breather , which is of , is still much longer than the longest phonon lifetime , which is of .= 3.2 in the above results are typical of one particular set of conditions : a breather created exactly in the middle of a chain of sites whose ends are connected to a zero - temperature bath with dissipation parameter .it is interesting to explore the consequences of changing some of these conditions .we find that the dependence of the chain energy relaxation times on the initial amplitude of the breather is monotonic and decreases sharply with decreasing breather amplitude . over a simulation time of find that a breather of initial amplitude decays exponentially with a time constant . for amplitude find , and for the decay is no longer _strictly _ exponential , decreasing slightly from to over the course of the simulation .= 2.in = 2.in = 2.inthe evolution of the breather depends in an interesting way on its initial location and on the damping parameter when the latter is either very small or very large .figure [ figure10 ] shows the early evolution ( up to ) of an initially slightly off - center breather ( site 15 of a 31-site chain ) , for three values of the damping parameter .the middle panel is for , the damping we have considered so far .the behavior of the excitation in this panel starts out as we have described it , that is , it sheds some energy ( medium gray scale ) that dissipates quickly .although a small fraction of the energy that has been shed returns toward the breather , it is neither sufficient nor of the long wavelength variety to set it in motion ; most of the shed energy simply dissipates into the zero - temperature bath .the evolution of the breather proceeds much like that of a breather initially centered in the middle of the chain ( site 16 ) , with only a small modification of its decay time .this behavior is fairly robust for values of within an order of magnitude on either side of and for breathers that are excited not too near the chain ends .the situation is rather different if is either very small ( first panel ) or very large ( third panel ) . the qualitative similarity between these two extreme cases is apparent , and consistent with our discussion of high and low damping similarities in a purely harmonic chain ; the chain ends no longer effectively dissipate the energy that has been shed by the breather , and so it returns to perturb the breather and set it in motion . in turn , this causes the breather to decay more rapidly into more rapidly dissipated lower - energy excitations . in the very low damping case , energy that arrives at the chain ends can not go anywhere except back , much like a whip . 
in the very high casethe end sites are so damped that they can absorb very little energy from the rest of the chain , much like a wall .we have followed these particular histories over our usual time span of 3 million time units and find the decay times for , for , and for .the specific values change depending on the initial location of the breather and the values of the other parameters of the system , but the trend is clear .an odd parity breather initially centered _ exactly _ in the middle of the chain constitutes a singular case when damping is very low or very high , with relative decay rates _ opposite _ to those reported above .while the results are not much affected by the initial location of the excitation ( provided it is far from the chain ends ) , in this peculiar case the extreme cases lead to _ slower _ decay than for . in this uniquely symmetric case ,the breather is perturbed from both sides by _ identical _ energy pulses that return from the ends of the chain . in the absence of symmetry breaking , the breather is therefore not set in motion , and instead simply re - absorbs this energy ( and re - emits and re - absorbs energy in increasingly smaller amounts ) . since the energy that returns from the chain ends is greater in the extreme cases than it is for intermediate , the chain energy remains higher , and the decay is thus slower .breather decay times are strongly dependent on chain length : the decay times increase markedly , as does the total lifetime of the breather , with increasing because finite size effects and disturbances scattered back from chain boundaries are reduced .this is already apparent when one compares the and results in fig .[ figure9 ] . whereas a breather of initial amplitude created at the center of a 31-site chain has barely decayed over 3 million time units , a breather of the same initial amplitude in a 21-site chain has disintegrated completely well before that . with and for the centered breather we find for ( as reported above ) , for , and for . exponential decay points to a single rate - limiting decay channel for the energy .this channel is the shedding of energy in the form of phonons and/or lower energy localized excitations by the breather .the degradation of lower - energy localized excitations , and the dissipation of energy into the zero - temperature bath , are much faster processes . however , the relaxation rate associated with the shedding process is strongly dependent on chain length , breather location , and other system parameters . to tie together all the scenarios that we have presented in support of our picture of breather dynamics in mixed arrays , we add one more experiment " : we follow the dynamics of a breather injected into a relaxing chain _after _ the long wavelength phonons have decayed , but before the thermal relaxation process is complete .if our picture is correct , the breather lifetime should be much longer than that of one injected at time ( albeit perhaps shorter than that of the same breather injected in a zero temperature chain ) .we do indeed find that the breather stability improves dramatically .for example , for a zero - temperature injected breather of initial amplitude in a chain of 31 sites we reported above that over 3 million time units the relaxation time of the breather is . 
in a chaininitially thermalized at and then allowed to relax , if we wait until before injecting the same breather we find a somewhat shortened but still very long decay time of , in any case much longer than it would be if injected at .we end this section with a caveat : all the exponential and quasi - exponential slow decays reported for the various scenarios are for _ single _ realizations . in thermalized scenarios wherebreathers are created spontaneously ( but not necessarily in every realization ) , an ensemble average could lead to a time dependence of the array energy that may be complicated by the occurrence of a broad range of relaxation times . in the other scenarios , e.g. where breathers are injected manually , "a range of relaxation times might also occur in an ensemble if the location of the breather varies from one realization to another .we have studied the dynamics and relaxation of breathers in fermi - pasta - ulam arrays whose boundaries are connected through damping terms to a zero temperature heat bath .we find that breather dynamics and relaxation in these nonlinear arrays with quartic inter - particle interactions proceed along energetic pathways that are highly sensitive to the presence or absence of quadratic contributions to the interactions . to understand the role of quadratic interactions we have recalled that phonons in these arrays relax independently of one another ( provided the damping at the boundaries is not too strong ) , that the phonon relaxation time is wavevector dependent , and that phonons therefore relax sequentially , starting with the longest wavelengths for the free - end boundary conditions used in our work .we have also pointed out that breathers are fragile against collisions with long wavelength phonons and also with other localized nonlinear excitations .breathers are therefore quite robust in the absence of long wavelength phonons and of other nonlinear excitations , but are rapidly degraded in the presence of either .to arrive at these conclusions , and to investigate them more quantitatively ( at least numerically ) , we have performed a number of numerical experiments involving the spontaneous and the manual creation of breathers in arrays initially at finite temperatures and at zero temperature .breather decay brought about by collisions with long wavelength phonons and with other nonlinear excitations , and by the associated breather motion , is rapid , even more rapid than typical relaxation times of high frequency phonons .the actual process of breather disintegration caused by collisions and associated motion is one whereby the breather breaks up rapidly into lower energy excitations .these mechanisms of breather decay cause their lifetimes to be short in systems that contain such excitations .examples include thermalized purely anharmonic arrays that have no efficient way to eliminate their thermal excitations . 
even at zero temperature , a manually injected localized mode in a purely anharmonic array will ( perhaps after a prolonged period of stability ) eventually succumb rather suddenly and rapidly to the very perturbations produced during the relaxation process as the localized mode re - accommodates itself and/or the energy it sheds is reflected back by the system boundaries ( finite size effects ) .breather decay is also rapid if a breather is manually injected in a thermalized mixed array , mainly due to the effects of long wavelength phonons .since these phonons are highly destructive of breathers , breather degradation is observed even when the temperatures involved are extremely low . on the other hand ,breathers that are isolated from the effects of long wavelength phonons and of other nonlinear excitations persist for extremely long times .examples are spontaneous breathers that arise during thermal relaxation of a mixed array .long wavelength phonons as well as other nonlinear excitations that themselves decay into phonons are the first to relax , and spontaneous breathers make their appearance when the system has already been swept clean of these particular excitations .short wavelength phonons do not destroy breathers ; on the contrary , they tend to be absorbed by them and thus to contribute to their stability .the crucial importance of phonons and their ability to relax into the cold temperature heat bath ( especially the long wavelength phonons ) is thus evident : spontaneously created breathers in mixed arrays can persist because of the phonon dynamics , whereas the absence of phonons ( sonic vacuum ) and the consequent difficulty of purely anharmonic arrays to eliminate offending " excitations leads to the inability of spontaneously created breathers to persist .for the same reasons , manually injected breathers in mixed arrays can persist for a very long time if inserted in a zero temperature array , or in an array in which long wavelength phonons and other nonlinear excitations have already relaxed , but breathers will not survive if injected in a thermalized mixed chain , or even in a chain that is allowed to cool down to a very low but nonzero temperature , no matter how low the temperature .having established conditions that favor breather longevity ( mixed anharmonic chains at zero temperature or partially relaxed to zero temperature ) , we have noted that even these breathers must eventually relax through some intrinsic energy shedding process since the chain must eventually equilibrate to zero temperature .this intrinsic process is very slow and essentially exponential over very long periods of time , although some deviation from strict exponentiality is eventually observed because the relaxation time is amplitude dependent .thus , as the breather slowly loses energy / amplitude , this single " relaxation time necessarily decreases .this quasi - exponential decay continues until the breather frequency ( which decreases with decreasing amplitude ) approaches the phonon band , at which point the breather quickly breaks up into phonons that decay rapidly .the very slow quasi - exponential decay is indicative of essentially a single leakage channel .we have observed that the slow decay rates are dependent on system parameters , on breather location , and on breather amplitude .therefore , whereas single observations of energy relaxation of systems supporting one long - lived breather will lead to essentially exponential decay over many decades of time , ensemble averages 
might show more complex behavior .this is a specially strong caveat in experiments involving spontaneous breather creation and the associated possibility of realizations in which no breathers occur at all .this work was supported by the engineering research program of the office of basic energy sciences at the u. s. department of energy under grant no .de - fg03 - 86er13606 .support was also provided by a grant from the university of california institute for mxico and the united states ( uc mexus ) and the consejo nacional de ciencia y tecnologa de mxico ( conacyt ) .v. m. burlakov , s. a. kiselev , and v. n. pyrkov , phys .b * 42 * , 4921 ( 1990 ) ; r. dusi , g. viliani and m. wagner , phil . mag .b * 71 * , 597 ( 1995 ) ; phys .b * 54 * , 9809 ( 1996 ) ; y. a. kosevich , phys . rev .b * 47 * , 3138 ( 1993 ) .a. j. sievers and s. takeno , phys .lett . * 61 * , 970 ( 1988 ) ; j. b. page , phys . rev .b * 41 * , 7835 ( 1990 ) ; k. w. sandusky , j. b. page and k. e. schmidt , phys . rev .b * 46 * , 6161 ( 1992 ) ; t. dauxois and m. peyrard , phys .lett . * 70 * , 3935 ( 1993 ) ; s. aubry , physica d * 71 * , 196 ( 1994 ) ; r. s. mackay and s. aubry , nonlinearity * 7 * , 1623 ( 1994 ) ; d. cai , a. r. bishop and n. gronbech - jensen , phys . rev .e * 52 * , 5784 ( 1995 ) ; s. takeno and m. peyrard , physica d * 92 * , 140 ( 1996 ) .
|
breather stability and longevity in thermally relaxing nonlinear arrays depend sensitively on their interactions with other excitations . we review the relaxation of breathers in fermi - pasta - ulam arrays , with a specific focus on the different relaxation channels and their dependence on the interparticle interactions , dimensionality , initial condition , and system parameters . * breathers are highly localized oscillatory excitations in discrete nonlinear lattices that have been invoked as a possible way to store and transport vibrational energy in a large variety of physical and biophysical contexts . a particular scenario where the robustness and longevity of breathers has been a matter of considerable debate involves nonlinear arrays subject to thermal relaxation via the connection of surface sites to a cold environment . the important questions are these : can breathers ( created spontaneously or by design ) survive for a long time in such a relaxing environment ? if they can survive , can they move ? we detail answers to these questions , one of which is rather unequivocal : breathers that move do not live very long . so is another : breathers are quite robust when they do not move . the more complicated question then deals with the conditions that allow breathers to remain stationary and undisturbed for a long time in a relaxing environment . we detail some conditions that lead to this outcome , and others that definitely do not . *
|
artificial microparticles are increasingly applied in ceramics , paints , cosmetics , drug delivery , and several microbiological techniques such as microrheology .however , the negative environmental effects of polymer - based microparticles , for example through uptake by and high retention in marine organisms , are becoming increasingly clear . in order to be able to unravel the implications of microparticles on health and environment , it is crucial to understand their impact on cellular processes and especially their interactions with the cellular membrane , which is the most important protective barrier of the cell . the first step in the interaction of microparticles with living cellsis their adhesion to the membrane .this adhesion can be caused by a variety of mechanisms such as van der waals , coulomb or hydrophobic forces , or complementary protein interactions .subsequent internalization into the cell depends on the cell type , the particle size , and the particle surface moieties . as most research has focused on cellular particle uptake , littleis known about the organization and dynamics of particles that stay adhered to the membrane . using vesicles as model cells , membrane - associated microparticleshave been observed to exhibit a range of different behaviors , such as lateral diffusion , a long ranged attraction due to membrane deformation , aggregation , and crystal formation .however , a coherent picture describing these collective effects of microparticles on lipid membranes is still lacking . in order to categorize and systematically describe the multitude of observed interaction processes , we here employ a well - controlled model system of phospholipid giant unilamellar vesicles and microparticles that controllably interact via a complementary protein interaction .we study the microparticle adhesion on lipid membranes with confocal microscopy , which enables us to visualize the particle attachment , membrane deformation , and subsequent assembly pathways . with this system, we characterize three distinct states of the adhesion of individual spherical particles on membranes : attached , wrapped , and tubulated . by measuring the mobility of attached and wrapped particles, we establish that the size of the adhesion patch differs per particle .the state of individual particles determines the possible interactions between particles : wrapped particles interact via long ranged forces mediated by membrane deformation , attached particles can stick together via small adhesive vesicles , and wrapped and attached particles form dimers driven by the strong membrane - particle adhesion . 
these membrane - mediated assembly pathways in absence of any active processes point towards a general microparticle aggregation mechanism on cellular membranes .d - glucose ( 99% ) , sodium phosphate ( 99% ) , deuterium oxide ( 70% ) , ( 98% , sulfo - nhs ) , and ( 97% , bodipy ) were acquired from sigma - aldrich ; ( mpeg , m = 5000 ) from alfa aesar ; sodium chloride ( 99% ) and sodium azide ( 99% ) from acros organics ; ( 99% , edc ) from carl roth ; neutravidin ( avidin ) from thermo scientific ; dna oligonucleotides ( biotin-5-tttaatatta-3-cy3 ) from integrated dna technologies ; ( dopc ) , ( dope - peg - biotin ) , and ( ) from avanti polar lipids .all chemicals were used as received .deionized water is used with resistivity , obtained using a millipore filtration system ( milli - q gradient a10 ) .the here employed model system of giant unilamellar vesicles ( guvs ) and adhered microparticles builds on earlier work described in ref .guvs with diameters ranging from 5 to were prepared with electroformation in a glucose solution starting from a mixture of dopc , dope - peg - biotin , and dope - rhodamine . to separate the guvs from smaller lipid vesicles , we filter the solution using a whatmann pore size cellulosenitrate filter .fluorescent polystyrene microparticles ( diameter ) were synthesized in a surfactant - free radical polymerization . to ensure a specific and strong adhesion between the microparticles and the lipid membrane we coated the microparticles with avidin and mpeg , using an edc / sulfo - nhs coating procedure . per particles , neutravidin and mpeg were added .sodium azide was added to a concentration of to prevent bacterial growth .the density of biotin binding sites was quantified using a fluorescence assay with dna oligonucleotides having a biotin and a fluorescent marker .avidin is known to bind specifically to biotin , a process that occurs spontaneously when mixing of the guvs with microparticles in a phosphate buffered saline solution ( pbs , ph=7.4 ) that was density - matched to the particles using heavy water .the pbs had the same osmolarity as the glucose inside the guv .after an incubation time of , the guv - particle mixture was distributed into a microscope sample holder containing of pbs . in order to obtain tense vesicles ( )this sample was closed directly to prevent the evaporation of water . to obtain floppy vesicles ,the sample was imaged without closing the sample holder , allowing water to evaporate from the sample .evaporation increases the osmolarity of the outside solution without any osmotic shocks on the vesicles so that vesicles gradually deflate . for floppy vesicles ,the corresponding membrane tension was below , as confirmed by a membrane fluctuation analysis .as the membrane - particle linkage is essentially irreversible , the particles stay wrapped even when the membrane tension increases afterwards .diffusion measurements of membrane - attached particles were performed by imaging the top part of a tense guv with attached particles ( see figure s1 ) .the particle trajectories were extracted from the recorded image sequences using trackpy .simultaneously , the fluorescence signal of the top part of the vesicle was recorded .the center of the vesicle was extracted from a two - dimensional gaussian fit of this fluorescence and , using a separate measurement of the vesicle size , the full three - dimensional coordinates of the particles could be reconstructed . 
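a rough sketch of this pipeline , built on trackpy , is given below . the file name , feature diameter , linking parameters , the two - dimensional gaussian model of the vesicle - cap fluorescence and the sign convention of the z reconstruction are illustrative assumptions , not the settings used in the actual measurements .

```python
import numpy as np
import pims
import trackpy as tp
from scipy.optimize import curve_fit

# rough sketch of the tracking pipeline; file name and parameters are
# illustrative assumptions, not the values of the actual measurements.

frames = pims.open('top_of_vesicle.tif')                  # hypothetical stack
features = tp.batch(frames, diameter=15, minmass=200)     # locate particles
tracks = tp.link(features, search_range=5, memory=3)      # link trajectories

def vesicle_center(image):
    """in-plane centre of the vesicle cap from a 2d gaussian fit."""
    y, x = np.mgrid[0:image.shape[0], 0:image.shape[1]]
    def gauss2d(coords, a, x0, y0, s, offset):
        xx, yy = coords
        return (a * np.exp(-((xx - x0)**2 + (yy - y0)**2) / (2 * s**2))
                + offset).ravel()
    p0 = (float(image.max()), image.shape[1] / 2, image.shape[0] / 2,
          20.0, float(image.min()))
    popt, _ = curve_fit(gauss2d, (x, y), image.astype(float).ravel(), p0=p0)
    return popt[1], popt[2]

def reconstruct_z(xp, yp, xc, yc, radius):
    """lift in-plane coordinates onto the upper spherical cap of the vesicle."""
    r2 = (xp - xc)**2 + (yp - yc)**2
    return np.sqrt(np.clip(radius**2 - r2, 0.0, None))
```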
as membrane - wrapped particles tend to interact with other particles ( see results section ) ,diffusion measurements of these particles required continuous confirmation of the wrapped state .therefore , these measurements were performed on image sequences of the equatorial plane of a particle - covered guv ( see figure s2 ) .the particle coordinates were extracted in the same way as for the attached particles .the vesicle position was determined by fitting circles to the vesicle contour .together with a separate measurement of the vesicle size , this yielded the full three - dimensional coordinates of the particles .the diffusion coefficient was measured from the one - dimensional mean squared displacement along the vesicle contour .as the tracked particles could freely move out from the equatorial plane , this limited the length of the particle tracks and thus increased the uncertainty in the single - particle diffusion coefficients .particles that were closer than to other particles were omitted to rule out many - particle effects .the diffusion coefficient was measured for each particle separately with linear regression of the mean squared displacements .as particles were confined to a spherical surface , the mean squared displacement only grows linearly in time if the squared curvature is much larger than the measured mean squared displacement . in order to meet this condition , we limited the displacement measurements to short time intervals up to 4 frames , corresponding to maximum lag times of .samples were imaged with a nikon ti - e a1r confocal microscope equipped with a water immersion objective ( na = 1.2 ) .to enable fluorescence imaging , the particles were loaded with bodipy , and the guvs were doped with dope - rhodamine .the coverslips were coated with a layer of polyacrylamide to prevent particle adhesion .high - speed images used in the diffusion measurements were recorded at using a horizontal resonant scanning mirror , while high - resolution close - ups were recorded with a set of galvano scanning mirrors .polymer microparticles can adhere to lipid membranes by various kinds of ( bio)chemical interactions .in our experiments , we established adhesion in a controlled manner by an avidin - biotin interaction : the particles are coated with the protein avidin and the membranes include a biotinylated lipid .once connected to a membrane , particles do not detach due to the high energy associated with the non - covalent linkage between biotin and avidin ( approx . , with being the thermal energy ) , and the possibility of forming multiple connections ( see figure [ fig : wrapping ] ) .previously , it was established that the membrane fully wraps around the microparticle if the membrane - particle adhesion energy per unit area exceeds the cost for membrane bending and membrane tension . here denotes the particle radius and the membrane bending rigidity .we tune the adhesion state by adjusting the energy through the density of biotin binding sites on the particles . below a critical density of, the membrane does not deform under the influence of microparticle adhesion ( see figure [ fig : wrapping]_a - c _ ) . above the critical linker density and in the limit of low surface tension , , particles are fully wrapped and connected to the membrane only by a `` neck region '' ( see figure [ fig : wrapping]_d - f _ ) .the shape of this region is predicted to be catenoidal by the canham - helfrich model . 
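the one - dimensional mean squared displacement along the contour and the resulting diffusion coefficient can be computed as in the sketch below , assuming the particle positions in the equatorial plane have already been converted to angles along the vesicle contour ; the frame time , vesicle radius and the four - frame maximum lag enter as parameters .

```python
import numpy as np

# sketch of the diffusion analysis along the equatorial contour: angular
# positions are converted to arc length, the 1d msd is built for short lag
# times only, and d follows from a linear fit of msd = 2 d t.

def msd_along_contour(theta, radius, max_lag=4):
    """1d msd of the arc-length coordinate s = radius * theta."""
    s = radius * np.unwrap(theta)        # unwrap removes 2*pi jumps
    lags = np.arange(1, max_lag + 1)
    msd = np.array([np.mean((s[lag:] - s[:-lag])**2) for lag in lags])
    return lags, msd

def diffusion_coefficient(theta, radius, frame_time, max_lag=4):
    lags, msd = msd_along_contour(theta, radius, max_lag)
    slope, _ = np.polyfit(lags * frame_time, msd, 1)
    return slope / 2.0                   # one-dimensional: msd = 2 d t
```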
at high membrane tension, particles do not induce membrane deformation , at least up to the optical resolution limit . in this article, we will investigate the dynamics of particles in the attached and the wrapped states and their assembly behavior .we will first probe the membrane adhesion of isolated particles by measuring the influence of the membrane on the particle mobility .then , we will map out the assembly pathways of multiple particles that lead to membrane - mediated configurations .finally , we will investigate spontaneous tubulation that occurs at particle inclusions . while a particle retains its lateral mobility when it adheres to a liquid lipid membrane , the brownian motion of the particle changes from three - dimensional to two - dimensional .therefore its mean squared displacement changes from in the freely dispersed case to in the bound case , where is the diffusion coefficient and denotes the lag time over which is measured . from the saffman - delbrck equations we expect that upon particle adhesion its diffusion coefficient is lowered significantly due to an additional drag force caused by the relatively high viscosity of the membrane .the particle mobility is then related to the effective patch size of the membrane that is forced to move along with the particle . for the diffusion measurements of attached particles , we used particles with biotin binding site densities ranging from to adhered to membranes with a surface tension above .this high choice of the membrane tension ensured that particles were not wrapped ( see figure [ fig : diffusion]_a _ ) .as a reference , we first measured the diffusion constant of freely suspended particles and found it to be .then , from 58 trajectories with an average length of 2290 frames , we obtained a distribution of the single - particle diffusion coefficients after membrane attachment , as shown in figure [ fig : diffusion]_c_. compared to freely suspended particles , the diffusion coefficient lowered to .furthermore , we observed a significant difference between individual particles .this spread in the diffusion coefficients is not due to a measurement imprecision , as we omitted single - particle diffusion coefficients with an estimated standard error above .thus , we conclude that the membrane imposes a drag that differs per particle .although we observed a spread in the distribution of single - particle diffusion coefficients , there was no apparent difference in the particle - membrane adhesion distinguishable in the confocal microscopy images . within the optical resolution limit ,all particles were observed not to deform the membrane , which is in accordance with the theoretical prediction when the membrane tension is larger than the membrane - particle adhesion per unit area .still , even without any membrane deformation , the binding patch may differ from particle to particle .the finite distance between the membrane and biotin gives rise to a binding patch diameter set by the linker length and the particle radius via .if we estimate to be , which is based on the size of a peg2000 spacer , the associated binding patch diameter is , as illustrated in figure [ fig : wrapping]_c_. within this potential binding area , multiple links can be formed without deforming the membrane , due to the flexibility of the peg spacer . 
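the size of this binding patch follows from simple geometry : linkers of length delta can bridge the gap wherever the sphere surface lies within delta of the flat membrane , i.e. over a spherical cap of height delta . the sketch below evaluates this estimate ; the particle radius and linker reach are assumed round numbers , not values determined in this work .

```python
import numpy as np

# geometric estimate of the binding-patch diameter for a sphere of radius r
# resting on a flat membrane: linkers of length delta reach the membrane over
# a spherical cap of height delta, whose base diameter is
# 2 * sqrt(delta * (2 r - delta)).  the numbers below are assumed values.

def binding_patch_diameter(particle_radius, linker_length):
    d = linker_length
    return 2.0 * np.sqrt(d * (2.0 * particle_radius - d))

r = 0.5e-6         # m, assumed particle radius (about 1 um diameter)
delta = 10e-9      # m, assumed reach of the peg-biotin linker
print(binding_patch_diameter(r, delta) * 1e9, "nm")   # roughly 200 nm
```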
a variation the concentration or spatial distribution of avidin on the particle , and thus the number of bound lipids that are forced to diffuse with the particle , likely leads to the observed spread in particle mobility . unfortunately , we can not infer a particle - bound membrane patch size using the saffman - delbrck equations from the observed mobility of membrane - attached particles , such as is appropriate for smaller membrane - adhered objects .the reason for this is that the motion of the particle is governed by both the bulk liquid and the membrane , which is apparent from the observation that the diffusivity of freely diffusing particles is on the same order of magnitude as the attached particles .we measured the diffusion of wrapped particles using a single batch of particles with biotin binding site densities between and ( see figure [ fig : diffusion]_d - e _ ) . to ensure a wrapped fraction of particles of 60% , we increased the available membrane surface area by lowering the tension of the guvs temporarily below .we measured 35 trajectories with an average length of 153 frames ; the maximum error in these measurements was .as can be seen in figure [ fig : diffusion]_f _ , again the membrane association lowered the diffusivity of the particles .however , the observed spread in the diffusion coefficients was even larger than the spread for attached particles : some particles exhibited a diffusion coefficient close to that of freely suspended particles , while others experienced up to a three - fold decrease .we presume that the reduced mobility of wrapped particles is caused by the neck region that connects particle and membrane , and not by the membrane region that is wrapped around the particle .although we can not observe any differences in this neck region between individual particles , there likely is a variation in the neck size that is below the resolution of the microscope ( approximately ) .we hypothesize that this neck size is determined by a non - adhesive patch on the particle ( see figure [ fig : wrapping]_f _ ) .this non - adhesive patch is likely caused by the finite number of particle - attached linkers : for the here employed maximum linker density of , the average distance between linkers is , whereas for the critical linker density for wrapping of this distance is .another reason for the presence of a non - adhesive patch could be the adsorption of small vesicles on the particle surface .this would locally occupy linkers which can subsequently not connect to the guv membrane anymore , thereby effectively creating a non - adhesive patch that also sets the particle - induced neck size . 
to summarize , membrane - attached particles do not deform the membrane , but still experience a varying membrane - imposed drag through variations in the number of membrane - particle linkages .wrapped particles also experience a varying drag , but now governed by the size of a non - adhesive patch on the particle , which may again be determined by the finite density of particle - attached linkers , or adsorption of small secondary lipid structures on the particle that inhibit a patch of linkers .therefore , a spread in the linker concentration , or the presence of small lipid vesicles , results in a variation in the microscopic nature of the membrane - particle linkage .the here described diffusion measurements revealed a significant spread in the particle dynamics , which may affect the timescales of the assembly pathways that are described in the next section .furthermore , the observed spread in the diffusivity of wrapped particles implies a variation in the particle - induced membrane deformation and thus may have consequences for the long ranged attraction between membrane - deforming particles that was quantified previously . in order to precisely measure the effect of the particle - induced deformation on particle mobility and interaction , an accurate control over the neck sizeis required .this might be achieved using asymmetric `` janus '' particles that have a controlled non - adhesive patch size .the two states of particle - membrane adhesion , attached to and wrapped by the membrane , potentially lead to different interactions . in previous experiments , observations of a long - ranged membrane deformation - mediated attraction have been described and later quantified , while also different phenomena such as permanent aggregation or crystallization have been reported .the underlying mechanisms have been conjectured to be a long - ranged membrane - bending mediated force and a short - ranged adhesion mediated force .however , a connection between the particle - membrane state , the involved type of force and resulting assembly pathway has not been established yet . in order to develop a coherent picture, we will here employ confocal microscopy to systematically study the interaction pathways between all combinations of single - particle adhesion states : ( 1 ) wrapped - wrapped , ( 2 ) attached - attached , and ( 3 ) wrapped - attached . two membrane - wrapped particles ( figure [ fig : pathways]_b _ ) exhibit a long ranged interaction that has been described extensively in theory and simulation , and only recently quantified by experiments . in these experiments , the interaction has been reported to be attractive with a strength ranging from to . despite the large difference in the absolute attraction strength , the observed interaction extends in both cases over several micrometers and has therefore been attributed to membrane deformation . 
on the other hand , membrane - attached particles ( figure [ fig : pathways]_a _ )have been shown to not interact with each other via a deformation - mediated interaction .next to these known assembly pathways , we observed a third possibility when a wrapped particle encountered an attached particle : the local deformation of the membrane induced by the wrapped particles can serve as a binding site for attached particles ( figure [ fig : pathways]_e _ and [ fig : dimer_render ] ) .this phenomenon was observed each time when an attached particle came in contact with a wrapped particle .attached particles were captured irreversibly due to the increased contact area on top of the wrapped particle ( see figure [ fig : dimer_render ] ) .although the membrane - particle linkages are mobile in the membrane , the attached particle can not escape once the linkages are formed in this ring - shaped region .this is because the formed ring of bound linkers would need to cross the central non - binding patch , which requires the breakage of linkages . in this way, the attached particle is topologically protected from detaching from the wrapped particle and diffuse together as a membrane - mediated dimer .rarely , we observed a two - step hierarchical wrapping : after formation of the membrane - mediated dimer , the previously attached particle becomes wrapped resulting in a tube - like structure containing two wrapped spheres ( see figure [ fig : pathways]_f _ ) .apparently , the biotin binding sites density on the second particle is large enough to ultimately become fully wrapped , but this only happened after the dimerization upon a further decrease in membrane tension .in addition to these assembly pathways that are determined by the adhesion state of the particles , we found that the presence of small ( ) lipid vesicles can also lead to permanent binding between particles ( figure [ fig : pathways]_d _ and s3 ) .these small vesicles can only be identified by their fluorescence in confocal images , while they are typically invisible in bright field microscopy .this irreversible aggregation has also been observed in previous measurements and hypothesized to originate from a membrane - deformation mediated force , although at that time the presence of small lipid vesicles could not be excluded .while membrane deformation might have induced the reported long - ranged attraction , it remains unclear whether the observed irreversible aggregation was due to membrane deformation or short - ranged adhesion .this aggregation pathway can be suppressed by removing the small lipid vesicles , using for instance filtration .contrary to the attached particles , fully wrapped particles can not aggregate via small lipid vesicles , purely for the reason that their surface is already occupied .thus , the short - ranged membrane - particle adhesion mechanism drives the lipid vesicle mediated interactions that result in aggregation of attached particles ( figure [ fig : pathways]_d _ ) as well as in the dimerization of attached and wrapped particles ( figure [ fig : pathways]_e _ ) .these adhesion - driven interactions continue to play a role as long as there is any area on the particle left that is not covered , such as is the case for the attached particles ( figure [ fig : pathways]_a _ and _ d _ ) or for the membrane - mediated dimer ( figure [ fig : pathways]_e _ ) . 
indeed , two dimers assembled into membrane - mediated tetramers ( figure [ fig : dimer_render]_c _ ) , and multiple attached particles aggregated via secondary lipid vesicles . depending on the particle concentration and waiting time after mixing , particles assembled into permanent aggregates of 5 - 50 particles ( see figure s3 ) .in summary , by using confocal microscopy we were able to distinguish a variety of assembled microparticle structures mediated purely by a lipid membrane , either through membrane deformations or the short - ranged adhesive forces between membrane and particles .we correlated the different observed structures with the initial state of wrapping of the individual particles and found three mechanisms for membrane - mediated interactions .firstly , membrane deformations give rise to long - ranged interactions . secondly , for a system consisting of non - wrapped particlesonly , particle aggregates are caused by small ( ) secondary lipid vesicles .thirdly , collisions between non - wrapped and wrapped particles lead to permanent membrane - mediated dimers due to the formation of a ring - like binding topology .as long as there is unoccupied area on particles , the strong membrane - particle adhesion force can drive subsequent aggregation of particles .this observation of aggregation on guvs proves that for the formation of membrane - mediated aggregates , no cytoskeleton or other active components are necessary : the adhesion between particle and membrane is sufficient for aggregation .after a particle had been wrapped by the lipid membrane , we occasionally observed spontaneous formation of a membrane tube starting from the neck of the wrapped particle onwards ( see figure [ fig : tubulation]_a _ and corresponding movie s1 ) .this process occurred for less than 1 % of all wrapped particles .the particles remained wrapped by and connected to the vesicle membrane , but could freely diffuse in the vesicle interior .the membrane tubes exhibited large thermal fluctuations and we never observed reversal of the tubulation process , which suggests the absence of tensile forces that would retract the particle . spontaneous formation of membrane tubes is an ubiquitous process in living cells .however , in our experiments this is only possible if the bilayer would have a preferred curvature . in the absence of such a preferred curvature, it has been established that a supporting force is necessary to stabilize membrane tubes against retraction , and that its radius is given by .taken together , this yields .since the the membrane diameter of our observed tubes is below the diffraction limit , , a tube - supporting tensile force would need to be larger than ( given ) .clearly , such a force is absent in our experiments , as the particle diffuses freely after tube - formation for the duration of the experiment .thus , we conclude that only a preferred curvature of the bilayer could explain the observed membrane tubes .such a preferred curvature might be imposed by the design of our specific model membrane : the bulky dope - peg - biotin lipids could be depleted locally from the outer membrane leaflet by binding to particles , thereby creating a leaflet asymmetry and preferred curvature that stabilizes the inward tube formation .however , given the here employed dope - peg - biotin molar fraction of 0.5% , this mechanism seems unlikely .the number of these functionalized lipids per unit area is approximately . 
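this number can be estimated with a simple sketch ; the area per lipid used below is an assumed literature value for a fluid dopc leaflet , so the result is only an order - of - magnitude estimate and not necessarily the figure elided above .

```python
# back-of-the-envelope areal density of the functionalized lipid, assuming a
# typical area per lipid of about 0.7 nm^2; the 0.5% molar fraction is the
# value quoted in the text, the area per lipid is an assumed literature number.

MOLAR_FRACTION = 0.005
AREA_PER_LIPID = 0.7e-18          # m^2 per lipid, assumed

density = MOLAR_FRACTION / AREA_PER_LIPID        # functionalized lipids per m^2
spacing = density ** -0.5                        # mean distance between them

print(f"{density * 1e-12:.0f} per um^2 per leaflet, spacing ~{spacing * 1e9:.0f} nm")
```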
on the one hand, this amount is too low to have any lateral steric interactions between the peg2000 polymers and thus steric pressure that may lead to an induced curvature . on the other hand ,this concentration is too high to induce a sufficient leaflet asymmetry due to the comparatively low binding site density on the particles which only depletes an insignificant fraction of dope - peg biotin lipids on one side of the bilayer .therefore we here hypothesize that the observed tube is in fact a rodlike micelle : the tube connecting the particle and the bilayer consists of a monolayer of lipid molecules instead of a bilayer ( see figure [ fig : tubulation](_b _ ) ) .this monolayer is formed from the inner leaflet of the vesicle after a spontaneous bilayer rearrangement in the neck region of a wrapped particle .the large energy barrier required for such a rearrangement of the bilayer is overcome by the high curvature of the neck region . after the formation of the micelle - like tube ,the overall membrane curvature is lowered and the neck region can be extended and further stabilized by the bulky headgroup of the wedge - shaped dope - peg - biotin and the free energy gain from the additional translational entropy of the particle .in addition , we did not observe any attraction between the tube origin on the guv and other membrane - attached particles such as is present for wrapped particles .this further suggests that these tubular structures do not deform the outer membrane indicating that the observed structures are rodlike micelles .we investigated the interaction of adhesive microparticles with lipid membranes and established three distinct particle - membrane configurations .initially , a particle attaches to the membrane without creating any visible deformation .subsequently , given that the membrane tension is sufficiently low and the adhesion energy is sufficiently high , the membrane almost fully wraps the particle resulting in a membrane - coated particle .such a wrapped particle is attached to the membrane via a narrow ( ) neck region .finally , this neck region may expand in a tube or rodlike micelle of multiple micrometers that connects the wrapped particle and the membrane .after adhesion to the membrane , the particles remain laterally mobile on the membrane .we have quantified the particles mobility and found that their diffusion coefficient of in dispersion is reduced to after membrane attachment . as there is a significant spread in the measured single - particle diffusion coefficients ,we conclude that particles attach with varying number of linkers . 
for wrapped particles , we observed diffusion coefficients in the range of , showing that the neck region connecting particle and membrane is variable in size .we hypothesize that the diameter of the neck region is set by a non - adhesive patch on the particle surface , whose size varies due to the distribution of linkers .membrane - associated particles can interact with each other in three ways .firstly , membrane - wrapped particles interact with each other through a membrane deformation mediated force that ranges over several particle diameters .secondly , this interaction is not present for attached particles that do not deform the membrane .however , these can irreversibly aggregate via smaller secondary vesicles .thirdly , attached particles can become trapped in the deformation sites created by a wrapped particle , forming membrane - mediated dimers .the last two adhesion - mediated mechanisms ultimately result in complex random aggregates of multiple particles .the observed aggregation of colloidal particles mediated by lipid membranes shows that even in the absence of proteins or active components , particles aggregate in order to optimize the contact area with the lipid membrane , or to minimize the membrane deformation .this systematic description of the assembly pathways of microparticles on model lipid membranes will enable a better understanding of the assembly of membrane - associated objects in biological systems , for example the accumulation of microplastics .d.j.k . and d.h .designed the project ; c.v.d.w . carried out experiments and data analysis supervised by d.j.k . and d.h .; all authors contributed to writing and revising the manuscript .the authors thank timon idema for useful discussions and marcel winter for support during particle synthesis . this work was supported by the netherlands organisation for scientific research ( nwo / ocw ) , as part of the frontiers of nanoscience program and veni grant 680 - 47 - 431 .10 f. tang , l. li , and d. chen . ., 24:15041534 , 2012 .y. tseng , t. p. kole , and d. wirtz . ., 83(6):31623176 , 2002 .m. cole , p. lindeque , c. halsband , and t. s. galloway . ., 62:25882597 , 2011 .a. e. nel , l. mdler , d. velegol , t. xia , e. m. v. hoek , p. somasundaran , f. klaessig , v. castranova , and m. thompson . ., 8:543557 , 2009 .m. mahmoudi , j. meng , x. xue , x. j. liang , m. rahman , c. pfeiffer , r. hartmann , p. r. gil , b. pelaz , w. j. parak , p. del pino , s. carregal - romero , a. g. kanaras , and s. tamil selvan . ., 32(4):679692 , 2014 .a. vonarbourg , c. passirani , p. saulnier , and j. p. benoit . ., 27(24):43564373 , 2006 .j. rejman , v. oberle , i. s. zuhorn , and d. hoekstra . . ,377:159169 , 2004 .m. r. lorenz , v. holzapfel , a. musyanovych , k. nothelfer , p. walther , h. frank , k. landfester , h. schrezenmeier , and v. mailnder . ., 27(14):28202828 , 2006 .i. koltover , j. o. rdler , and c. r. safinya . ., 82(9):19911994 , 1999 .l. ramos , t.c .lubensky , n. dan , p. nelson , and d. a. weitz . ., 286:23252328 , 1999 .r. sarfati and e. r. dufresne . ., 94:012604 , 2016 . c. van der wel , a. vahid , a. ari , t. idema , d. heinrich , and d. j. kraft ., 6:32825 , 2016 .y. tamba , h. terashima , and m. yamazaki . ., 164:351358 , 2011 .j. appel , s. akerboom , r. g. fokkink , and j. sprakel . ., 34:12841288 , 2013 .j. pcraux , h .-dbereiner , j. prost , j .- f .joanny , and p. bassereau . ., 13:277290 , 2004 .d. b. allan , t. a. caswell , n. c. keim , and c. van der wel . ., 34028 , nov 2015 . c. 
van der wel ., 47216 , 2016 .s. paquay and r. kusters . . ,110:12261233 , 2016 .v. t. moy , e .-florin , and h. e. gaub . ., 266:257259 , 1994 .r. lipowsky , h .-dbereiner , c. hiergeist , and v. indrani . ., 249:536543 , 1998 .r. lipowsky and h .-dbereiner . ., 43(2):219225 , 1998 .m. raatz , r. lipowsky , and t. r. weikl . ., 10:35703577 , 2014 .j. agudo - canalejo and r. lipowsky . ., 9(4):37043720 , 2015 .p. g. saffman and m. delbrck . ., 72(8):31113113 , 1975 .b. d. hughes , b. a. pailthorpe , and l. r. white . ., 110:349 , 1981 .h. qian , m. p. sheetz , and e. l. elson . ., 60:910921 , 1991 .d. ernst and j. khler . ., 15:845849 , 2013 . p.g. de gennes . ., 27:189209 , 1987 .a. k. kenworthy , k. hristova , d. needham , and t. j. mcintosh . ., 68(5):19211936 , 1995 .t. t. hormel , s. q. kurihara , m. k. brennan , m. c. wozniak , and r. parthasarathy . ., 112:188101 , 2014 .a. naji , a. j. levine , and p. a. pincus . . ,93(11):l49 , 2007 .h. a. stone and h. masoud . ., 781:494505 , 2015 .n. li , n. sharifi - mood , f. tu , d. lee , r. radhakrishnan , t. baumgart , and k. j. stebe . ., 1602.07179 , 2016 .k. s. kim , j. neu , and g. oster . ., 75:22742291 , 1998 .b. j. reynwar and m. deserno . ., 7:8567 , 2011 .a. ari and a. cacciuto . ., 108:118101 , 2012 .m. terasaki , l. b. chen , and k. fujiwara . ., 103:15571568 , 1986 . c. r. hopkins , a. gibson , m. shipman , and k. miller . ., 346:335339 , 1990 .d. heinrich , m. ecke , m. jasnin , u. engel , and g. gerisch . ., 106(5):10791091 , mar 2014 .i. dernyi , f. jlicher , and j. prost . . , 88(23):238101 ,may 2002 .g. koster , a. cacciuto , i. dernyi , d. frenkel , and m. dogterom . ., 94:068101 , 2005 .m. shaklee , t. idema , g. koster , c. storm , t. schmidt , and m. dogterom . ., 105(23):79937997 , 2008 .w. rawicz , k. c. olbrich , t. mcintosh , d. needham , and e. evans . ., 79:328339 , 2000 .y. li , r. lipowsky , and r. dimova . ., 108(12):47314736 , 2011 . d. marsh .crc press , second edition , 2013 .l. arleth , b. ashok , h. onyuksel , p. thiyagarajan , j. jacob , and r. p. hjelm . ., 21(8):32793290 , 2005 .authors : c. van der wel , d. heinrich , and d. j. kraftvideos are separately available online . here , still images of the videos are shown together with their captions .
|
understanding interactions between microparticles and membranes of living cells is vital for many applications as well as for unraveling their influence on our health and the environment . here , we study how microparticles interact with model lipid membranes and each other solely due to a short - ranged adhesive force between membrane and particles . using confocal microscopy , we establish that above a critical adhesion energy particles first attach to the membrane , then become wrapped , and finally may induce membrane tubulation . the microparticle adhesion state affects the mobility of the particle through a varying drag force that is imposed by the membrane , which is caused by variation in the density and spatial distribution of the membrane - particle linkages . we subsequently categorize the assembly pathways of attached and wrapped microparticles according to whether they are driven by minimization of membrane deformation energy or the strong adhesion energy between membrane and particles . we find that while attached particles do not interact by membrane deformation , they may aggregate when small adhesive vesicles are still present in solution . only wrapped particles exhibit a reversible attraction that is mediated purely by the deformation of the membrane . when wrapped and non - wrapped particles coexist , they form membrane - mediated dimers that subsequently aggregate into a variety of different structures . the experimental observation of distinct assembly pathways that are only induced by a membrane - particle adhesion shows that microparticles can assemble on lipid membranes even in the absence of a cytoskeleton or other active components .
|
our work discussed here is motivated by our studies of hopf bifurcations for reaction systems in chemistry and gene regulatory networks in systems biology , which are originally given by systems of ordinary differential equations .hopf bifurcations can be described algebraically , resulting in one very large multivariate polynomial equation subject to few much simpler polynomial side conditions , , . for such systems oneis interested in feasibility over the reals and , in the positive case , in at least one feasible point .it turns out that , generally , scientifically meaningful information can be obtained already by checking only the feasibility of , which is the focus of this article . for further details on the scientific background ,we refer the reader to our publications . with one of our models , viz ._ mitogen - activated protein kinase ( mapk ) _ , we obtain and solve polynomials of considerable size .our currently largest instance contains 863438 monomials in 10 variables .one of the variables occurs with degree 12 , all other variables occur with degree 5 .such problem sizes are clearly beyond the scope of classical methods in symbolic computation .to give an impression , the size of an input file with in infix notation is 30 mb large .latex - formatted printing of would fill more than 3000 pages in this document .the mapk model actually yields even larger instances , which we , unfortunately , can not generate at present , because in our toolchain maple can not produce polynomials larger than 32 mb .this article introduces an incomplete but terminating algorithm for finding real roots of large multivariate polynomials .the principle idea is to take an abstract view of the polynomial as the set of its exponent vectors supplemented with sign information on the corresponding coefficients . to that extent ,out approach is quite similar to tropical algebraic geometry .however , after our abstraction we do not consider tropical varieties but employ linear programming to determine certain suitable points in the newton polytope , which somewhat resembles successful approaches to sum - of - square decompositions .we have implemented our algorithm in reduce using direct function calls to the dynamic library of the lp solver gurobi . in practical computations on several hundred examples ,our method has failed do to its incompleteness in less than 8 percent of the cases .the longest computation time observed was around 16 s. as mentioned above , the limiting factor at present is the technical generation of even larger input . in section [ se : positive ] we introduce a specialization of our method that only finds roots with all positive coordinates .this is highly relevant in our context of reaction networks , where typically all variables are known to be positive .we also discuss an illustrating example in detail .section [ se : arbitrary ] generalizes our method to arbitrary roots . in section [se : practical ] we discuss issues and share experiences related to a practical implementation of our method . in section [ se : computations ] we evaluate the performance of our method with respect to efficiency and to its incompleteness on several hundred examples originating from four different chemical and biological models .denote , and let . 
for , vectors of either indeterminates or real numbers , and , we use the notations and .we will , however , never consider a vector to the power of a number .our notations are compatible with the standard scalar product as follows : consider a multivariate integer polynomial ,\ ] ] where for , which is called the _ support _ of .the _ newton polytope _ of is the convex hull of .it forms a polyhedron in , which we identify with its vertices , formally .the following lemma is a straightforward consequence of the convex hull property .[ le : newton ] let ] .let , and let .then the following are equivalent : a. the hyperplane defined by strictly separates the point from , and the normal vector is pointing from in direction . in particular , .b. there is s.t . .assume that ( i ) holds .the orientation of is chosen such that and for .define . then andwe can choose .vice versa , assume that ( ii ) holds .it follows that hence defined by is a hyperplane separating from , where the distance between and is at least .furthermore , is oriented as required in ( i ) .[ le : equiv ] let ] .let such that .then there is such that for all with the following hold : a. |(f,_1)a^_1|>|_i=2^s(f,_i)a^_i| , b. .\(i ) from it follows that . by lemma [ le : one ] we know and for .it follows that there is such that we are going to show that is a suitable choice , where for and for all , the inequalities ( [ eq : three_1 ] ) and ( [ eq : three_2 ] ) and monotony yield using the triangle inequality it follows that straightforwardly implies \(ii ) it follows from ( i ) that for the sign of the monomial determines the sign of . since , we obtain after these preparations we can state our first subalgorithm as algorithm [ alg : findpos ] .[ th : findposcorrect ] consider ] . on that basis algorithm[ alg : findz ] computes such that , where denotes the algebraic closure of .[ le : construct - zero ] consider ]. then the function terminates and returns either or with .when one is interested only in the _ existence _ of a zero of , then one can , in the positive case , obviously skip ` construct - zero ` and exit from ` find - zero ` after line 8 .notice that , in addition , one can then also exit early from ` find - positive ` after line 8 in algorithm [ alg : findpos ] .and the segment given by } ] .we apply ` find - zero ` to find a point on the variety of .figure [ fig : real ] pictures the variety .we obtain , and apply ` find - positive ` to .figure [ fig : tropical ] pictures the support of and indicates the newton polytope .we split into , , and , and we construct our first lp problem -2 & -1 & 1\\ 0 & 2 & -1\\ 2 & 0 & -1\\ 5 & 0 & -1\\ 0 & 3 & -1 \end{bmatrix*}\cdot({\mathbf{n}},c)^t\leq \begin{bmatrix * } -1\\ -1\\ -1\\ -1\\ -1\\ \end{bmatrix*}\ ] ] is infeasible , which confirms the observation in figure [ fig : tropical ] that . our next lp problem 0 & -2 & 1\\ 2 & 0 & -1\\ 5 & 0 & -1\\ 0 & 3 & -1 \end{bmatrix*}\cdot({\mathbf{n}},c)^t\leq \begin{bmatrix * } -1\\ -1\\ -1\\ -1\\ \end{bmatrix*}\ ] ] is feasible with and .figure [ fig : tropical ] shows the corresponding hyperplane given by .it strictly separates from , and its normal vector is oriented towards .we now know that for sufficiently large positive .in fact , already the relevant part of the moment curve for ] .substitution of the real algebraic number .2,0.3[}\bigr\rangle ] .for finding zeros with consider , for \alpha,\infty[} ] consider , and for unbounded consider introducing a new variable .consider } ] .let such that , and let . 
then there is such that for all with the following holds : define with for and .then we have for , , and is odd by definition of the minimal odd coordinate .it follows that for we have at least .this allows us to conclude from lemma [ le : becomepos ] ( i ) that hence determines the sign of . using the inequality in ( [ eq : becomeposgen ] ) we obtain [ th : findposgencorrect ]consider ] in line 1 of algorithm [ alg : findpos ] , which tells us that the corresponding polynomial is positive definite ( on the interior of the first hyperoctant ) . running our method on the 289 remaining instances, it fails in only 7.3 percent of the cases .table [ tab : bench ] shows detailed information for the single models .it also shows size ( number of monomials ) , dimension ( number of variables ) , and the largest degree of an occurring variable for the respective largest instance .it furthermore shows the maximal computation time for a single instance and the sum of computation times .all computations have been carried out on a 2.8 ghz xeon e5 - 4640 with the mip approach , yielding exact algebraic number solutions : .statistics for our practical computations[tab : bench ] [ cols="<,>,>,>,>,>",options="header " , ] notice that for our particular application the detection of definiteness by our implementation establishes a perfect result . from that point of view, one could argue that our method fails in only 3 percent of the cases .we would like to thank d. grigoriev , h. errami , w. hagemann , m. kota , and a. weber for valuable discussions .a. norman realized a robust foreign function interface for csl reduce .we are also grateful to gurobi optimization inc . and to the institute for making their excellent software free for academic purposes .this research was supported in part by the german transregional collaborative research center sfb / tr 14 avacs and by the anr / dfg project smart .f. boulier , m. lefranc , f. lemaire , p .- e .morant , and a. rgpl . on proving the absence of oscillations in models of genetic circuits .in _ proceedings of the algebraic biology 2007 _ , volume 4545 of _ lncs _ , pages 6680 , 2007 .h. errami , m. eiswirth , d. grigoriev , w. m. seiler , t. sturm , and a. weber .efficient methods to compute hopf bifurcations in chemical reaction networks using reaction coordinates . in _ proceedings of the casc 2013 _ , volume 8136 of _ lncs _ ,pages 8899 , 2013 .r. m. karp .reducibility among combinatorial problems . in r.e. miller , j. w. thatcher , and j. d. bohlinger , editors , _ complexity of computer computations _ , the ibm research symposia series , pages 85103 .springer , 1972 .j. c. f. sturm .mmoire sur la rsolution des quations numriques . in _mmoires prsents par divers savants trangers lacadmie royale des sciences , section sc . math ._ , volume 6 , pages 273318 , 1835 .
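as an illustration of the separation step described in section [se:positive], the following sketch sets up the strict-separation lp for a candidate exponent vector and then walks along the moment curve x_i = t^{n_i} until the selected monomial dominates. it is a simplified floating-point prototype, not the reduce/gurobi implementation used for the computations above: it ignores exact arithmetic, the mip variant, the sign-vector handling of the other orthants, and the construction of exact zeros, and the example polynomial is made up.

```python
import numpy as np
from scipy.optimize import linprog

# f is given by its support: exponent vector -> coefficient (illustrative instance).
f = {(3, 1): 1.0, (1, 2): -1.0, (1, 0): -1.0, (0, 0): -2.0}

def separating_direction(support, alpha1):
    """lp: find (n, c) with <n, alpha1> - c >= 1 and <n, a> - c <= -1 for every
    other support point a, i.e. a hyperplane strictly separating alpha1 from
    the rest of the support and oriented towards alpha1; returns n or None."""
    others = [a for a in support if a != alpha1]
    d = len(alpha1)
    A_ub = [[-x for x in alpha1] + [1.0]]                    # c - <n, alpha1> <= -1
    A_ub += [list(map(float, a)) + [-1.0] for a in others]   # <n, a> - c     <= -1
    b_ub = [-1.0] * len(A_ub)
    res = linprog(np.zeros(d + 1), A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * (d + 1), method="highs")
    return res.x[:d] if res.success else None

def find_positive(f, t_max=2.0 ** 60):
    """heuristically return x with all coordinates > 0 and f(x) > 0, or None."""
    def value(x):
        return sum(c * np.prod(np.power(x, a)) for a, c in f.items())
    for alpha1, coeff in f.items():
        if coeff <= 0:
            continue                      # the dominating monomial must be positive
        n = separating_direction(f.keys(), alpha1)
        if n is None:
            continue
        t = 2.0
        while t <= t_max:                 # walk along the moment curve x_i = t^{n_i}
            x = np.power(t, n)
            if value(x) > 0:
                return x
            t *= 2.0
    return None

print(find_positive(f))
```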
|
we describe a new incomplete but terminating method for real root finding for large multivariate polynomials . we take an abstract view of the polynomial as the set of exponent vectors associated with sign information on the coefficients . then we employ linear programming to heuristically find roots . there is a specialized variant for roots with exclusively positive coordinates , which is of considerable interest for applications in chemistry and systems biology . an implementation of our method combining the computer algebra system reduce with the linear programming solver gurobi has been successfully applied to input data originating from established mathematical models used in these areas . we have solved several hundred problems with up to more than 800000 monomials in up to 10 variables with degrees up to 12 . our method has failed due to its incompleteness in less than 8 percent of the cases .
|
vector optimization is another name for multiple criteria decision making .the mathematical technique of the field is rich but leaves much to be desired ( for instance , see ) .one of the reasons behind this is the fact that the classical areas of mathematics dealing with extremal problems pay practically no attention to the case of multiple criteria .so it seems reasonable to suggest attractive theoretical problems that involve many criteria .some geometrical problems of the sort were considered in . in this articlewe address similar problems over symmetric convex bodies , using the the same technique that stems from the classical alexandrov s approach to extremal problems of convex geometry .a _ convex figure _ is a compact convex set . a _ convex body _ is a solid convex figure .the _ minkowski duality _ identifies a convex figure in and its _ support function _ for . considering the members of as singletons , we assume that lies in the set of all compact convex subsets of .the minkowski duality makes into a cone in the space of continuous functions on the euclidean unit sphere , the boundary of the unit ball . the _ linear span _ ] be the space of translation - invariant measures , in fact , the linear span of the set of alexandrov measures . and $ ] are made dual by the canonical bilinear form ) .\endgathered\ ] ] for and , the quantity coincides with the _ mixed volume _ .consider the set of centrally symmetric cosets of convex compact sets .clearly , a translation - invariant linear functional is positive over if and only if the _ symmetrization _ is positive over . here is the dual of the descent of the even part operator on the factor - space , since the symmetrization of a measure is the dual of the even part operator over .we will denote the even part operator , its descent and dual by the same symbol . given a cone in a vector space in duality with another vector space ,the _ dual _ of is to a convex subset of and there corresponds the _ cone of feasible directions _ of at . let .then the dual of the cone of feasible directions of at may be represented as follows the description of the dual of the feasible cones are well known ( see ( * ? ? ? * preposition 4.3 ) ._ let and be convex figures . then _ ; if then ; ; if then for . from thisthe dual cones are available in the case of minkowski balls ._ let and be convex figures . then _ ; if then ; ; if then for .alexandrov observed that the gradient of at is proportional to and so minimizing over will yield the equality by the lagrange multiplier rule .but this idea fails since the interior of is empty .the fact that dc - functions are dense in is not helpful at all .alexandrov extended the volume to the positive cone of by the formula with the envelope of support functions below .he also observed that .the ingenious trick settled all for the minkowski problem . this was done in 1938 but still is one of the summits of convexity .in fact , alexandrov suggested a functional analytical approach to extremal problems for convex surfaces . to follow it directly in the general settingis impossible without the above description of the dual cones. the obvious limitations of the lagrange multiplier rule are immaterial in the case of convex programs . 
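for reference, the support-function identities underlying the minkowski structure used throughout are the standard ones below, with h_A the support function of a convex figure A; the stripped propositions above presumably state these or closely related facts.

```latex
% standard support-function identities, h_A(z) := \sup_{a \in A}\langle a, z\rangle
\begin{align*}
  h_{A+B}               &= h_A + h_B,\\
  h_{\lambda A}         &= \lambda\, h_A \qquad (\lambda \ge 0),\\
  A \subseteq B \quad   &\Longleftrightarrow \quad h_A \le h_B \ \text{on the unit sphere},\\
  h_{\operatorname{conv}(A \cup B)} &= \max\{h_A,\, h_B\}.
\end{align*}
```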
it should be emphasized that the classical isoperimetric problem is not a minkowski convex program in dimensions greater than 2 .the convex counterpart is the urysohn problem of maximizing volume given integral breadth .the constraints of inclusion type are convex in the minkowski structure , which opens way to complete solution of new classes of urysohn - type problems . *the external urysohn problem : * among the convex figures , circumscribing and having integral breadth fixed , find a convex body of greatest volume . _a feasible convex body is a solution to the external urysohn problem if and only if there are a positive measure and a positive real satisfying _ ; ; for all in the support of , i. e. . if then is a _ spherical lens _ and is the restriction of the surface area function of the ball of radius to the complement of the support of the lens to . if is an equilateral triangle then the solution looks as in fig . 1 : is the union of and three congruent slices of a circle of radius and centers , while is the restriction of to the subset of comprising the endpoints of the unit vectors of the shaded zone . fig. 2 presents the general solution of the internal urysohn problem inside a triangle in the class of minkowski balls . fig .1 fig . 2consider a bunch of economic agents each of which intends to maximize his own income .the _ pareto efficiency principle _ asserts that as an effective agreement of the conflicting goals it is reasonable to take any state in which nobody can increase his income in any way other than diminishing the income of at least one of the other fellow members . formally speaking ,this implies the search of the maximal elements of the set comprising the tuples of incomes of the agents at every state ; i.e. , some vectors of a finite - dimensional arithmetic space endowed with the coordinatewise order .clearly , the concept of pareto optimality was already abstracted to arbitrary ordered vector spaces .* vector isoperimetric problem over minkowski balls : * given are some convex bodies .find a symmetric convex body encompassing a given volume and minimizing each of the mixed volumes . in symbols , clearly , this is a slater regular convex program in the blaschke structure .* internal urysohn problem with flattening over minkowski balls : * given are some convex body and some flattening direction .considering of fixed integral breadth , maximize the volume of and minimize the breadth of in the flattening direction : _ for a feasible symmetric convex body to be pareto - optimal in the internal urysohn problem with the flattening direction over minkowski balls it is necessary and sufficient that there be positive reals and together with a convex figure satisfying _ the last program is slater - regular and so we may apply the _ lagrange principle_. in other words , the value of the program under consideration coincides with the value of the free minimization problem for an appropriate lagrangian : here is a positive lagrange multiplier .we are left with differentiating the lagrangian along the feasible directions and appealing to the description of the dual cones .note in particular that the relation is the _ complementary slackness condition _ standard in mathematical programming .the proof of the optimality criterion for the urysohn problem with flattening over minkowski balls complete .* rotational symmetry : * assume that a plane convex figure has the symmetry axis with generator . 
assume further that is the result of rotating around the symmetry axis in .consider the problem : * the external urysohn problem with flattening over minkowski balls : * given are some convex body and flattening direction .considering minkowski balls of fixed integral breadth , maximize volume and minimize breadth in the flattening direction : _ for a feasible convex body to be a pareto - optimal solution of the external urysohn problem with flattening over minkowski balls it is necessary and sufficient that there be positive reals and together with a convex figure satisfying _
|
under study are some vector optimization problems over the space of minkowski balls, i.e., symmetric convex compact subsets of euclidean space. a typical problem requires achieving the best result in the presence of conflicting goals; e.g., given the surface area of a symmetric convex body, we try simultaneously to maximize its volume and minimize its width.
|
large - scale , internet - connected distributed systems are notoriously difficult to manage . in a resource - sharing environment such as a peer - to - peer system that connects hundreds of thousands of computers in an ad - hoc network , intermittent resource participation , large and variable scale , and high failure rates are challenges that often impose performance tradeoffs .thus , existing p2p file - location mechanisms favor specific requirements : in gnutella , the emphasis is on accommodating highly volatile peers and on fast file retrieval , with no guarantees that files will always be located . in freenet , the emphasis is on ensuring anonymity . in contrast , distributed hash tables such as can , chord , pastry , and tapestry guarantee that files will always be located , but do not support wildcard searches .one way to optimize these tradeoffs is to understand user behavior . in this paperwe analyze user behavior in three file - sharing communities in an attempt to get inspiration for designing efficient mechanisms for large - scale , dynamic , self - organizing resource - sharing communities .we look at these communities in a novel way : we study the relationships that form among users based on the data in which they are interested .we capture and quantify these relationships by modeling the community as a _ data - sharing graph_. to this end , we propose a new structure that captures common user interests in data ( section [ sec : dsg ] ) and justify its utility with studies on three data - distribution systems ( section [ sec:3communities ] ) : a high - energy physics collaboration , the web , and the kazaa peer - to - peer network .we find small - world patterns in the data - sharing graphs of all three communities ( section[sec : swdsg ] ) .we discuss the causes of these emergent small - world patterns in section [ sec : hnzl ] .the significance of these newly uncovered patterns is twofold ( section [ sec : dsg - uses ] ) : first , it explains previous results and confirms ( with formal support ) the intuition behind them .second , it suggests ways to design mechanisms that exploit these naturally emerging patterns .it is not news that understanding the system properties can help guide efficient solution design .a well known example is the relationship between file popularity in the web and cache size .the popularity of web pages has been shown to follow a zipf distribution : few pages are highly popular and many pages are requested few times . as a result , the efficiency of increasing cache size is not linear : caching is useful for the popular items , but there is little gain from increasing the cache to provision for unpopular items . as a second example , many real networks are power law .that is , their node degrees are distributed according to a power law , such that a small number of nodes have large degrees , while most nodes have small degrees .adamic et al . propose a mechanism for probabilistic search in power - law networks that exploits exactly this characteristic : the search is guided first to nodes with high degree and their many neighbors . 
this way , a large percentage of the network is covered fast .this type of observations inspired us to look for patterns in user resources requests .but what patterns ?it is believed that the study of networks started with euler s solution of the knigsberg bridge problem in 1735 .the field has since extended from theoretical results to the analysis of patterns in real networks .social sciences have apparently the longest history in the study of real networks , with significant quantitative results dating from the 1920s . the development of the internet added significant momentum to the study of networks : by both facilitating access to collections of data and by introducing new networks to study , such as the web graph , whose nodes are web pages and edges are hyperlinks , the internet at the router and the as level and the email graph .the study of large real networks led to fascinating results : recurring patterns emerge in real networks ( see for good surveys ) .for example , a frequent pattern is the power - law distribution of node degree , that is , a small number of nodes act as hubs ( having a large degree ) , while most nodes have a small degree .examples of power - law networks are numerous and from many domains : the phone - call network ( long distance phone calls made during a single day ) , the citation network , and the linguistics network ( pairs of words in english texts that appear at most one word apart ) . in computer science ,perhaps the first and most surprising result at its time was the proof that the random graph - based models of the internet ( with their poisson degree distribution ) were inaccurate : the internet topology had a power - law degree distribution .other results followed : the web graph and the gnutella overlay ( as of year 2000 ) are also power - law networks .another class of networks are the `` small worlds '' .two characteristics distinguish small - world networks : first , a small average path length , typical of random graphs ( here ` path ' means shortest node - to - node path ) ; second , a large clustering coefficient that is independent of network size . the clustering coefficient captures how many of a node s neighbors are connected to each other . this set of characteristics is identified in systems as diverse as social networks , in which nodes are people and edges are relationships ; the power grid system of western usa , in which nodes are generators , transformers , substations , etc . and edges are transmission lines ; and neural networks , in which nodes are neurons and edges are synapses or gap junctions .newman shows that scientific collaboration networks in different domains ( physics , biomedical research , neuroscience , and computer science ) have the characteristics of small worlds .collaboration networks connect scientists who have written articles together .moreover , girvan and newman show that well - defined groups ( such as a research group in a specific field ) can be identified in ( small - world ) scientific collaboration networks . in parallel , a theoretical model for small - world networks by watts and strogatz pictures a small world as a loosely connected set of highly connected subgraphs . 
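both small-world diagnostics, the clustering coefficient and the average path length compared against a random graph of the same size, are easy to compute with standard graph libraries. the snippet below is a generic sketch, not the code used for the measurements reported here; the watts-strogatz graph only serves to show the expected small-world signature.

```python
import math
import networkx as nx

def small_world_stats(G):
    """clustering coefficient and average path length of G, together with the
    values expected for an erdos-renyi random graph of the same size
    (C_rand ~ k/N, L_rand ~ ln N / ln k, with k the mean degree)."""
    giant = G.subgraph(max(nx.connected_components(G), key=len))
    n = G.number_of_nodes()
    k = 2.0 * G.number_of_edges() / n
    return {"C": nx.average_clustering(G),
            "L": nx.average_shortest_path_length(giant),
            "C_random": k / n,
            "L_random": math.log(n) / math.log(k) if k > 1 else float("inf")}

# a watts-strogatz graph is small world: C >> C_random while L ~ L_random
print(small_world_stats(nx.watts_strogatz_graph(1000, 10, 0.1)))
```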
from here, the step is natural : since scientists tend to collaborate on publications , they most likely use the same resources ( _ share _ them ) during their collaboration : for example , they might use the same instruments to observe physics phenomena , or they might analyze the same data , using perhaps the same software tools or even a common set of computers .this means that if we connect scientists who use the same files , we might get a small world .even more , we might be able to identify groups that share the same resources . notice that the notion of `` collaboration '' transformed into `` resource sharing '' : the social relationships do not matter anymore , scientists who use the same resources within some time interval may never hear of each other .resource sharing in a ( predominantly ) scientific community is the driving force of computational grids .if we indeed see these naturally occurring sharing patterns and we find ways to exploit them ( e.g. , by identifying users grouped around common sets of resources ) , then we can build mechanisms that can tame the challenges typical of large - scale , dynamic , heterogeneous , latency - affected distributed systems .the research question now become clear : * _ are there any patterns in the way scientists share resources that could be exploited for designing mechanisms ? _ but resource sharing also exists outside scientific communities : peer - to - peer systems or even the web facilitate the sharing of data .another question arises : * _ are these characteristics typical of scientific communities or are they more general ? _ this article answers these two questions : it shows that small - world patterns exist in diverse file - sharing communities .to answer question _ q1 _ , we define a new graph that captures the virtual relationship between users who request the same data at about the same time ._ definition : the data - sharing graph is a graph in which nodes are users and an edge connects two users with similar interests in data . _ we consider one similarity criterion in this article : the number of shared requests within a specified time interval . to answer question _ q2_ , we analyze the data - sharing graphs of three different file - sharing communities .section [ sec:3communities ] presents briefly these systems and the traces we used .we discover that in all cases , for different similarity criteria , these data - sharing graphs are small worlds .the next sections show that using the data - sharing graph for system characterization has potential both for basic science , because we can identify new structures emerging in real , dynamic networks ( section [ sec : swdsg ] ) ; and for system design , because we can exploit these structures when designing data location and delivery mechanisms ( section [ sec : dsg - uses ] ) .we study the characteristics of the data - sharing graph corresponding to three file - sharing communities : a high - energy physics collaboration ( section [ sec : d0 ] ) , the web as seen from the boeing traces ( section [ sec : www ] ) , and the kazaa peer - to - peer file - sharing system seen from a large isp in israel ( section [ sec : kazaa ] ) .this section gives a brief description of each community and its traces ( duration of each trace , number of users and files requested , etc . 
) in addition , we present the file popularity and user activity distributions for each of these traces as these have a high impact on the characteristics of the data - sharing graph : intuitively , a user with high activity is likely to map onto a highly connected node in the data sharing graph .similarly , highly popular files are likely to produce dense clusters ..characteristics of traces analyzed . [ cols= " < , > , > , > , > " , ] [ table : newmans - model ] table [ table : newmans - model ] leads to two observations .first , the actual clustering coefficient in the data - sharing graphs is always larger than predicted and the average degree is always smaller than predicted .an interesting new question emerges : what is the explanation for these ( sometimes significant ) differences ?one possible explanation is that user requests for files are not random : their preferences are limited to a set of files , which explains the actual average degree being smaller than predicted .a rigorous understanding of this problem is left for future work .a second observation is that we can perhaps compare the file sharing in the three communities by comparing their distance from the theoretical model .we see that the kazaa data - sharing graphs are the closest to the theoretical model and the d0 graphs are very different from their corresponding model .this is different from the comparison with the erds - rnyi random graphs ( table [ table : all ] ) .the cause of this difference and the significance of this observation remain to be studied in the future .event frequency has been shown to follow a zipf distribution in many systems , from word occurrences in english and in monkey - typing texts to city population .it is also present in two of the three cases we analyze : the web and kazaa .other patterns characteristic to data access systems include time locality , in which an item is more popular ( and possibly requested by multiple users ) during a limited interval and temporal user activity , meaning that users are not uniformly active during a period , but follow some patterns ( for example , downloading more music files during weekends or holidays ) .thus , we ask : * _ are the patterns we identified in the data - sharing graph , especially the large clustering coefficient , an inherent consequence of these well - known behaviors ? _ to answer this question , we generate random traces that preserve the documented characteristics but break the user - request association . from these synthetic traces ,we build the resulting data - sharing graphs , and analyze and compare their properties with those resulting from the real traces .the core of our traces is a triplet of user i d , item requested and request time .figure [ fig : urt - relationship ] identifies the following correlations in traces , some of which we want to preserve in the synthetic traces : 1 .user time : user s activity varies over time : for example , in the d0 traces , some users accessed data only in may .request time : items may be more popular during some intervals : for example , news sites are more popular in the morning . 3 . user request: this is the key to user s preferences . by breaking this relationship and randomly recreating it, we can analyze the effect of user preferences on the properties of the data - sharing graph .4 . 
user : the number of items requested per user over the entire interval studied may be relevant , as some users are more active than others ( see figures [ fig : web - reqs - per - user ] left for the web traces ) .time : the time of the day ( or in our case , of the periods studied ) is relevant , as the web traces show ( the peak in figure [ fig : web - reqs - per - user ] right ) .request : this is item popularity : number of requests for the same item .our aim is to break the relationship ( 3 ) , which implicitly requires the break of ( 1 ) , ( 2 ) , or both .we also want to preserve relationships ( 4 ) , ( 5 ) , and ( 6 ) .one can picture the traces as a matrix , in which is the number of requests in that trace and the three columns correspond to users , files requested , and request times , respectively .now imagine the we shuffle the users column while the other two are kept unchanged : this breaks relations ( 3 ) and ( 1 ) . if the requests column is shuffled , relations ( 3 ) and ( 2 ) are broken .if both user and request columns are shuffled , then relations ( 1 ) , ( 2 ) , and ( 3 ) are broken . in all cases , ( 4 ) , ( 5 ) , and ( 6 ) are maintained faithful to the real behavior : that is , users ask the same number of requests ( 4 ) ; the times when requests are sent are the same ( 5 ) ; and the same requests are asked and repeated the same number of times ( 6 ) .we generated synthetic traces in three ways , as presented above : 1 .no correlation related to time is maintained : break relations ( 1 ) , ( 2 ) , and ( 3 ) .2 . maintain the request times as in the real traces : break relations ( 1 ) and ( 3 ) .3 . maintain the user s activity over time as in the real traces : break ( 2 ) and ( 3 ) .three characteristics of the synthetic data - sharing graphs are relevant to our study .first , the number of nodes in synthetic graphs is significantly different than in their corresponding real graphs ( `` corresponding '' in terms of similarity criterion and time ) .on the one hand , the synthetic data - sharing graphs for which user activity in time ( relation ( 1 ) ) is not preserved have a significantly larger number of nodes . even when the user activity in time is preserved ( as in the st3 case ), the number of nodes is larger : this is because in the real data - sharing graphs , we ignored the isolated nodes and in the synthetic graphs there are no isolated nodes . on the other hand , when the similarity criterion varies to a large number of common requests ( say , 100 in the d0 case , figure [ fig : d0-nnodes ] ) , the synthetic graphs are much smaller or even disappear .this behavior is explained by the distribution of weights in the synthetic graphs ( figure [ fig : synth - d0-wd ] ) : compared to the real graphs ( figure [ fig : d0-weightdistrib ] ) , there are many more edges with small weights .the median weight in the real d0 data - sharing graphs is 356 and the average is 657.9 , while for synthetic graphs the median is 137 ( 185 for st3 ) and the average is 13.8 ( 75.6 for st3 ) .second , the synthetic data - sharing graphs are always connected ( unlike real graphs , that always have multiple connected components , as shown in table [ table : all ] ) . 
even for similarity criteria requiring a large number of common requests, the synthetic graphs remain connected. this behavior is due to the uniform distribution of requests per user in the synthetic traces, which is obviously not true in the real case. third, the synthetic data-sharing graphs are ``less'' small world than their corresponding real graphs: the ratio between the clustering coefficients is smaller and the ratio between average path lengths is larger than in the real data-sharing graphs (figure [fig:synth-sw-d0]). however, these differences are not major: the synthetic data-sharing graphs would perhaps pass as small worlds. these results show that user preferences for files have a significant influence on the data-sharing graphs: their properties are not induced (solely) by user-independent trace characteristics; human nature has some impact. so perhaps the answer to this section's title (``human nature or zipf's law?'') is ``both''. however, identifying small-world properties alone does not seem sufficient to characterize the natural interest-based clustering of users: we might need a metric of how strongly a data-sharing graph exhibits small-world characteristics. this problem remains to be studied in the future. it is interesting to notice that the structure we call the data-sharing graph can be applied at various levels and granularities in a computing system. we looked at relationships that form at the file-access level, but intuitively similar patterns could be found at finer granularity, such as accesses to the same memory locations or to the same items in a database. for example, a recent article investigates the correlation of program addresses that reference the same data and shows that these correlations can be used to eliminate load misses and partial hits.
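for concreteness, the two constructions used in this and the previous sections, namely building a data-sharing graph from (user, item, time) request triplets under an ``at least m common requests within a time window'' similarity criterion, and shuffling one column of the trace to produce synthetic traces, can be sketched as follows. field names, the windowing scheme, and thresholds are illustrative assumptions; this is not the code used for the d0, web, or kazaa analyses.

```python
import random
from collections import defaultdict
from itertools import combinations
import networkx as nx

def data_sharing_graph(requests, window, min_shared):
    """requests: list of (user, item, time) triplets.  two users are joined
    by an edge if they requested at least `min_shared` common items within
    the window [t0, t0 + window), where t0 is the earliest request time."""
    requests = list(requests)
    t0 = min(t for _, _, t in requests)
    users_per_item = defaultdict(set)
    for user, item, t in requests:
        if t0 <= t < t0 + window:
            users_per_item[item].add(user)
    shared = defaultdict(int)                      # (u1, u2) -> # common items
    for users in users_per_item.values():
        for u1, u2 in combinations(sorted(users), 2):
            shared[(u1, u2)] += 1
    G = nx.Graph()
    G.add_edges_from(pair for pair, n in shared.items() if n >= min_shared)
    return G

def shuffle_column(requests, column):
    """synthetic trace: randomly permute the 'user' or 'item' column,
    breaking the user-request association while preserving per-user
    activity counts, item popularity, and all request times."""
    idx = {"user": 0, "item": 1}[column]
    rows = [list(r) for r in requests]
    values = [r[idx] for r in rows]
    random.shuffle(values)
    for r, v in zip(rows, values):
        r[idx] = v
    return [tuple(r) for r in rows]
```

the graph returned by `data_sharing_graph` can then be fed to the small-world diagnostics sketched earlier in order to compare a real trace with its shuffled counterparts.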
at a higher level, the data - sharing graph can identify the structure of an organization based on the applications its members use , for example by identifying interest - based clusters of users and then use this information to optimize an organization s infrastructure , such as servers or network topology .in this section we focus on implications for mechanism design of the data - sharing graph from two perspective : its structure ( definition ) and its small - world properties .we stress that these are untested but promising ideas for future work .some recommender systems have a similar flavor to the data - sharing graph .referralweb attempts to uncover existing social networks to create a referral chain of named individuals .it does this by inferring social relationships from web pages , such as co - authorship , research groups and interests , co - participation in discussion panels , etc .this social network is then used to identify experts and to guide searches around them .sripanidkulchai et .al came close to the intuition of the data - sharing graph in their infocom 2003 article : they improve gnutella s flooding - based mechanism by inserting and exploiting interest - based shortcuts between peers .interest - based shortcuts connect a peer to peers who provided data in the past .this is slightly different from our case , where an edge in the data - sharing graph connects peers that requested the same data .however , the two graphs are likely to overlap significantly if peers store data of their own interest .our study distinguishes by its independence from any underlying infrastructure ( in this case , the distribution of data on peers and the location mechanism ) and gives a theoretical explanation of the performance improvements in .the data - sharing graph can be exploited for a variety of decentralized file management mechanisms in resource - sharing systems ( such as peer - to - peer or grids ) . * in a writable file - sharing system , keeping track of which peers recently requested a file facilitates the efficient propagation of updates in a fully decentralized , self - organizing fashion ( a similar idea is explored in ) . * in large - scale , unreliable , dynamic peer - to - peer systems file replication may be used to insure data availability and transfer performance .the data - sharing graph may suggest where to place replicas closer to the nodes that access them .similarly , it may be useful for dynamic distributed storage : if files can not be stored entirely on a node , then they can be partitioned among the nodes that are interested in that file .* in a peer - to - peer computing scenario , the relationships between users who requested the same files can be exploited for job management . 
if nodes store and share recently downloaded files , they become good candidates for running jobs that take those files as input .this can be used for scheduling , migrating or replicating data - intensive jobs .the idea underlying the data - sharing graph was first presented in as a challenge to design a file - location mechanism that exploits the small - world characteristics of a file - sharing community .meanwhile we completed the design and evaluation of a mechanism that dynamically identifies interest - based clusters , disseminates location information in groups of interested users , and propagates requests among clusters .its strengths come from mirroring and adapting to changes in user s behavior .file insertion and deletion are low cost , which makes it a good candidate for scientific collaborations , where use of files leads to creation of new files .this article reveals a predominant pattern in diverse file - sharing communities , from scientific communities to the web and file - swapping peer - to - peer systems .this pattern is brought to light by a structure we propose and that we call `` data - sharing graph '' .this structure captures the relationships that form between users who are interested in the same files .we present properties of data - sharing graphs from three communities .these properties are relevant to and might inspire the design of a new style of mechanisms in peer - to - peer systems , mechanisms that take into account , adapt to , and exploit user s behavior .we also sketch some mechanisms that could benefit from the data - sharing graph and its small - world properties .ian clarke , oskar sandberg , brandon wiley , and theodore w. hong , `` freenet : a distributed anonymous information storage and retrieval system , '' in _ international workshop on designing privacy enhancing technologies _ , berkeley , ca , 2000 , vol .44 - 66 , springer - verlag .lauri loebel - carpenter , lee lueking , carmenita moore , ruth pordes , julie trumbo , sinisa veseli , igor terekhov , matthew vranicar , stephen white , and victoria white , `` sam and the particle physics data grid , '' in _ proceedings of computing in high - energy and nuclear physics_. beijing , china , 2001 .kavitha ranganathan , adriana iamnitchi , and ian foster , `` improving data availability through dynamic model - driven replication in large peer - to - peer communities , '' in _ global and peer - to - peer computing on large scale distributed systems workshop_. 2002 .adriana iamnitchi , matei ripeanu , and ian foster , `` locating data in ( small - world ? ) peer - to - peer scientific collaborations , '' in _1st international workshop on peer - to - peer systems ( iptps02)_. 2002 , lncs hot topics series , springer - verlag .
|
web caches, content distribution networks, peer-to-peer file-sharing networks, distributed file systems, and data grids all have in common that they involve a community of users who generate requests for shared data. in each case, overall system performance can be improved significantly if we can first identify and then exploit interesting structure within a community's access patterns. to this end, we propose a novel perspective on file sharing based on the study of the relationships that form among users according to the files in which they are interested. we propose a new structure that captures common user interests in data, the _data-sharing graph_, and justify its utility with studies on three data-distribution systems: a high-energy physics collaboration, the web, and the kazaa peer-to-peer network. we find small-world patterns in the data-sharing graphs of all three communities. we analyze these graphs and propose some probable causes for these emergent small-world patterns. the significance of these small-world patterns is twofold: they provide rigorous support for intuition and, perhaps most importantly, they suggest ways to design mechanisms that exploit these naturally emerging patterns.
|
dynamical systems typically model complicated deterministic processes on a phase space . the map induces a natural action on probability measures on via . of particular interest in ergodic theory are those probability measures that are -invariant ; that is , satisfying . if is ergodic , then such describe the time - asymptotic distribution of orbits of -almost - all initial points . in this paper, we consider the situation where a `` hole '' is introduced and any orbits of that fall into terminate .the hole induces an _ open dynamical system _ , where .because trajectories are being lost to the hole , in many cases , there is no -invariant probability measure .one can , however , consider _ conditionally invariant _ probability measures , which satisfy , where is identified as the _ escape rate _ for the open system .we will study drawn from the class of lasota - yorke maps : piecewise expanding maps of the interval , such that has bounded variation .the hole will be a finite union of intervals .in such a setting , because of the expanding property , one can expect to obtain conditionally invariant probability measures that are _ absolutely continuous _ with respect to lebesgue measure .such conditionally invariant measures are `` natural '' as they may correspond to the result of repeatedly pushing forward lebesgue measure by . in the next sectionwe will discuss further conditions due to that make this precise : ( i ) how much of phase space can `` escape '' into the hole , and ( ii ) the growth rate of intervals that partially escape relative to the expansion of the map and the rate of escape .these conditions also guarantee the existence of a _unique _ absolutely continuous conditionally invariant probability measure ( accim ) .this accim , with density , and its corresponding escape rate are the first two objects that we will rigorously numerically approximate using ulam s method . existence anduniqueness results for subshifts of finite type with markov holes were previously established by collet , martnez and schmitt in ; see also .one may also consider the set of points that never fall into the hole .a probability measure on can be defined as the limit of the accim conditioned on . the measure will turn out to be the unique -invariant measure supported on and has the form , where is a lebesgue integrable function and is known as the _ quasi - conformal measure _ for .we will also rigorously numerically approximate and thus .robustness of these objects with respect to ulam discretizations is essentially due to a _quasicompactness _ property , and a significant part of the paper is devoted to elaborating on this point .our main result , theorem [ mainthm ] , concerns convergence properties of an extension of the well - known construction of ulam , which allows for efficient numerical estimation of invariant densities of closed dynamical systems .the ulam approach partitions the domain into a collection of connected sets and computes single - step transitions between partition sets , producing the matrix li demonstrated that the invariant density of lasota - yorke maps can be -approximated by step functions obtained directly from the leading left eigenvector of . since the publication of there have been many extensions of ulam s method to more general classes of maps , including expanding maps in higher dimensions , uniformly hyperbolic maps , nonuniformly expanding interval maps , and random maps .explicit error bounds have also been developed , eg . 
.we will show that in order to handle open systems , the definition of above need only be modified to , having entries as in the closed setting , one uses the leading left eigenvector to produce a step function that approximates the density of the accim .however , in the open setting , the leading eigenvalue of also approximates the escape rate of , and the _ right _ eigenvector approximates the quasi - conformal measure . note that for closed systems , and .the literature concerning the analysis of ulam s method is now quite large .early work on ulam s method for axiom a repellers showed convergence of an ulam - type scheme using markov partitions for the approximation of pressure and equilibrium states with respect to the potential .these results apply to the present setting of lasota - yorke maps _ provided the hole is markov and projections are done according to a sequence of markov partitions ._ bahsoun considered non - markov lasota - yorke maps with non - markov holes and rigorously proved an ulam - based approximation result for the escape rate .bahsoun used the perturbative machinery of , treating the map as a small deterministic perturbation of the closed map .in contrast , we apply the perturbative arguments of directly to the open map , considering the ulam discretization as a small perturbation of .the advantage of this approach is that we can obtain approximation results whenever the existence results of apply .the latter make assumptions on the expansivity of ( large enough ) , the escape rate ( slow enough ) , and the rate of generation of `` bad '' subintervals ( small enough ) . from these assumptionswe construct an improved lasota - yorke inequality that allows us to get tight enough constants to make applications plausible . besides estimating the escape rate , we obtain rigorous -approximations of the accim and approximations of the quasi - conformal measure that converge weakly to .we can treat relatively large holes .an outline of the paper is as follows . in section [s : setup ] we introduce the perron - frobenius operator , formally define admissible and ulam - admissible holes , and develop a strong lasota - yorke inequality .section [ s : results ] introduces the new ulam scheme and states our main ulam convergence result .section [ s : ex ] discusses some specific example maps in detail .proofs are presented in section [ s : pfs ] .the following class of interval maps with holes was studied by liverani and maume - deschamps in . [def : lymap ] let ] .non - negative right eigenvectors of induce measures on according to the formula\,m(i_j\cap e).\ ] ] we conclude the section with the following .[ lem : qcmeasurefromevector ] let be the matrix representation of with respect to the basis . if then the measure corresponding to satisfies for every .let and put .then , = \sum_{j=1}^k \int_{i_j}\pi_k\varphi\,dm\,[\psi_k]_j\\ & = \sum_{j , j'=1}^k\int_{i_j}\varphi_k\,dm\,(p_k)_{jj'}[\psi_k]_{j'}(\rho_k)^{-1 } = \sum_{j'=1}^k\int_{i_{j'}}{\mathcal{l}_k}\varphi_k\,dm\,[\psi_k]_{j'}(\rho_k)^{-1}\\ & = ( \rho_k)^{-1}\,\int { \mathcal{l}_k}\varphi_k\,d\mu_k={\rho_k}^{-1}\mu_k({\mathcal{l}_k}\varphi_k),\end{aligned}\ ] ] where the last equality in the second line follows from the fact that is the matrix representing in the basis , and acts on densities by right multiplication ( i.e. 
if is the vector representing the function , then is the vector representing ) .the main result of this paper is the following .its proof is presented in [ sec : pfmainthm ] .[ mainthm ] let be a lasota - yorke map with an -ulam - admissible hole .let be the unique accim for the open system , and the unique quasi - conformal measure for the open system supported on , as guaranteed by theorem [ thm : livemaume ] .let be the associated escape rate . for each ,let be the leading eigenvalue of the ulam matrix .let be densities induced from non - negative left eigenvectors of corresponding to .let be measures induced from non - negative right eigenvectors of corresponding to .then , a. [ it : i ] for sufficiently large , is a simple eigenvalue for .furthermore , , and there exists with is valid . ] such that , where is the maximum diameter of the elements of . b. [ it : ii ] in . c. [ it : iii ] in the weak- * topology of measures .furthermore , for every sufficiently large , . we will also establish a relation between admissibility and ulam - admissibility of holes .[ lem : admissibleandulamadmissible ] if is an -admissible hole for , there is some such that is -ulam - admissible for .the proof of this lemma is presented in [ subs : admvsulamadm ] .this result , together with lemma [ lem : enlargingholes ] , broadens the scope of applicability of theorem [ mainthm ] by allowing to ( i ) replace the map by an iterate ( lemma [ lem : admissibleandulamadmissible ] ) , or ( ii ) enlarge the hole in a dynamically consistent way ( lemma [ lem : enlargingholes ] ) .it also ensures that several examples in the literature can be treated with our method ; in particular , all the examples presented in .to illustrate the efficacy of ulam s method , beyond the small - hole setting , we present some examples of ulam - admissible open lasota - yorke systems .we start with the case of full - branched maps in [ subs : fullbranchedmaps ] , and treat some more general examples , including -shifts , in [ subs : nonfullbranchedmaps ] .we then analyze lorenz - like maps .they provide transparent evidence of the scope of the results for open systems , as well as closed systems with repellers .they also illustrate how the admissibility hypothesis may be checked in applications . in the next examples , we will use the following notation . given a lasota - yorke map with holes , with monotonicity partition , we let , and . thus , the elements of are precisely the ones contained in that are full branches for , and those of are the remaining ones . a _ full - branched map with holes _ , , is a lasota - yorke map with holes , such that . for piecewise linear maps ,the situation is rather simple .[ rmk : pwlinearfullbranch ] let be a piecewise linear full - branched map with holes .then , for every the following holds : , if is a piecewise linear full - branched map , then each interval is good . observing that an interval being good is equivalent to having non - zero measure , and using the fact that is atom - free , each may be split into two good intervals in such a way that there is at most one discontinuity of on each .thus , . therefore .also , on the other hand , .in fact , in the piecewise linear , full branched setting , a direct calculation shows that lebesgue measure is an accim for the open system . 
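the scheme of theorem [mainthm] is straightforward to prototype numerically: estimate the open ulam matrix with test points, then read the escape rate, the accim density, and the quasi-conformal weights off the leading eigendata. the sketch below is an illustration only; the doubling map, the interval hole, the bin count, and the source-side treatment of the hole in the matrix entries are assumptions made here (the exact formula above is garbled), and this is not the implementation behind the figures in this section. the final loop checks the computed escape factor against the decay of the surviving fraction of a large uniform sample.

```python
import numpy as np

def ulam_open(T, in_hole, k, pts=2000, seed=0):
    """open ulam matrix on k equal bins of [0,1): P[i, j] estimates
    m({x in I_i : x not in H, T(x) in I_j}) / m(I_i) with uniform test
    points.  removing the hole on the source side is an assumption about
    the convention intended above."""
    rng = np.random.default_rng(seed)
    P = np.zeros((k, k))
    for i in range(k):
        x = (i + rng.random(pts)) / k              # test points in bin I_i
        x = x[~in_hole(x)]                         # mass already in the hole is lost
        j = np.minimum((T(x) * k).astype(int), k - 1)
        np.add.at(P[i], j, 1.0 / pts)
    return P

def leading_triple(P):
    """(leading eigenvalue, accim step density, quasi-conformal bin weights)."""
    k = P.shape[0]
    w, R = np.linalg.eig(P)                        # right eigenvectors
    wl, L = np.linalg.eig(P.T)                     # left eigenvectors of P
    lam = w.real.max()
    right = np.abs(R[:, w.real.argmax()].real)
    left = np.abs(L[:, wl.real.argmax()].real)
    return lam, left * k / left.sum(), right * k / right.sum()

# illustration only: doubling map with an interval hole (not one of the
# examples computed in this section).
T = lambda x: (2.0 * x) % 1.0
hole = (0.3, 0.45)
in_hole = lambda x: (x >= hole[0]) & (x < hole[1])

lam, density, qc_weights = leading_triple(ulam_open(T, in_hole, k=512))
print("escape factor lambda_k ~", lam)             # escape rate ~ -log(lam)

# independent check: the surviving fraction of a uniform sample decays like
# C * lambda^n, so consecutive survivor ratios should approach lambda.
x = np.random.default_rng(1).random(1_000_000)
prev = None
for n in range(12):
    x = x[~in_hole(x)]
    if prev is not None and n >= 8:
        print("step", n, "survivor ratio", x.size / prev)
    prev = x.size
    x = T(x)
```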
for perturbations of these systems , explicit estimates of and are not generally available . however , we have the following bounds . [ lem : fullbranchemapswholes ] let be a full - branched map with holes . then , for every , there exists some computable such that , where is obtained from by enlarging the hole , as in lemma [ lem : enlargingholes ] . furthermore , an immediate consequence is the following . [ cor : checkqcfullbranch ] in the setting of lemma [ lem : fullbranchemapswholes ] , if , then is -ulam admissible for . in this case , lemma [ lem : enlargingholes ] allows one to approximate the escape rate , accim and quasi - conformal measure for via theorem [ mainthm ] applied to . first , let us note that for any map with , we have that , as the map has at least one fixed point outside the hole . if is sufficiently large , each interval is either ( i ) contained in , and thus not in , or ( ii ) and . in the latter case , is a good interval for , because . since good intervals for and for coincide ( see beginning of proof of lemma [ lem : enlargingholes ] ) , we get that . furthermore , the bound on follows directly from the definition . the following is an interesting consequence of lemmas [ rmk : pwlinearfullbranch ] and [ lem : fullbranchemapswholes ] . [ cor : pwlinearfullbranched ] let be a piecewise linear full - branched map with holes . assume that . then , if is sufficiently small , is -ulam - admissible for any full - branched map that is a sufficiently small perturbation of ( where the topology is defined , for example , by the norm given by the maximum of the norms of each branch ) . in particular , theorem [ mainthm ] applies . the statement for follows from remark [ rmk : pwlinearfullbranch ] . for perturbations , the statement follows from lemma [ lem : fullbranchemapswholes ] , by observing that the quantities and , as well as the variation of on each interval , depend continuously on , with respect to the topology . corollary [ cor : pwlinearfullbranched ] can apply to maps with arbitrarily large holes , as the next example shows . let , ] or ] . also , when and is a single interval contained in }{\beta},1] ] .
vs. ; note that approximates lebesgue measure on ] ( shown in red ) . the computed value of is 0.8475 ( 4 s.f . ) , and in fact agrees up to 11 s.f . with the exact value for ( the length of : ) . left : graph of computed density of accim ( note that the function is a fixed point of both and ) . right : the approximate quasi - conformal measure , depicted as ) ]
( shown in red ) . the computed value of is 0.9086 ( 4 s.f . ) . left : graph of approximate density of accim . right : the approximate quasi - conformal measure , depicted as ) ]
: where . when , the system is open and the hole is implicitly defined as ) ] ; bounds of the interval ] , where ; this is illustrated in red in figure [ fig : lorenz_map ] . the escape rates of the system for parameters , are illustrated in figure [ fig : lorenz_leading_eval ] . figure [ fig : lorenz1d_quasi - conformal ] ( left ) illustrates the cumulative distribution functions of the quasi - conformal measures , , for and various values of . the densities of the accims with respect to lebesgue are illustrated in figure [ fig : lorenz1d_variousalpha ] for several values . for , the densities become concentrated near the endpoints , as the plot in figure [ fig : lorenz1d_variousalpha ] ( right ) illustrates .
bins . left : coloured image of leading eigenvalue for a range of and ( light for near 1 , dark near ) . right : as a function of for . ]
+ the escape rate results for these one - dimensional maps can be interpreted coherently with respect to the behaviour of the lorenz system ( although the scenarios differ according to whether ) . * regarding as a map on , for each value of and there are two pairs of fixed points : repellors at ( illustrated in green in figure [ fig : lorenz_map ] ) and an attracting _ outer _ pair with ( beyond the domain of figure [ fig : lorenz_map ] ) . the _ inner points _ correspond to the periodic orbits from the lorenz flow , and the outer pair correspond to the attracting fixed points of the flow . * at some the inner and outer pairs coalesce in a saddle - node bifurcation , and for the only attractor is a chaotic absolutely continuous invariant measure supported on ] . lebesgue - a.e . orbit escapes and is asymptotic to one of the `` outer fixed points '' . at , points become fixed points , with . the open system thus ` closes up ' as decreases to ; this corresponds to the bifurcation point in the lorenz flow ( where the origin connects to ) . for values of , admits an acim ( which can be accessed numerically by ulam s method ) and the quasiconformal measure is simply lebesgue measure . the approach of to as can be seen in figure [ fig : lorenz_leading_eval ] , and the close agreement of with lebesgue measure can be seen in figure [ fig : lorenz1d_quasi - conformal ] ( left ) for . * for , is open on ] and _ coexist with a chaotic repellor in ] . is a lasota - yorke map with holes , because it is piecewise expanding ; shows that admissibility of the open system is implied if .
for ,our main theorem holds for the application of ulam s method to on ] : all points in the intervals escape in finitely many iterations , and corresponding cells of the partitions used in ulam s method are `` transient '' .the leading eigenvalue from ulam s method and approximate quasi - conformal measure on ] .the approximate accims agree ( modulo scaling ) between , the only difference is that the different lead to a different concentration of mass on preimages of the hole .the approximated escape rates are displayed in figure [ fig : lorenz_leading_eval ] , and concentration of accim on the hole ( neighbourhoods of ) is evident in figure [ fig : lorenz1d_variousalpha ] ( right ) .note also that figure [ fig : lorenz1d_quasi - conformal ] ( left ) depicts some approximate quasiconformal measures for and .( ) . left : cumulative distribution functions for where .right : accims for ( same ).,title="fig:",width=264 ] ( ) . left : cumulative distribution functions for where .right : accims for ( same ).,title="fig:",width=264 ] + [ fig : lorenz1d_variousalpha ]under the assumptions of theorem [ thm : livemaume ] , the quasi - conformal measure of satisfies some further properties that will be exploited in our approach .the measure can be used to define a useful cone of functions in . for each combining the result of lemmas 4.2 and 4.3 from with the argument in the proof of lemma 3.7 ( therein ) , the conditions on imply the existence of a constant such that for any there is an and such that the values of , and are all computable in terms of the constants associated with .we present a modified version of these arguments , based on the classical work of rychlik , that specialize to the case , and allow us to improve some of the constants involved in the estimates of .[ lem : ly1step ] let be a lasota - yorke map with an -ulam - admissible hole . then , there exists such that for every , furthermore , there is a constant such that for any there is an such that let be the monotonicity partition for .define by for every , and otherwise .we obtain the following lasota - yorke inequality by adapting the approach of rychlik ( * ? ? ?* lemmas 4 - 6 ) .let .then , we slightly modify to account for the jumps at the hole , and define by .now , only elements of contribute to the variation of , and we get thus , since for every , , one has that now we proceed as in the proof of ( * ? ? ?* lemma 2.5 ) , and observe that there exists such that if , then whereas if , we let be the nearest good partition element is an open system with an admissible hole , then , and get where is an interval that contains and has as an endpoint , fixed in advance , such that , after possibly redefining at the discontinuity points of , .notice that either or , where is the union of with the contiguous elements of on the right of , and is defined in a similar manner .thus , where the factor 2 appears due to the fact that a single good interval could have at most bad intervals on the left and bad intervals on the right .combining equations and , we get plugging back into , we get we get the first part of the lemma by choosing . for the second part, we recall that , so for every , we have that thus , , provided .let be the -fold monotonicity partition for where is such that for all .this choice is possible in view of ( * ? ? 
?* lemma 3.10 ) .choose such that every subinterval intersects at most two such .then , if is a subinterval of length , there are elements such that ; hence .now let be a partition of into subintervals of length and put then , and , where denotes the variation of inside the interval .thus , we now estimate putting completes the proof .fix satisfying the hypotheses . by theorem [ thm :livemaume ] , as .choose large enough so that for all . because there are a finite number of and we can put and obtain for all satisfying the hypotheses .note that this implies because the support of the integrand is possibly enlarged by taking ulam projections .this now implies .the lemmas presented in [ subs : auxlem ] allow us to derive parts ( i ) and ( ii ) of theorem [ mainthm ] via the perturbative approach from .indeed , theorem [ thm : livemaume ] shows that is the leading eigenvalue of , and that it is simple .furthermore , is a small perturbation of for large , in the sense that as .indeed , and the latter is proportional to , the diameter of the partition , which tends to 0 as .since decreases variation , corollary [ cor : lyineq ] implies the uniform inequality which is the last hypothesis to check to be in the position to apply the perturbative machinery of .this result ensures that for sufficiently large , has a simple eigenvalue near , and its corresponding eigenvector converges to in , giving the convergence statements in and . in order to show, we consider the operator . in view of lemma [ lem : qcmeasurefromevector ] , , and . as in the previous paragraph, one can check that is a small perturbation of .in fact , also , the lasota - yorke inequality holds with replaced by .thus , ( * ? ? ?* corollary 1 ) ( see ( iii ) below ) shows that for large , is the leading eigenvalue of .let be the spectral projectors defined by where is small enough to exclude all spectrum of apart from the peripheral eigenvalue .also let then , ( * ? ? ?* corollary 1 ) provides , and for which since is simple and isolated , this setup implies that for large enough , each is a bounded , rank- operator on : where each , and .since we can choose so that ] ( ) . choose such that , such that =\int_{i_j}h\,dm=\int_{i_j}h\,d\mu_k>0 $ ] and as in lemma [ lem : garys ] . then , = \rho^{-n}[{p_k}^n\psi_k]_i\geq \rho^{-n}[{p_k}^n]_{ij}[\psi_k]_j>0.\ ] ] this establishes that and hence that , as claimed .let be the transfer operator associated to .that is , .then , , and therefore , hence , an interval is good for if and only if it is good for for every . in the rest of this proofwe will say an interval is good if it is good for either ( and therefore all ) .let , where is the partition of into intervals , and we recall that is the monotonicity partition of .let be an -adequate partition for .then , a partition may be constructed by cutting each element of in at most pieces , where is independent of , in such a way that the variation requirement is satisfied , and thus is an -adequate partition for .indeed , is a possible choice .the term 2 allows one to account for possible jumps at the boundary points of , as there are at most two of them in each .the term allows one to split each interval into at most subintervals , in such a way that for every , the chosen value of is necessary to account for the possible discrepancy between and .( recall also that is continuous on each . 
)now , let .then , each bad interval of gives rise to at most ( necessarily bad ) intervals in .when a good interval of is split , it also gives rise to at most intervals in . in this casesome of the intervals may be bad , but it is guaranteed that at least one of them remains good , as being good is equivalent to having non - zero measure .thus , the number of contiguous bad intervals in is at most , where is the number of contiguous bad intervals in .therefore , . clearly , .finally , we will show that .recall that is the leading eigenvalue of .let be nonzero and such that .we claim that , which yields the inequality , because necessarily .indeed , where the second equality follows from the fact that is supported on .the third one , from the fact that is supported on . the last one , because .the first statement of the lemma follows .the relations between escape rates , accims and quasi - conformal measures follow from comparing via equation the statements of part ( 4 ) of theorem [ thm : livemaume ] applied to and .+ assume is an -admissible hole for .then , is an open lasota - yorke map .fix so that for all sufficiently large , then , . by possibly making , we can assume that , and that .then , .the authors thank banff international research station ( birs ) , where the present work was started , for the splendid working conditions provided .cb s work is supported by an nserc grant .gf is partially supported by the unsw school of mathematics and an arc discovery project ( dp110100068 ) , and thanks the department of mathematics and statistics at the university of victoria for hospitality .cgt was partially supported by the pacific institute for the mathematical sciences ( pims ) and nserc .rm thanks the department of mathematics and statistics ( university of victoria ) for hospitality during part of the period when this paper was written .p. collet , s. martnez , and b. schmitt . quasi - stationary distribution and gibbs measure of expanding systems . in _ instabilities and nonequilibrium structures ,v ( santiago , 1993 ) _ , volume 1 of _ nonlinear phenom .complex systems _ , pages 205219 .kluwer acad .publ . , dordrecht , 1996 .
|
ulam s method is a rigorous numerical scheme for approximating invariant densities of dynamical systems . the phase space is partitioned into connected sets and an inter - set transition matrix is computed from the dynamics ; an approximate invariant density is read off as the leading left eigenvector of this matrix . when a hole in phase space is introduced , one instead searches for _ conditional _ invariant densities and their associated escape rates . for lasota - yorke maps with holes we prove that a simple adaptation of the standard ulam scheme provides convergent sequences of escape rates ( from the leading eigenvalue ) , conditional invariant densities ( from the corresponding left eigenvector ) , and quasi - conformal measures ( from the corresponding right eigenvector ) . we also immediately obtain a convergent sequence for the invariant measure supported on the survivor set . our approach allows us to consider relatively large holes . we illustrate the approach with several families of examples , including a class of lorenz maps .
|
when solving classification problems in a supervised or semi - supervised fashion , it is always necessary to somehow label samples from the training data .incorporating these labeled examples into the training process enables the model to assign a class label either directly to an unknown example or to the implicit category that the example belongs to .a common approach is applying labels to all or to a subset of the training examples prior to training .this process can be very time - consuming , especially if there are many examples and a large subset of them should be labeled . using semi - supervised learning ,it is possible to train a sufficient model while having only a subset of the training data enriched with labels .however , it is still necessary to label some samples prior to training .+ this paper presents an alternative approach for the classification of images , which works in the reverse order : first , train a generative model of the data and afterwards apply labels to samples from the trained model .similar ideas have been pursued in the field of face recognition , e.g. by using unsupervised clustering prior to a manual labeling task , however , we want take a more general approach .reversing the order has some advantages over the classical way : first , it is possible to label more examples in a shorter period of time by showing the human labeler a constantly changing stream of model samples .second , it is possible to prevent the user from manually labeling examples similar to the ones that the model can already firmly classify . by trying to maximize the additional information in each new training example this aspect is similar to _ active learning_/_selective sampling _ proposed in .+ there are some caveats to this approach : if , at the time of the training , there is no label information , the parametrization of the training process must rely on metrics like the reconstruction error .also , the samples generated by the model must be human - interpretable in order to perform the labeling .+ using the mnist data set of handwritten digits , we show that the post - labeling approach is competitive to a semi - supervised training scenario .semi - supervised training is a hybrid of supervised and unsupervised training . in unsupervised training settings ,we try to find interesting structures in the a set consisting of training examples without explicitly assigning classes or labels to those structures , e.g. for clustering or statistical density estimation . ina supervised setting , we are interested in finding a mapping of a variable to another variable in a training set consisting of example pairs , i.e. we are facing a classification task . in order to solve a supervised learning problem , it is possible to use discriminative or generative models . a discriminative model tries to directly learn the relationship between and , often by trying to directly estimate the conditional probability of a label given a data point .using a generative model , the approach is more related to the unsupervised case : by learning the structure of the data in , generative models try to estimate the class - conditional probability or the joint probability and retain the conditional probability using bayes rule . 
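the bayes inversion alluded to here is presumably the standard one ; writing x for the data and y for the class label , it reads

\[
p(y \mid x) \;=\; \frac{p(x \mid y)\, p(y)}{\sum_{y'} p(x \mid y')\, p(y')} \, ,
\]

so a generative estimate of the class - conditional distribution together with the class priors is enough to recover the posterior needed for classification .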
given a well - trained generative model , it is therefore often possible to draw samples from the model that resemble training data , as they come from the same probability distribution .recently , a class of generative models called restricted boltzmann machines has been widely used for discrimination tasks such as digit classification , phone recognition or document classification .restricted boltzmann machines are stochastic , energy - based neural network models .an rbm consists of a visible layer and a hidden layer , connected by weights from each visible neuron to each hidden neuron , forming a bipartite graph .they can be trained to model the joint distribution of the data , which is presented to the visible layer , and the hidden neurons by adjusting the weights and biases and .the neurons of the hidden layer are often referred to as _ feature detectors _ , as they tend to model features and patterns occurring in the data , thus capturing the structure in the training data .the probability that a hidden neuron is active depends on the activation of the visible units and the bias of the hidden neuron , thus , with being the logistic function .the probability that a visible unit is active given the hidden layer activations is , in turn , equal to , with being the bias of neuron .calculating and is therefore easy and efficient .+ the energy function defined on the rbm associates a scalar energy value for each configuration of visible neurons and hidden neurons .the probability of a joint configuration is proportional to its energy : it is now possible to marginalize over all hidden configurations to obtain the probability of a visible vector ( see for details ) . to train an rbm on a data set ,it is necessary to increase the probability (= lower the energy ) of training data vectors and decrease the probability of configurations that do not resemble training data .this can be done by updating the weights following the log likelihood gradient .it can be shown that this partial derivative is a sum of two terms usually referred to as the positive and negative gradient , which is why the training algorithm is called _ contrastive divergence _ ( cd ) .the resulting update rule for the weights is with being the expected value that and are active simultaneously .the first term ( positive gradient ) is calculated after initializing the visible layer with a data vector from the training set and subsequently activating given .the second term ( negative gradient ) is calculated when the model is running freely , that is after a potentially infinite number of gibbs sampling steps . as the negative gradient is intractable , it is often approximated using only steps of sampling after initializing the visible neurons with data ( cd - n ) . in practice , this approximation works pretty well ( see e.g. ) .+ to learn a labeled data set , we simply extend the visible layer to also capture label data ( e.g. 
a one - hot vector representing the label classes ) and add an extra set of label weights connecting the labels to the hidden neurons .the learning rule for the label weights and biases remains unchanged .figure [ model_ablauf ] compares the steps of the standard approach to train a classification rbm with the post - labeling approach pursued in this paper .the standard approach first collects training data and then manually applies labels to the data , or to a subset of the data .afterwards , a ( semi- ) supervised model is trained on labeled data , simultaneously learning both the regular weights , connecting the visible neurons to the features , and label weights , connecting the label neurons the features .+ with post - labeling , we change the order : after collecting data , we train an rbm in an unsupervised fashion on the unlabeled data , thus only updating the regular weights .afterwards , we let the model generate samples and apply labels to those samples .we then use the labeled samples to update the label weights in a supervised way . 0.2 in -0.2 in we used the mnist database of handwritten digits for our experiments .the data set contains 60,000 labeled training examples and 10,000 labeled test examples of 28 * 28 pixel images of handwritten digits in ten classes .when performing the semi - supervised or unsupervised learning tasks , we remove the labels .we perform the post - labeling tests on a restricted boltzmann machine with 784 ( = 28 * 28 ) visible neurons and 225 hidden neurons ( feature detectors ) . in order to validate the competitiveness of the post - labeling approach ,we compare it to an rbm of the same size - with the visible layer extended by label neurons - trained on labeled data in a supervised ( all data labeled ) or semi - supervised ( only a subset labeled ) fashion . during the initial training , the post - labeling rbm thus only has one set of weights , whereas the classic rbm has a second set of label weights . + we train both models networks using the training algorithm cd-10 and 50,000 images from the training set .the remaining 10,000 examples are held out in order to find feasible parameters ( such as the learning rate ) for the supervised model .] we use the reconstruction error ( sum of squared pixel - wise differences between data and one - step reconstruction ) to measure the training progress of the unsupervised model trained on data without labels .this is one of the main caveats of the post - labeling method : the reconstruction error can be misleading , especially when learning parameters are adapted during the training .also , the reconstruction errors between different learning algorithms can differ without giving a proper hint to model quality .the goal of the post - labeling phase is to find proper label weights . 
for this purpose, we developed a gui that shows samples from the model to a human labeler , who can activate the corresponding class using the keyboard or mouse ( see fig .[ screenshot_labeltrainer_gui ] ) .we initialize the visible layer with a randomly chosen ( unlabeled ) image from the training set and then let the model perform repeated gibbs sampling between the visible and the hidden layer of the underlying rbm .this leads to a slight deformation of the shown image in each sampling step , while the model traverses along a low - energy ravine in the energy landscape .if the model produces good reconstructions , the user observes slowly changing samples that belong to the same class , and potentially class transitions ( see figure [ samples_merged ] ) .the displayed image is constantly updated at a speed of approx .6 frames / second , which is adjustable in the gui .the user s task is to activate the corresponding class as soon as the observed image firmly resembles one of the classes .the selected class label stays active until the user presses the `` unsure '' button or another class button .this leads to a high number of labeled samples , as the display resembles a video of `` morphing '' digits .after 30 gibbs iterations , the visible neurons are initialized with the next random image from the training set . + 0.2 in -0.2 in 0.2 in ) .the first row shows a constantly changing eight ( which might transition into a three in one of the subsequent images ) , the second row shows a transition from a nine to a seven . ]-0.2 in there are two possibilities for training the label weights .the first is to perform online learning during the post - labeling phase .whenever a label is activated by the user , we update the label weights proportional to an approximation of the positive and the negative gradient at the same time . in this case , the positive gradient is , with being the class label activation ( as given by the user ) and the probability of the feature being active .the negative gradient approximation is with being the probability of the label being active ( as reconstructed by ) .thus , we strengthen connections from active features to the correct label and penalize connections from active features to the potentially wrong , reconstructed label .the biases are updated accordingly .we activate online learning in the gui by default .alternatively , it is possible to train the label weights in an offline fashion after the manual labeling of model samples in the gui .we save all frames labeled in the gui and used them to train the label weights using standard cd-1 .again the update for the label weights is proportional to .the only difference to the online learning is that we can cycle through the labeled training set multiple times , thus the negative gradient may change during the course of the training , resulting in a better approximation .the weights remain unchanged during the learning phase .it is possible to improve the ease of use of the labeling gui and the resulting labeling quality using a few tweaks .first , we can automatically control the speed of the image stream that is presented the user .after a few minutes of training , the model already assings a reasonably high probability to the correct class for `` common '' samples ( online learning is activated ) . 
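a minimal python sketch of this online update follows ; it assumes the data weights are kept fixed , a one - hot label layer , and a softmax reconstruction of the label units ( a per - unit logistic reconstruction would match the description equally well ) . array shapes , the learning rate and the function names are illustrative , not taken from the report .

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def online_label_update(v, y, W, b, U, d, lr=0.01):
    """One online update of the label weights U and label biases d when the
    labeler assigns class y to the currently displayed sample v.

    W (n_visible x n_hidden) and b (hidden biases) are the frozen weights of
    the unsupervised RBM; U is n_hidden x n_classes."""
    n_classes = U.shape[1]
    l_data = np.zeros(n_classes)
    l_data[y] = 1.0                          # class activation given by the user

    h = sigmoid(b + v @ W)                   # probability of each feature being active
    scores = d + h @ U
    l_recon = np.exp(scores - scores.max())
    l_recon /= l_recon.sum()                 # label probabilities reconstructed from the features

    # positive gradient: features x user label; negative gradient: features x reconstructed label
    U += lr * np.outer(h, l_data - l_recon)
    d += lr * (l_data - l_recon)
    return U, d
```

this strengthens connections from active features to the chosen label and weakens those to the reconstructed label , as described above ; the offline variant simply replays the stored labeled frames through the same update for several epochs .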
on the contrary , if the current sample is visually distant from the previously labeled samples , the model does nt assign a high probability to any label - it is unsure which label to pick for this example .thus , it is possible to decrease the display speed for samples that seem unknown , thus allowing the user to make a more precise pick of the label ( especially on class transitions ) .we implemented this tweak in the gui as `` autospeed '' and activated it by default ( see fig .[ screenshot_labeltrainer_gui ] ) .+ analogously , it is possible to bias the choice of samples from the training set to initialize the image ( active sampling ) .if the probability for a label is very high ( % ) the gui can directly skip the example and try the next one .although this approach channels the user s attention to samples where the model is still unsure , it deprives the learning process of the chance to detect confident misclassifications .thus this technique should nt be used right away but only after some training .we implemented this `` do nt show if sure '' concept in the gui and asked users to activate it after the first five minutes of training .+ we also added the possibility to automatically undo the last five update steps if the user changes his opinion on a displayed image ( class changes and changes from a class to _ unsure _ ) .initial tests showed that when running on higher speeds , the reaction time of a user usually allows some wrong labels to slip in in case of a class transition or image degradation .+ if the reconstructions of the model are too stable to produce a constantly changing stream , it is possible to implement a set of `` fast weights '' as in .those fast weights can add a temporary penalty to the areas of low energy just visited , thus forcing the model to wander around .we did nt implement this tweak as of now .we test both the rbm trained with the standard ( semi- ) supervised approach as well as the post - labeling rbm using the mnist test set with 10,000 labeled images. + figure [ results_labeled ] shows the resulting test set error rate of the rbm trained using the standard approach .having only 500 of 50,000 images labeled results in a classification error of approx 14% .on increasing the number of labeled images , the error rate drops quickly and reaches its minimum of approx .4% on a fully labeled training set . + figure [ results_postlabeling_on_and_offline ] shows the test set error of the rbm trained using the post - labeling approach .both online learning and offline learning results show high initial error rates and a fast drop on increasing gui time .however , the classification error of epoch - wise offline learning is constantly smaller .it reaches a performance of around 6.2% error after 4200 seconds of labeling model samples .+ although our goal is to compare ( semi- ) supervised and post - labeling approach , we do not plot the results in a single figure because they do not share a common x axis . in order to compare the results, we have to make an estimation on the time required to label static images .test showed that 1.5 - 2 seconds per labeled image is a realistic labeling rate . given this labeling rate , the standard and the post - labeling approach show similar error rates given the labeling time .when spending 2,000 seconds on labeling , both approaches show a test set error around 8% .accordingly , the error rates for 4,000 seconds labeling time are around 6.5% . 
+ the results shown above are biased in two ways . first , our initial parameter choice for the unsupervised model was influenced by our background knowledge from previous supervised tests with the mnist data set . on a genuinely new training set , we would not possess such knowledge and would have to rely on the reconstruction error only ( see section [ models ] ) . on the other hand , our results are biased by the fact that we use the labels of the official test set , which almost certainly come from a different distribution than the ones given by our labelers during training ( consider the ambiguity of sevens and ones or fours and nines , given the cultural background ) . if all labels ( test and training ) originate from the same distribution , the test error rate will most probably be lower . the displayed results of the supervised model can profit from this fact , as opposed to the results of the post - labeling model . + it is not known whether the mnist labels were double - checked in order to get error - free labels ( at least for the non - ambiguous cases ) . if there is more than one labeling pass , the required time increases accordingly in the standard approach . the results show that the post - labeling approach is , in general , competitive to the standard approach in terms of the resulting classification quality . it is likely that , by following the low - energy ravines , the model displays samples that resemble a class but are not part of the training data . these samples can then be labeled by the gui user . + on the other hand , the post - labeling approach has a number of drawbacks . as mentioned above , the initial unsupervised training must rely on metrics such as the reconstruction error . also , the quality of the labeled model samples is not as high as the quality of labeled real - world examples . as the displayed image is constantly changing , there are almost certainly some mislabeled or low - quality samples . nevertheless it should be possible to use the labeled samples as a whole to train the label weights of a different model than the one they originated from , as most of them genuinely represent the classes . + another drawback of this approach is that it is crucial to have meaningful reconstructions of the original input . they have to be clearly distinguishable from one another by a human observer , and more or less stable under repeated gibbs sampling . especially when dealing with real - world ( and thus real - valued ) images , this sets a high standard for the unsupervised model . the approach is , however , independent of the model type and can , e.g. , be used with higher - order boltzmann machines to model covariances in the data set , to better model real - world images . + the approach can , in principle , be combined with classical semi - supervised learning , e.g. by initializing the label learning procedure with some labeled images from the training set , or by using a small labeled validation set to get a better understanding of parameter settings . we proposed a different approach for training a classification model .
using the mnist set of handwritten digits, we showed that it is feasible to train an rbm on unlabeled data first and subsequently label model samples using a gui .this approach presents an alternative to semi - supervised learning , but does not reach the classification performance of a model trained on fully labeled data given the tested labeling times .an interesting question for further research is whether it is possible to also improve the model quality with respect to the data using the post - labeling gui .that is , to capture user input during the interactive learning phase ( such as `` i see only noise '' ) to improve the quality of the weights connecting the visible and the hidden neurons . geoffrey hinton and ruslan salakhutdinov .discovering binary codes for documents by learning deep generative models ._ topics in cognitive science _ , 30 ( 1):0 7491 , 2011 .issn 1756 - 8765 .doi : 10.1111/j.1756 - 8765.2010.01109.x .url http://dx.doi.org/10.1111/j.1756-8765.2010.01109.x .p. smolensky ._ parallel distributed processing : explorations in the microstructure of cognition _ , volume 1 , chapter information processing in dynamical systems : foundations of harmony theory , pages 194281 . mit press , cambridge , ma , usa , 1986 .yuandong tian , wei liu , rong xiao , fang wen , and xiaoou tang .a face annotation framework with partial clustering and interactive labeling . in _ ieee conference on computer vision and pattern recognition _ , 2010 .tijmen tieleman .training restricted boltzmann machines using approximations to the likelihood gradient . in _ proceedings of the 25th international conference on machine learning _ , icml 08 , pages 10641071 , new york , ny , usa , 2008 .isbn 978 - 1 - 60558 - 205 - 4 .doi : http://doi.acm.org/10.1145/1390156.1390290 .url http://doi.acm.org/10.1145/1390156.1390290 .tijmen tieleman and geoffrey hinton .using fast weights to improve persistent contrastive divergence . in _ proceedings of the 26th annual international conference on machine learning _ , icml 09 , pages 10331040 , new york , ny , usa , 2009 .isbn 978 - 1 - 60558 - 516 - 1 .doi : http://doi.acm.org/10.1145/1553374.1553506 .url http://doi.acm.org/10.1145/1553374.1553506 .
|
we propose an alternative method for training a classification model . using the mnist set of handwritten digits and restricted boltzmann machines , it is possible to reach a classification performance competitive to semi - supervised learning if we first train a model in an unsupervised fashion on unlabeled data only , and then manually add labels to model samples instead of training data samples with the help of a gui . this approach can benefit from the fact that model samples can be presented to the human labeler in a video - like fashion , resulting in a higher number of labeled examples . also , after some initial training , hard - to - classify examples can be distinguished from easy ones automatically , saving manual work . * technical report in information systems and business administration * , johannes gutenberg - university mainz , department of information systems and business administration , d-55128 mainz / germany , http://wi.bwl.uni-mainz.de
|
malaria is a life threatening disease caused by _ plasmodium _ parasites and transmitted from one individual to another by the bite of infected female anopheline mosquitoes . in the human body , the parasites multiply in the liver , and then infect red blood cells . following world health organization ( who ) 2012 report , an estimated 3.3 billion people were at risk of malaria in 2011 , with populations living in sub - saharan africa having the highest risk of acquiring malaria .malaria is an entirely preventable and treatable disease , provided the currently recommended interventions are properly implemented . following who , these interventions include ( i ) vector control through the use of insecticide - treated nets ( itns ) , indoor residual spraying and , in some specific settings , larval control , ( ii ) chemoprevention for the most vulnerable populations , particularly pregnant women and infants , ( iii ) confirmation of malaria diagnosis through microscopy or rapid diagnostic tests for every suspected case , and ( iv ) timely treatment with appropriate antimalarial medicines .an itn is a mosquito net that repels , disables and/or kills mosquitoes coming into contact with insecticide on the netting material .itns are considered one of the most effective interventions against malaria . in 2007 , who recommended full itn coverage of all people at risk of malaria , even in high - transmission settings . by 2011 , 32 countries in the african region and 78 other countries worldwide , had adopted the who recommendation .a total of 89 countries , including 39 in africa , distribute itns free of charge . between 2004 and 2010 ,the number of itns delivered annually by manufacturers to malaria - endemic countries in sub - saharan africa increased from 6 million to 145 million .however , the numbers delivered in 2011 and 2012 are below the number of itns required to protect all population at risk .there is an urgent need to identify new funding sources to maintain and expand coverage levels of interventions so that outbreaks of disease can be avoided and international targets for reducing malaria cases and deaths can be attained . a number of studies reported that itn possession does not necessarily translate into use .human behavior change interventions , including information , education , communication ( iec ) campaigns and post - distribution hang - up campaigns are strongly recommended , especially where there is evidence of their effectiveness in improving itn usage . in this paperwe consider the model from for the effects of itns on the transmission dynamics of malaria infection .other articles considered the impact of intervention strategies using itn ( see , e.g. , ) .however , only in the human behavior is incorporated into the model .we introduce in the model of a _ supervision _ control , , which represents iec campaigns for improving the itn usage .the reader interested in the use of optimal control to infectious diseases is referred to and references cited therein . for the state of art in malaria researchsee .the text is organized as follows . in section [ sec : cont : model ] we present the mathematical model for malaria transmission with one control function . 
in section [ sec : ocp ] we propose an optimal control problem for the minimization of the number of infectious humans while controlling the cost of control interventions . finally , in section [ sec : num : simu ] some numerical results are analyzed and interpreted from the epidemiological point of view . we consider a mathematical model presented in for the effects of itn on the transmission of malaria infection and introduce a time - dependent _ supervision _ control . the model considers transmission of malaria infection between the mosquito ( also referred to as vector ) and human ( also referred to as host ) populations . the host population is divided into two compartments , susceptible ( ) and infectious ( ) , with a total population ( ) given by . analogously , the vector population is divided into two compartments , susceptible ( ) and infectious ( ) , with a total population ( ) given by . the model is constructed under the following assumptions : all newborn individuals are assumed to be susceptible and no infected individuals are assumed to come from outside the community . the human and mosquito recruitment rates are denoted by and , respectively . the disease is fast progressing , thus the exposed stage is minimal and is not considered . infectious individuals can die from the disease or become susceptible after recovery , while the mosquito population does not recover from infection . itns contribute to the mortality of mosquitoes . the average number of bites per mosquito , per unit of time ( mosquito - human contact rate ) , is given by , where denotes the maximum transmission rate and the proportion of itn usage . it is assumed that the minimum transmission rate is zero . the value of is the same for the human and mosquito populations , so the average number of bites per human per unit of time is ( see and the references cited therein ) . thus , the forces of infection for susceptible humans ( ) and susceptible vectors ( ) are given by , where and are the transmission probabilities per bite from infectious mosquitoes to humans , and from infectious humans to mosquitoes , respectively . the death rate of the mosquitoes is modeled by , where is the natural death rate and is the death rate due to pesticide on itns . the coefficient represents the effort to prevent susceptible humans from becoming infected by infectious mosquito bites , such as educational programs / campaigns for the correct use of itns , or supervision teams that visit every house in a certain region and assure that every person has access to an itn , knows how to use it correctly , and recognizes its importance in the reduction of malaria disease transmission . the values of the parameters , , , , , , , , and are taken from ( see table [ table : parameters ] ) . the state system of the controlled malaria model is given by \[\begin{cases } \dot{s}_h(t ) = \cdots \, , \\[0.2 cm ] \dot{i}_h(t ) = ( 1-u(t ) ) \lambda_h s_h(t ) - ( \mu_h + \gamma_h + \delta_h ) i_h(t ) \, , \\[0.2 cm ] \dot{s}_v(t ) = \lambda_v - \lambda_v s_v(t ) - \mu_{vb } s_v(t ) \, , \\[0.2 cm ] \dot{i}_v(t ) = p_2 \lambda_v s_v(t ) - \mu_{vb } i_v(t ) \, . \end{cases}\ ] ] table [ table : parameters ] : parameter values . the rate of change of the total human and mosquito populations is given by and , respectively . we now formulate an optimal control problem that describes the goal and restrictions of the epidemic . in it is found that the itn usage must attain 75% ( ) of the host population in order to eliminate malaria . therefore , educational campaigns must continue encouraging the population to use itns .
moreover , it is very important to assure that itns are in good conditions and each individual knows how to use them properly . having this in mind, we introduce a _ supervision _ control function , , where the coefficient represents the effort to reduce the number of susceptible humans that become infected by infectious mosquitoes bites , assuring that itns are correctly used by the fraction of the host population .we consider the state system of ordinary differential equations in with the set of admissible control functions given by \right\ } \ , .\ ] ] the objective functional is given by where the weight coefficient , , is a measure of the relative cost of the interventions associated to the control and is the weight coefficient for the class .the aim is to minimize the infectious humans while keeping the cost low .more precisely , we propose the optimal control problem of determining associated to an admissible control on the time interval ] using a forward fourth - order runge kutta scheme and the transversality conditions in appendix .then , system is solved by a backward fourth - order runge kutta scheme using the current iteration solution of .the controls are updated by using a convex combination of the previous controls and the values from ( see appendix ) .the iterative method ends when the values of the approximations at the previous iteration are close to the ones at the present iteration . for detailssee .first of all we consider and show that when we apply the _ supervision _control , better results are obtained , that is , the number of infected humans vanishes faster when compared to the case where no controls are used .if the control intervention is applied , then the number of infectious individuals vanishes after approximately 30 days .if no control is considered , then it takes approximately 70 days to assure that there are no infectious humans ( see figure [ fig : sh : ih : b075 ] for the fraction of susceptible and infectious humans and figure [ control : b075 ] for the optimal control ) . for .,scaledwidth=50.0% ] for smaller proportions of itn usage than , similar results on the reduction of infectious humans are attained when we consider the optimal _ supervision _ control ( see figures [ fig : sh : ih : bvariable ] and [ control : bvariable ] ) .we note that the control does not contribute significantly for the decrease of ( see figure [ fig : sv : iv : b075 ] ) . for .,scaledwidth=50.0% ]according to the pontryagin maximum principle , if is optimal for the problem , with the initial conditions given in table [ table : parameters ] and fixed final time , then there exists a nontrivial absolutely continuous mapping \to \mathbb{r}^4 ]. moreover , the transversality conditions hold . 
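the forward - backward sweep described above can be sketched in python as follows . the right - hand sides of the state and adjoint systems and the minimization condition are left as placeholders ( state_rhs , adjoint_rhs , control_from_characterization ) to be filled in with the model of this paper ; the zero terminal condition for the adjoint , the mixing weight , the control bounds and the stopping rule are the usual choices and should be adapted to the transversality conditions and tolerances actually used .

```python
import numpy as np

def rk4_step(f, t, x, h, *args):
    """Classical fourth-order Runge-Kutta step for x' = f(t, x, *args)."""
    k1 = f(t, x, *args)
    k2 = f(t + h / 2, x + h / 2 * k1, *args)
    k3 = f(t + h / 2, x + h / 2 * k2, *args)
    k4 = f(t + h, x + h * k3, *args)
    return x + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def forward_backward_sweep(state_rhs, adjoint_rhs, control_from_characterization,
                           x0, T, n_steps=1000, mix=0.5, tol=1e-6, max_iter=200):
    """Generic forward-backward sweep for an optimal control problem.

    state_rhs(t, x, u), adjoint_rhs(t, lam, x, u) and
    control_from_characterization(x, lam) are placeholders that must encode
    the model, its adjoint system and the minimization condition; they are
    not reconstructed here.  The terminal condition lam(T) = 0 is assumed."""
    h = T / n_steps
    ts = np.linspace(0.0, T, n_steps + 1)
    u = np.zeros(n_steps + 1)                       # initial control guess
    x = np.tile(np.asarray(x0, float), (n_steps + 1, 1))
    lam = np.zeros_like(x)

    for _ in range(max_iter):
        u_old = u.copy()
        # forward sweep: state system with the current control
        for i in range(n_steps):
            x[i + 1] = rk4_step(state_rhs, ts[i], x[i], h, u[i])
        # backward sweep: adjoint system with lam(T) = 0
        # (a more careful implementation interpolates x and u at the RK stages)
        lam[-1] = 0.0
        for i in range(n_steps, 0, -1):
            lam[i - 1] = rk4_step(adjoint_rhs, ts[i], lam[i], -h, x[i], u[i])
        # control update: convex combination of the old control and the characterization
        u_new = np.array([control_from_characterization(x[i], lam[i])
                          for i in range(n_steps + 1)])
        u = np.clip(mix * u_new + (1.0 - mix) * u_old, 0.0, 1.0)  # controls assumed in [0, 1]
        if np.max(np.abs(u - u_old)) < tol * max(1.0, np.max(np.abs(u))):
            break
    return ts, x, lam, u
```

the iteration stops when successive control approximations are sufficiently close , mirroring the stopping criterion described above .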
[ the : thm ] problem , with fixed initial conditions , , and and fixed final time , admits an unique optimal solution associated to an optimal control on $ ] .moreover , there exists adjoint functions , , and such that \dot{\lambda^*_2}(t ) = - a_1 - \lambda_1^*(t ) \gamma_h + \lambda_2^*(t)(\mu_h + \gamma_h + \delta_h)\\[0.1 cm ] \dot{\lambda^*_3}(t ) = \lambda_3^*(t)(\lambda_v + \mu_{vb } ) - \lambda_4^*(t)(\lambda_v ) ) \\[0.1 cm ] \dot{\lambda^*_4}(t ) = \lambda_4^*(t ) \mu_{vb } \, , \end{cases}\ ] ] with transversality conditions furthermore , existence of an optimal solution associated to an optimal control comes from the convexity of the integrand of the cost function with respect to the control and the lipschitz property of the state system with respect to state variables ( see , , ) .system is derived from the pontryagin maximum principle ( see , ) and the optimal controls come from the minimization condition . the optimal control pair given by is unique due to the boundedness of the state and adjoint functions and the lipschitz property of systems and ( see , e.g. , and references cited therein ) .this work was supported by feder funds through compete operational programme factors of competitiveness ( `` programa operacional factores de competitividade '' ) and by portuguese funds through the portuguese foundation for science and technology ( `` fct fundao para a cincia e a tecnologia '' ) , within project pest - c / mat / ui4106/2011 with compete number fcomp-01 - 0124-feder-022690 .silva was also supported by fct through the post - doc fellowship sfrh / bpd/72061/2010/j003420e03 g ; torres by eu funding under the 7th framework programme fp7-people-2010-itn , grant agreement number 264735-sadco .b. m. afolabi , o. t. sofola , b. s. fatunmbi , w. komakech , f. okoh , o. saliu , p. otsemobor , o. b. oresanya , c. n. amajoh , d. fasiku , i. jalingo , _ household possession , use and non - use of treated or untreated mosquito nets in two ecologically diverge regions of nigeria niger delta and sahel savannah _ , malar . j. ( 2009 ) , 830 .g. f. killeen , t. a. smith , _ exploring the contributions of bed - nets , cattle , insecticides and excitorepellency to malaria control : a deterministic model of mosquito host - seeking bahavior and mortality _ , trans( 2007 ) , no . 9 , 867880 .k. macintyre , j. keating , y. b. okbaldt , m. zerom , s. sosler , t. ghebremeskel , t. b. eisele , _ rolling out insecticide treated nets in eritrea : examining the determinants of possession and use in malarious zones during the rainy season _health 11 ( 2006 ) , 824833 .c. j. silva , d. f. m. torres , optimal control for a tuberculosis model with reinfection and post - exposure interventions , math .. http://dx.doi.org/10.1016/j.mbs.2013.05.005 arxiv:1305.2145t. smith , n. maire , a. ross , m. penny , n. chitnis , a. schapira , a. studer , b. genton , c. lengeler , f. tediosi , d. de savigny , m. tanner , _ towards a comprehensive simulation model of malaria epidemiology and control _ , parasitology 135 ( 2008 ) , no . 13 , 15071516 .
|
malaria is a life threatening disease , entirely preventable and treatable , provided the currently recommended interventions are properly implemented . these interventions include vector control through the use of insecticide - treated nets ( itns ) . however , itn possession does not necessarily translate into use . human behavior change interventions , including information , education , communication ( iec ) campaigns and post - distribution hang - up campaigns are strongly recommended . in this paper we consider a recent mathematical model for the effects of itns on the transmission dynamics of malaria infection , which takes into account the human behavior . we introduce in this model a _ supervision _ control , representing iec campaigns for improving the itn usage . we propose and solve an optimal control problem where the aim is to minimize the number of infectious humans while keeping the cost low . numerical results are provided , which show the effectiveness of the optimal control interventions . [ [ keywords ] ] keywords : + + + + + + + + + optimal control , malaria , insecticide - treated nets . [ [ mathematics - subject - classification-2010 ] ] mathematics subject classification 2010 : + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + 92d30 ; 49m05 .
|
in recent years , it has become clear that certain interacting particle systems studied in combinatorics and statistical physics have a common underlying structure .these systems are characterized by an _ abelian property _ which says changing the order of certain interactions has no effect on the final state of the system . up to this point, the tools used to study these systems least action principle , local - to - global principles , burning algorithm , transition monoids and critical groups have been developed piecemeal for each particular system .following dhar , we aim to identify explicitly what these various systems have in common and exhibit them as special cases of what we call an _abelian network_. intuition suggests that noncommutativity is a major source of dynamical richness and complexity . yetabelian networks produce surprisingly rich and intricate large - scale patterns from local rules . just as in physics oneinfers from macroscopic observations the properties of microscopic particles , we would like to be able to infer from the large - scale behavior of a cellular automaton something about the local rules that generate that behavior . in particular , are there certain large - scale features that can only be produced by _ noncommutative _ local interactions ? bynow a lot is known about the computational complexity of the abelian sandpile model ; see for a recent compilation .the requirement that a distributed network produce the same output regardless of the order in which processors act would seem to place a severe restriction on the kinds of tasks it can perform . yetabelian networks can perform some highly nontrivial tasks , such as solving certain linear and nonlinear integer programs ( see [ s.mip ] ) .are there other computational tasks that _ require _ noncommutativity ? in this paper and its sequels , by defining abelian networks and exploring their fundamental properties , we hope to take a step toward making these questions precise and eventually answering them . after giving the formal definition of an abelian network in [ s.definition ] ,we survey a number of examples in [ s.examples ] . these include the well - studied sandpile and rotor networks as well as two _ non - unary _ examples : oil and water , and abelian mobile agents . in [ s.leastaction ]we prove a least action principle for abelian networks and explore some of its consequences .one consequence is that `` local abelianness implies global abelianness '' ( lemma [ l.localglobalabelian ] ) .another is that abelian networks solve optimization problems of the following form : given a nondecreasing function , find the coordinatewise smallest vector such that ( if it exists ) .this paper is the first in a series of three . 
in the sequel we give conditions for a finite abelian network to halt on all inputs .such a network has a natural invariant attached to it , the _ critical group _ ,whose structure we investigate in .this section begins with the formal definition of an abelian network , which is based on deepak dhar s model of _ abelian distributed processors _ .the term `` abelian network '' is convenient when one wants to refer to a collection of communicating processors as a single entity .some readers may wish to look at the examples in [ s.examples ] before reading this section in detail .let be a directed graph , which may have self - loops and multiple edges .associated to each vertex is a _ processor _ , which is an automaton with a single input feed and multiple output feeds , one for each edge .each processor reads the letters in its input feed in first - in - first - out order . the processor has an input alphabet and state space .its behavior is governed by a _ transition function _ and _ message passing functions _ associated to each edge .formally , these are maps where denotes the free monoid of all finite words in the alphabet .we interpret these functions as follows .if the processor is in state and processes input , then two things happen : 1 .processor transitions to state ; and 2 . for each edge , processor receives input .if more than one has inputs to process , then changing the order in which processors act may change the order of messages arriving at other processors . concerning this issue, dhar writes that _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ `` in many applications , especially in computer science , one considers such networks where the speed of the individual processors is unknown , and where the final state and outputs generated should not depend on these speeds .then it is essential to construct protocols for processing such that the final result does not depend on the order at which messages arrive at a processor . '' _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ : 1 . the * halting status * ( i.e. 
, whether or not processing eventually stops). 2. the *final output* (the final states of the processors). 3. the *run time* (the total number of letters processed by all processors). 4. the *local run times* (the number of letters processed by a given processor). 5. the *specific local run times* (the number of times a given processor processes a given letter). a priori it is not obvious that these goals are actually achievable by any nontrivial network. in [ s.leastaction ] we will see, however, that a simple local commutativity condition ensures all five goals are achieved. to state this condition, we extend the domain of and to : if is a word in alphabet beginning with , then set and , where the product denotes concatenation of words. let be the free commutative monoid generated by , and write for the natural map, so that for denotes the number of letters in the word. [ d.abelianprocessor ] (abelian processor) the processor is called _abelian_ if for any words such that , we have for all and all edges ; that is, permuting the letters input to the processor does not change its resulting state, and may change each output word it sends only by permuting its letters. [ d.abeliannetwork ] (abelian network) an _abelian network_ on a directed graph is a collection of automata indexed by the vertices of the graph, such that each is abelian. we make a few remarks about the definition: 1. the definition of an abelian network is _local_ in the sense that it involves checking a condition on each processor individually. as we will see, these local conditions imply a ``global'' abelian property (lemma [ l.localglobalabelian ]). 2. a processor is called _unary_ if its alphabet has cardinality one. a unary processor is trivially abelian, and any network of unary processors is an abelian network. most of the examples of abelian networks studied so far are actually unary networks (an exception is the block-renormalized sandpile defined in ). non-unary networks represent an interesting realm for future study. the ``oil and water model'' defined in [ s.oilwater ] is an example of an abelian network that is not a block-renormalized unary network. cellular automata are traditionally studied on the grid or on other lattices, but they may be defined on any directed graph. indeed, we would like to suggest that the study of cellular automata on a graph could be a fruitful means of revealing interesting graph-theoretic properties of that graph. abelian networks may be viewed as cellular automata enjoying the following two properties. *abelian networks can update asynchronously.* traditional cellular automata update in parallel: at each time step, all cells _simultaneously_ update their states based on the states of their neighbors. since perfect simultaneity is hard to achieve in practice, the physical significance of parallel updating cellular automata is open to debate. abelian networks do not require the kind of central control over timing needed to enforce simultaneous parallel updates, because they reach the same final state no matter in what order the updates occur. *abelian networks do not rely on shared memory.*
implicit in the update rule of cellular automata is an unspecified mechanism by which each cell is kept informed of the states of its neighbors. the lower-level interactions needed to facilitate this exchange of information in a physical implementation are absent from the model. abelian networks include these interactions by operating in a ``message passing'' framework instead of the ``shared memory'' framework of cellular automata: an individual processor in an abelian network cannot access the states of neighboring processors, it can only read the messages they send. figure [ fig:venn ] shows increasingly general classes of abelian networks. the oldest and most studied is the _abelian sandpile model_, also called _chip-firing_. given a directed graph, the processor at each vertex has input alphabet and state space , where is the outdegree of the vertex. the transition function is (formally we should write , but when we omit the redundant first argument). the message passing functions are for each edge; here denotes the empty word. thus each time the processor at a vertex transitions from state to state , it sends one letter to each of its out-neighbors (figure [ f.sandpile ]). when this happens we say that the vertex _topples_ (or ``fires''). (figure [ f.sandpile ]: top: a vertex and its out-neighbors. middle: the state diagram of a processor in a sandpile network; dots represent states, arrows represent transitions when a letter is processed, and dashed vertical lines indicate when letters are sent to the neighbors. bottom: the state diagram of a processor in a toppling network.) _toppling networks_ have the same transition and message passing functions as the sandpile networks above, but we allow the number of states (called the _threshold_ of the vertex) to be different from its outdegree. these networks can be concretely realized in terms of ``chips'': the number of chips at a vertex is its internal state plus the number of letters waiting in its input feed. when a vertex holds at least as many chips as its threshold, it can _topple_, losing that many chips and sending one chip along each outgoing edge. in a sandpile network the total number of chips is conserved, but in a toppling network chips may be created (if the threshold is less than the outdegree, as in the last diagram of figure [ f.sandpile ]) or destroyed (if the threshold is larger than the outdegree). note that some chips are ``latent'' in the sense that they are encoded by the internal states of the processors. for example, if a vertex receives one chip and processes it without toppling, then the letter representing that chip is gone, but the internal state increases, representing a latent chip at the vertex. if the vertex receives enough further chips to reach its threshold, its state resets and it topples by sending one letter to each out-neighbor.
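to make the toppling rule concrete, here is a minimal simulation sketch (not taken from the paper; the graph, thresholds and chip counts are illustrative assumptions). stabilizing the same initial chips under two different toppling orders produces the same final configuration and the same count of topplings per vertex, which is exactly the order-independence that the least action principle of [ s.leastaction ] formalizes.

from collections import Counter

# out-neighbors of each vertex; 's' is a sink and never topples
GRAPH = {'a': ['b', 'c'], 'b': ['a', 's'], 'c': ['a', 's'], 's': []}
THRESHOLD = {'a': 2, 'b': 2, 'c': 2}   # vertex v may topple once it holds >= THRESHOLD[v] chips

def stabilize(chips, order):
    """topple unstable vertices until none remain; `order` picks which
    unstable vertex fires next. returns (final chips, odometer)."""
    chips, odometer = Counter(chips), Counter()
    while True:
        unstable = [v for v in THRESHOLD if chips[v] >= THRESHOLD[v]]
        if not unstable:
            return chips, odometer
        v = order(unstable)
        chips[v] -= THRESHOLD[v]            # a toppling removes THRESHOLD[v] chips ...
        odometer[v] += 1
        for w in GRAPH[v]:                  # ... and sends one chip along each out-edge
            chips[w] += 1

start = {'a': 3, 'b': 1, 'c': 2}
final1, odo1 = stabilize(start, order=min)  # always fire the alphabetically first unstable vertex
final2, odo2 = stabilize(start, order=max)  # always fire the alphabetically last unstable vertex
assert final1 == final2 and odo1 == odo2    # the order of topplings does not matter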
it is convenient to specify a toppling network by its _laplacian_, which is the matrix with diagonal entries and off-diagonal entries , where is the number of edges from to in the graph. sometimes it is useful to consider toppling networks where the number of chips at a vertex may become negative. we can model this by enlarging the state space of each processor to include the negative integers; these additional states have transition function and send no messages. in [ s.mip ] we will see that these enlarged toppling networks solve certain integer programs. it is common to consider the sandpile network with a _sink_, a vertex whose processor has only one state and never sends any messages. if every vertex of the graph has a directed path to the sink, then any finite input to the network will produce only finitely many topplings. the set of _recurrent states_ of a sandpile network with sink is in bijection with objects of interest in combinatorics such as oriented spanning trees and -parking functions. recurrent states of more general abelian networks are defined and studied in the sequel paper. [ s.counter ] a _counter_ is a unary processor with state space and transition , which never sends any messages. it behaves like a sink, but keeps track of how many letters it has received. bootstrap percolation is a simple model of crack formation in which each vertex has a threshold. a vertex becomes ``infected'' as soon as its number of infected in-neighbors reaches that threshold, and infected vertices remain infected forever. a question that has received a lot of attention due to its subtle scaling behavior is: what is the probability that the entire graph becomes infected, if each vertex independently starts infected with a given probability? to realize bootstrap percolation as an abelian network, we take the alphabet and state space, with transition and message passing functions chosen so that the internal state of the processor keeps track of how many in-neighbors of the vertex are infected. when this count reaches the threshold, the processor sends a letter to each out-neighbor informing them that the vertex is now infected. a _rotor_ is a unary processor that outputs exactly one letter for each letter input; that is, for all . inputting a single letter into a network of rotors yields an infinite walk, where the vertex visited at step n is the location of the single letter present after n processings. this _rotor walk_ has been studied under various names: in computer science it was introduced as a model of autonomous agents exploring a territory (``ant walk'') and later studied as a means of broadcasting information through a network. in statistical physics it was proposed as a model of self-organized criticality (``eulerian walkers''). propp proposed rotor walk as a way of derandomizing certain features of random walk. most commonly studied are the _simple_ rotor networks on a directed graph, in which the out-neighbors of a vertex are served repeatedly in a fixed order (figure [ f.rotor ]). formally, we set , with transition function and message passing functions . enlarge each state space of a simple rotor network to include a transient state, which transitions to state but passes no message.
starting with all processors in the transient state, the effect is that each vertex ``absorbs'' the first letter it receives, and behaves like a rotor thereafter. if we input letters to one vertex, then each letter performs a rotor walk starting from that vertex until reaching a site that has not yet been visited by any previous walk, where it gets absorbed. propp proposed this model as a way of derandomizing a certain random growth process (internal dla). when the underlying graph is the square grid, the resulting set of visited sites is very close to circular, and the final states of the processors display intricate patterns that are still not well understood. (figure: the dartois-rossin height arrow model (top) and eriksson's periodically mutating game (bottom).) dartois and rossin proposed a common generalization of rotor and sandpile networks called the _height-arrow model_. diaconis and fulton and eriksson studied generalizations of chip-firing in which each vertex has a stack of instructions. when a vertex accumulates enough chips to follow the top instruction in its stack, it pops that instruction off the stack and follows it. these and all preceding examples are _unary networks_, that is, abelian networks in which each alphabet has cardinality one. informally, a unary network on a graph is a system of local rules by which _indistinguishable_ chips move around on the vertices of the graph. next we discuss two non-unary examples. (figure [ f.agents ]: abelian mobile agents in a network whose underlying graph is a path or a cycle. when an agent arrives at a vertex, she transitions the state of that vertex (upward if her own state is red, rightward if blue); updates her own state to red or blue according to the color of the line crossed; and moves to one of the two neighboring vertices according to whether the line is solid or dashed. the boxes indicate periodicity in the diagram.) in the spirit of , one could replace the messages in our definition of abelian networks by mobile agents, each of which is an automaton. as a function of its own internal state and the state of the vertex it currently occupies, an agent acts by doing three things: 1. it changes its own state; 2. it changes the state of the vertex; and 3. it moves to a neighboring vertex. two or more agents may occupy the same vertex, in which case we require that the outcome of their actions is the same regardless of the order in which they act. for purposes of deciding whether two outcomes are the same, we regard agents with the same internal state and location as indistinguishable. this model may appear to lie outside our framework of abelian networks, because the computation is located in the moving agents (who carry their internal states with them) instead of in the static processors. however, a moment's thought shows that it has identical behavior to the abelian network with transition function and message passing function . abelian mobile agents generalize the rotor networks ([ s.rotor ]) by dropping the requirement that processors be unary. the defining property of abelian mobile agents is that each processor sends exactly one letter for each letter received. in figure [ f.agents ] this property is apparent from the fact that each segment of the square grid lies on exactly one message line.
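as a complement to the agent picture, the following minimal sketch (not from the paper; the graph and starting vertex are illustrative assumptions) simulates a single letter fed into a simple rotor network. each processor emits exactly one letter per letter received, so the letter traces out a rotor walk that ends only when it reaches a sink; running further letters through the same rotors, rather than re-initializing them, gives the derandomized walks discussed above.

# out-neighbors listed in the order the rotor serves them; 't' is a sink
GRAPH = {'a': ['b', 'c'], 'b': ['c', 'a'], 'c': ['t', 'a'], 't': []}

def rotor_walk(start):
    """feed one letter into `start` and follow it until it reaches the sink."""
    rotor = {v: 0 for v in GRAPH if GRAPH[v]}   # internal state: index of the out-edge served next
    v, path = start, [start]
    while GRAPH[v]:                             # the sink absorbs the letter
        k = rotor[v]
        rotor[v] = (k + 1) % len(GRAPH[v])      # transition: advance the rotor ...
        v = GRAPH[v][k]                         # ... and pass exactly one letter along out-edge k
        path.append(v)
    return path, rotor

path, rotor = rotor_walk('a')
print(path)   # ['a', 'b', 'c', 't'] for this graph and this rotor initialization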
the _oil and water model_ ([ s.oilwater ]) is a non-unary generalization of sandpiles, inspired by paul tseng's asynchronous algorithm for solving certain linear programs. each edge of the graph is marked either as an oil edge or as a water edge. when a vertex topples, it sends out one oil chip along each outgoing oil edge and one water chip along each outgoing water edge. the interaction between oil and water is that a vertex is permitted to topple if and only if sufficiently many chips of _both_ types are present at that vertex. unlike most of the preceding examples, oil and water cannot be realized with a finite state space, because an arbitrary number of oil chips could accumulate at a vertex and be unable to topple if no water chips are present. we set and , with transition function . the internal state of the processor at a vertex is a vector keeping track of the total number of chips of each type it has received (figure [ f.oilwater ]). stochastic versions of the oil and water model are studied in . (figure [ f.oilwater ]: top: a vertex with its outgoing oil edges and water edges. bottom: each dot represents a state, with the origin at lower left; a toppling occurs each time the state transition crosses one of the bent lines, for example by processing an oil chip just below a line.) in a stochastic abelian network, we allow the transition functions to depend on a probability space: a variety of models in statistical mechanics including classical markov chains and branching processes, branching random walk, certain directed edge-reinforced walks, internal dla, the oslo model, the abelian manna model, excited walk, the kesten-sidoravicius infection model, two-component sandpiles and related models derived from abelian algebras, activated random walkers, stochastic sandpiles, and low-discrepancy random stack can all be realized as stochastic abelian networks. in at least one case the abelian nature of the model enabled a major breakthrough in proving the existence of a phase transition. stochastic abelian networks are beyond the scope of the present paper and will be treated in a sequel. our first aim is to prove a least action principle for abelian networks, lemma [ l.leastaction ]. this principle says, in a sense to be made precise, that each processor in an abelian network performs the minimum amount of work possible to remove all messages from the network. special cases of the least action principle for particular abelian networks have enabled a flurry of recent progress: bounds on the growth rate of sandpiles, exact shape theorems for rotor aggregation, a proof of a phase transition for activated random walkers, and a fast simulation algorithm for growth models. the least action principle was also the starting point for the recent breakthrough by pegden and smart showing existence of the abelian sandpile scaling limit. the proof of the least action principle follows diaconis and fulton (theorem 4.1). our observation is that their proof actually shows something more general: it applies to any abelian network.
moreover , as noted in , the proof applies even to executions that are complete but not legal . to explain the last point requires a few definitions .let be an abelian network with underlying graph , total state space and total alphabet . in this section we do not place any finiteness restrictions on : the underlying graph may be finite or infinite , and the state space and alphabet of each processor may be finite or infinite . we may view the entire network as a single automaton with alphabet and state space . for its states we will use the notation , where and .if the state corresponds to the configuration of the network such that * for each , there are letters of type present ; and * for each , the processor at vertex is in state . allowing to have negative coordinatesis a useful device that enables the least action principle ( lemma [ l.leastaction ] below ) .formally , is just an alternative notation for the ordered pair . the decimal point in intended to evoke the intuition that the internal states of the processors represent latent `` fractional '' messages .note that indicates only the states of the processors and the _ number _ of letters present of each type .it gives no information about the order in which letters are to be processed .indeed , one of our goals is to show that the order does not matter ( theorem [ t.wishlist ] ) . for and , denote by the map where is the transition function of vertex ( defined in [ s.definition ] ) .we define the state transition by where is if and otherwise ; and is the number of s produced when processor in state processes the letter . in other words , where is the message passing function of edge , and the sum is over all outgoing edges from ( both sides are vectors in ) . having defined for letters , we define for a word as the composition . to generalize equation, we extend the domain of to as follows .let and let where is the unique vertex such that .note that if and for , then since acts by identity on and acts by identity on .recall that and is the number of occurrences of letter in the word . from the definition of we have by induction on where . in the next lemma and throughout this paper , inequalities on vectorsare coordinatewise .[ l.monotonicity ] _( monotonicity ) _ for and , if , then . fora vertex let be the projection defined by for and ( the empty word ) for .equation implies that so it suffices to prove the lemma for for each .fix and with .then there is a word such that . given a letter , if then . if , then since is an abelian processor , the first term on the right side equals , and the remaining term is nonnegative , completing the proof . [ l.piscommute ] for , if , then .suppose .then for any we have by lemma [ l.monotonicity ] .since and commute for all , we have .hence the right side of is unchanged by substituting for .an _ execution _ is a word .it prescribes an order in which letters in the network are to be processed . for simplicity, we consider only finite executions in the present paper , but we remark that infinite executions ( and non - sequential execution procedures ) are also of interest .fix an initial state and an execution , where each .set and the _ result _ of executing is .our goal is to compare the results of different executions .the letter is called a _ legal move _ from if . an execution called _ legal _ for if is a legal move from for each .an execution is called _ complete _ for if for all .[ l.leastaction ] _ ( least action principle ) _ if is legal for and is complete for , then and . 
noting that and , it suffices to prove .supposing for a contradiction that , let be the smallest index such that .let and .then , and for all .since is a legal move from , we have by and lemma [ l.monotonicity ] since is complete , the right side is by , which yields the required contradiction .[ l.dichotomy ] _ ( halting dichotomy ) _ for a given initial state and input to an abelian network , either 1 .there does not exist a finite complete execution for ; or 2 .every legal execution for is finite , and any two complete legal executions for satisfy .if there exists a finite complete execution , say of length , then every legal execution has length by lemma [ l.leastaction ] . if and are complete legal executions , then by lemma [ l.leastaction ]. note that in case ( 1 ) any finite legal execution can be extended by a legal move : since is not complete , there is some letter such that is legal .so in this case there is an infinite word such that is a legal execution for all .the _ halting problem _ for abelian networks asks , given , and , whether ( 1 ) or ( 2 ) of lemma [ l.dichotomy ] is the case . in case ( 2 ) we say that _ halts _ on input . in the sequel we characterize the finite abelian networks that halt on all inputs .[ d.odometer ] ( odometer ) if halts on input , we denote by = |w|_a ] is called the _ odometer _ of . by lemma [ l.dichotomy ] ,the odometer does not depend on the choice of complete legal execution .no messages remain at the end of a complete legal execution , so the network ends in state .hence by , the odometer can be written as = |w| = \xx + \mathbf{n}(w,\qq)\ ] ] which simply says that the total number of letters processed ( of each type ) is the sum of the letters input and the letters produced by message passing .the coordinates of the odometer are the `` specific local run times '' from [ s.definition ] .we can summarize our progress so far in the following theorem .[ t.wishlist ] abelian networks have properties from [ s.definition ] . by lemma [ l.dichotomy ] the halting status does not depend on the execution , which verifies item ( a ) .moreover for a given any two complete legal executions have the same odometer , which verifies items ( c)(e ) .the odometer and initial state determine the final state , which verifies ( b ) .the next lemma illustrates a general theme of _ local - to - global principles _ in abelian networks .suppose we are given a partition of the vertex set into `` interior '' and `` output '' nodes , and that the output nodes never send messages ( for example , the processor at each output node could be a counter , [ s.counter ] ) .we allow the possibility that is empty .if halts on all inputs , then we can regard the induced subnetwork of interior nodes as a single processor with input alphabet , state space , and an output feed for each edge .[ l.localglobalabelian ] _ ( local abelianness implies global abelianness ) _if halts on all inputs and is an abelian processor for each , then is an abelian processor .given an input and an initial state , we can process one letter at a time to obtain a complete legal execution for .now suppose we are given inputs such that .by lemma [ l.dichotomy ] , any two complete legal executions for satisfy .in particular , , so the final state of does not depend on the order of input .now given , let be the word obtained from by deleting all letters not in . 
then .for each edge ,since is an abelian processor , so for each the number of letters sent along does not depend on the order of input .for another example of a local - to - global principle , see ( * ? ? ?* lemma 2.6 ) .further local - to - global principles in the case of rotor networks are explored in . in this sectionwe describe a class of optimization problems that abelian networks can solve .let be a finite set and a nondecreasing function : whenever ( inequalities are coordinatewise ) .let be a vector with all coordinates positive , and consider the following problem .let us call a vector _ feasible _ if .if and are both feasible , then their coordinatewise minimum is feasible : therefore if a feasible vector exists then the minimizer is unique and independent of the positive vector : it is simply the coordinatewise minimum of all feasible vectors .let be an abelian network with finite alphabet and finite or infinite state space .fix and , and let be given by where is defined as for any word such that .the function is well - defined and nondecreasing by lemma [ l.monotonicity ] .recall the odometer ] is the unique minimizer of .if does not halt on input , then has no feasible vector . by , any complete execution for satisfies ; and conversely , if then any such that is a complete execution for .if halts on input then the odometer ] is the coordinatewise minimum of all feasible vectors .if does not halt on input , then there does not exist a complete execution for , so there is no feasible vector . for any nondecreasing , there is an abelian network that solves the corresponding optimization problem .its underlying graph is a single vertex with a loop .it has state space , transition function and message passing function satisfying for all and . for the inputwe take and .in general the problem is nonlinear , but in the special case of a _ toppling network _it is equivalent to a linear integer program of the following form . here has all coordinates positive ; is the laplacian matrix ( [ s.toppling ] ) ; and where is the number of chips input at and is the threshold of .the coordinate of the minimizer is the number of times topples . to see the equivalence of and for toppling networks , note that takes the following form for a toppling network : where is the diagonal matrix with diagonal entries , and denotes the coordinatewise greatest integer function . using that is a nonnegative matrix , one checks that is feasible for if and only if is feasible for .we indicate here a few directions for further research on abelian networks .other directions are indicated in the sequels . what does an abelian network `` know '' about its underlying graph ?for instance , chan , church and grochow have shown that a rotor network can detect whether its underlying graph is planar ( with edge orderings respecting the planar embedding ) .theorem [ t.wishlist ] shows that abelian networks can compute asynchronously , and theorem [ t.mip ] gives an example of something they can compute. 
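as a concrete illustration of the toppling-network case, the sketch below (not from the paper; the edge multiplicities, thresholds and chip input are illustrative assumptions, and `remaining` and `odometer` are hypothetical helper names) computes the odometer by stabilization and then checks by brute force that it is the coordinatewise minimum over all complete toppling vectors in a small box, in line with the least action principle.

from itertools import product

# toppling network on vertices 'a', 'b' with a sink 's'; OUT[u][v] = number of edges from u to v
OUT = {'a': {'b': 1, 's': 1}, 'b': {'a': 1, 's': 1}}
R   = {'a': 2, 'b': 2}        # thresholds
C   = {'a': 3, 'b': 2}        # chips input at each vertex

def remaining(x):
    """chips left at each non-sink vertex after toppling v exactly x[v] times."""
    return {v: C[v] + sum(x[u] * OUT[u].get(v, 0) for u in OUT) - x[v] * R[v] for v in OUT}

def odometer():
    """compute the odometer by greedy stabilization (a legal execution)."""
    x = {v: 0 for v in OUT}
    while True:
        rem = remaining(x)
        unstable = [v for v in OUT if rem[v] >= R[v]]
        if not unstable:
            return x
        x[unstable[0]] += 1   # topple one currently unstable vertex

odo = odometer()              # here: {'a': 2, 'b': 2}

# least action check: the odometer is the coordinatewise minimum over all
# 'complete' toppling vectors (those leaving every vertex below threshold)
box = [dict(zip(OUT, t)) for t in product(range(5), repeat=len(OUT))]
complete = [x for x in box if all(r < R[v] for v, r in remaining(x).items())]
assert all(odo[v] == min(x[v] for x in complete) for v in OUT)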
it would be interesting to explore whether abelian networks can perform computational tasks like shortest path , pagerank , image restoration and belief propagation .in [ s.cellular ] we have emphasized that abelian networks do not rely on shared memory .yet there are quite a few examples of processes with a global abelian property that do .perhaps the simplest is _ sorting by adjacent transpositions _: suppose is the path of length and each vertex has state space .the processors now live on the edges : for each edge the processor acts by swapping the states and if .this example does not fit our definition of abelian network because the processors of edges and share access to the state .indeed , from our list of five goals in this example satisfies items ( a)(c ) only : the final output is always sorted , and the run time does not depend on the execution , but the local run times do depend on the execution . what is the right definition of an abelian network with shared memory ?examples could include the numbers game of mozes , -cores of graphs and hypergraphs , wilson cycle popping and its extension by gorodezky and pak , source reversal and cluster firing .the work of krohn and rhodes led to a detailed study of how the algebraic structure of monoids relates to the computational strength of corresponding classes of automata .it would be highly desirable to develop such a dictionary for classes of automata _networks_. thus one would like to weaken the abelian property and study networks of solvable automata , nilpotent automata , etc .such networks are nondeterministic the output depends on the order of execution so their theory promises to be rather different from that of abelian networks. it could be fruitful to look for networks that exhibit only limited nondeterminism .a concrete example is a sandpile network with annihilating particles and antiparticles , studied by robert cori ( unpublished ) and in under the term `` inverse toppling . ''the authors thank spencer backman , olivier bernardi , deepak dhar , anne fey , sergey fomin , christopher hillar , michael hochman , alexander holroyd , benjamin iriarte , mia minnes , ryan odonnell , david perkinson , james propp , leonardo rolla , farbod shokrieh , allan sly and peter winkler for helpful discussions .sergio caracciolo , guglielmo paoletti and andrea sportiello , multiple and inverse topplings in the abelian sandpile model ._ the european physical journal - special topics _ * 212*(1)2344 , 2012 .http://arxiv.org/abs/1112.3491[arxiv:1112.3491 ] yao - ban chan , jean - franois marckert , and thomas selig , a natural stochastic extension of the sandpile model on a graph , _ j. combin .theory a _ * 120*(7):19131928 , 2013 .http://arxiv.org/abs/1209.2038[arxiv:1209.2038 ] denis chebikin and pavlo pylyavskyy , a family of bijections between -parking functions and spanning trees , _ j. combin .theory a _ * 110*(1):3141 , 2005 .http://arxiv.org/abs/math/0307292[arxiv:math/0307292 ] deepak dhar , studying self - organized criticality with exactly solved models , 1999 .deepak dhar , some results and a conjecture for manna s stochastic sandpile model , _ physica a _ * 270*:6981 , 1999 .http://arxiv.org/abs/cond-mat/9902137[arxiv:cond-mat/9902137 ] ronald dickman , leonardo t. rolla and vladas sidoravicius , activated random walkers : facts , conjectures and challenges , _ j. stat .phys . 
_ * 138*(1 - 3):126142 , 2010 .http://arxiv.org/abs/0910.2725[arxiv:0910.2725 ] benjamin doerr , tobias friedrich and thomas sauerwald , quasirandom rumor spreading , _ proceedings of the nineteenth annual acm - siam symposium on discrete algorithms ( soda 08 ) _ , pages 773781 , 2008 .http://arxiv.org/abs/1012.5351[arxiv:1012.5351 ] giuliano pezzolo giacaglia , lionel levine , james propp and linda zayas - palmer , local - to - global principles for the hitting sequence of a rotor walk , _ electr. j. combin . _ * 19*:p5 , 2012 .http://arxiv.org/abs/1107.4442[arxiv:1107.4442 ] alexander e. holroyd , sharp metastability threshold for two - dimensional bootstrap percolation , _ probab .theory related fields _ , * 125*(2):195224 , 2003 .http://arxiv.org/abs/math/0206132[arxiv:math/0206132 ] alexander e. holroyd , lionel levine , karola mszros , yuval peres , james propp and david b. wilson , chip - firing and rotor - routing on directed graphs , in _ in and out of equilibrium 2 _ , pages 331364 , progress in probability * 60 * , birkhuser , 2008 .http://arxiv.org/abs/0801.3306[arxiv:0801.3306 ] alexander e. holroyd and james g. propp , rotor walks and markov chains , in _ algorithmic probability and combinatorics _ , american mathematical society , 2010 .http://arxiv.org/abs/0904.4507[arxiv:0904.4507 ]v. b. priezzhev , deepak dhar , abhishek dhar and supriya krishnamurthy , eulerian walkers as a model of self - organised criticality , _ phys .lett . _ * 77*:50795082 , 1996 .http://arxiv.org/abs/cond-mat/9611019[arxiv:cond-mat/9611019 ] alexander postnikov and boris shapiro , trees , parking functions , syzygies , and deformations of monomial ideals ._ trans .soc . _ * 356*(8):31093142 , 2004 .http://arxiv.org/abs/math.co/0301110[arxiv:math.co/0301110 ] jamespropp , random walk and random aggregation , derandomized , 2003 .james propp , discrete analog computing with rotor - routers ._ chaos _ * 20*:037110 , 2010 .http://arxiv.org/abs/1007.2389[arxiv:1007.2389 ] leonardo t. rolla and vladas sidoravicius , absorbing - state phase transition for driven - dissipative stochastic dynamics on , _ inventiones math ._ * 188*(1):127150 , 2012 .http://arxiv.org/abs/0908.1152[arxiv:0908.1152 ] israel a. wagner , michael lindenbaum and alfred m. bruckstein , smell as a computational resource a lesson we can learn from the ant , _ 4th israeli symposium on theory of computing and systems _ , pages 219230 , 1996 .
|
in deepak dhar s model of abelian distributed processors , automata occupy the vertices of a graph and communicate via the edges . we show that two simple axioms ensure that the final output does not depend on the order in which the automata process their inputs . a collection of automata obeying these axioms is called an _ abelian network_. we prove a least action principle for abelian networks . as an application , we show how abelian networks can solve certain linear and nonlinear integer programs asynchronously . in most previously studied abelian networks , the input alphabet of each automaton consists of a single letter ; in contrast , we propose two non - unary examples of abelian networks : _ oil and water _ and _ abelian mobile agents_.
|
motivated by the observation of cell communities over time , we propose a topological expression of the process that facilitates the identification and quantification of its features . in this manuscript , we focus on the mathematical framework . [ [ motivation . ] ] motivation .+ + + + + + + + + + + the specific motivation for the work described in this paper is the experimental data on cell segregation in the developing zebrafish imaged by heisenberg and krens . making cells of two populations observable through fluorescent markers , they follow them through time , assigning each population ( cell type ) its own unique color .in this particular case , the two populations start spatially mixed and end in spatially segregated configurations .the segregation is captured by a series of -dimensional images , which we turn into a shape in space - time .spatial segregation is a special case of the broader class of _ spatial sorting processes _ , in which we are given one or more distinguishable populations of particles ( points in space ) , and we are interested in characterizing their spatial rearrangement in time .we aim at characterizing the spatial sorting process through detailed measurements of its features .the quantification may be used to establish a classification of spatial sorting processes or , on a finer scale , to differentiate between realizations of the same process .a common biological application is the establishment of phenotypes that can help in the classification of genetic influences .once we have a description of the process beyond initial and final states , we may ask more subtle questions , such as whether an observed inverse process has a symmetric characterization . [ [ results . ] ] results .+ + + + + + + + our contributions are primarily mathematical , with the goal of using the insights toward the quantitative analysis of experimental time - series data : 1 .we _ model _ a spatial sorting process as a shape in space - time with descriptive topological properties ; 2 .we _ measure _ this shape using the persistent homology of the time function ; 3 .we _ provide _ a classification of the measurements , interpreting them as aspects of the process .note that measuring the process in space - time is different from taking the trajectory of measurements of the sequence of time - slices .indeed , we will distinguish between _ temporary _ space - time features , that can be observed in slices , from _ fleeting _ features that can not be so observed .the latter require memory and temporal reasoning and are therefore less readily accessible to an observer who lives in time .the main idea of our approach is to turn the time - series of geometric data into a -dimensional topological space whose connectivity is descriptive of the spatial sorting process .the construction takes three steps to produce the measurements : step 1 : : : construct the voronoi tessellation to turn the data in a time - slice into a subset of ; step 2 : : : maintain the construction through time , effectively sweeping out a subset of ] , with .we let be a finite set of trajectories , assuming no two intersect in space - time ; that is : for all in and all ] for the set ,we let ] , we have a finite set of points , , of course taking only the trajectories with .the finite set of points is also colored , with coloring induced by .[ [ voronoi - and - delaunay - medusas . 
] ] voronoi and delaunay medusas .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + for each point , we write for its voronoi cell in .the voronoi tessellation at time is denoted as , and the subset of voronoi cells of color is denoted as .collecting voronoi cells in time , we get a -parameter family of cells generated by a trajectory : } { { \ifmmode{\rm vor}\else{\mbox{\(\rm vor\)}}\fi}{({{{\ifmmode{\tt u}\else{\mbox{\(\tt u\)}}\fi}}(t ) } ) } } .\label{eqn : voronoi}\end{aligned}\ ] ] noting that the voronoi cells on the right hand side of lie in distinct parallel copies of , we call a _ stack_. while the voronoi cell in each time - slice is a -dimensional convex polyhedron , the stack itself is neither necessarily convex nor necessarily polyhedral ; see figure [ fig : stacks ] on the left .switching fonts , we write for the set of stacks , each defined by a trajectory in . for each , we write for the subset of stacks generated by trajectories of color .we call the _ multi - chromatic _ and each a _ mono - chromatic voronoi medusa_. similar to stacks of voronoi cells , we also consider stacks of delaunay simplices , which we call _ prisms_. there is an important difference caused by the occasional sudden change of the delaunay triangulation .such a change is a _ flip _ , which either replaces two tetrahedra by three , or three tetrahedra by two ; see e.g. . in -dimensional space - time, a flip appears as a ( degenerate ) -simplex that connects to the preceding delaunay triangulations along two tetrahedra and to the succeeding delaunay triangulations along three tetrahedra , or the other way round .reducing the insertion or deletion of a point to a sequence of flips , as described in ( * ? ? ?* section 5.3 ) , we see that prisms and -simplices suffice to fully describe the history of the delaunay triangulation .we write for the complex in ] . [[ restricted - voronoi - and - alpha - medusas . ] ] restricted voronoi and alpha medusas .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + following the distinction between unrestricted and restricted voronoi cells , we extend the latter to space - time in the obvious way . we thus introduce a _ stack _ of restricted voronoi cells , } { { \ifmmode{\rm res}\else{\mbox{\(\rm res\)}}\fi}{({{{\ifmmode{\tt u}\else{\mbox{\(\tt u\)}}\fi}}(t ) } ) } } , \end{aligned}\ ] ] the set of such stacks , , and the complex of prisms and -simplices swept out by the simplices in the alpha complex , .furthermore , we introduce the colored subsets , , and the colored subcomplexes , , with ] . as before , we simplify the notation by ignoring the difference between a collection of cells and its union .we have , for each color ] , maps every point to its time coordinate , .given a moment in time , ] , and the corresponding _ superlevel set _ is ] .we omit the proof which is similar to that of lemma b. it is important to realize that lemma c extends to all mono - chromatic medusas .[ [ complexes - of - simplices . 
] ] complexes of simplices .+ + + + + + + + + + + + + + + + + + + + + + + following the proof of lemma b , we find it convenient to replace the sub- and superlevel sets of the alpha medusa by complexes of simplices .specifically , we write for the complex obtained by contracting all prisms in in time direction .symmetrically , we write for the complex obtained by contracting all prisms in .there is an alternative description .let be a prism in , and write and for the minimum and maximum time values of the points in .we assign these two values as to the simplex in obtained by contracting .then is the subcomplex of simplices with , and is the subcomplex of simplices with .note that and overlap in the simplices that correspond to the prisms with .similar to the sub- and superlevel sets , the complexes of simplices form filtered sequences : again , the transformation does not affect the homotopy type .indeed , the complexes forming the filtered sequences in and are reminiscent of the lower- and upper - star filtrations we find in .in this section , we turn the spaces of section [ sec2 ] into algebraic information .the foundation of the transformation is the classical notion of homology , which we review .the information is summarized in persistence diagrams , which we introduce for modules obtained from sub- and superlevel sets of the time function .homology groups detect and count holes in a single space .we begin with a brief introduction of this classical subject ; see for more information .[ [ homology - groups . ] ] homology groups .+ + + + + + + + + + + + + + + + consider a simplicial complex , perhaps an alpha complex , , which consists of simplices of dimension .fixing a field of coefficients , , we call a formal sum of the form a _-chain _ , in which the are elements of and the are -simplices in , each with a fixed orientation .the _ boundary _ of the -chain is the similarly weighted sum of boundaries of the simplices : , in which is the sum of the -simplices that are its faces .we call a _ -cycle _ if , and we call a _ -boundary _ if there is a -chain with .the chains thus form _ chain groups _ connected by boundary homomorphisms , . similarly ,the cycles form _ cycle groups _ , the kernels of the boundary homomorphisms , , and the boundaries form _ boundary groups _ , the images of the boundary homomorphisms , .since the boundary of a boundary is necessarily zero , we have and we can take the quotient , , which is the _-th homology group_. homology groups can be defined quite generally , for example , as explained above for triangulations of topological spaces .since we choose the coefficients from a field , all groups we mentioned are vector spaces , which are characterized by their ranks . for the -th homology group , the rankis called the _-th betti number _, denoted as . for a space of dimension ,the only possibly non - zero betti numbers are to . in our case, has dimension at most , and we have because every -chain in has non - zero boundary .the remaining three possibly non - zero betti numbers have intuitive interpretations : counts components , counts loops , and counts completely surrounding walls .we get additional intuition by observing that the connectivity of the complement space , , depends on the connectivity of , a relation formalized by alexander duality .we refer to the elements of the homology group of the complement space as the _ holes _ of . 
distinguishing between the different dimensions , counts _ gaps _ between the components , counts _ tunnels _ passing through the loops , and counts _ voids _ surrounded by walls .we will compute betti numbers for medusas in ] . proof .recall the exact sequences of the two pairs , which we write from left to right and in parallel : by lemma c , the vertical maps between the first , second , fourth , and fifth groups are isomorphisms .the diagram commutes because all maps are induced by inclusion .the steenrod five lemma thus implies that the vertical maps between the middle groups is also an isomorphism .similar to lemma c , lemma d extends to the mono - chromatic case .it is instructive to display the information contained in a persistence module as a finite multi - set of points ( referred to as _ dots _ ) in the plane . after explaining how this is done, we prove that the sequences of homotopy equivalent spaces introduced above give the same diagram .[ [ persistence - diagram . ] ] persistence diagram .+ + + + + + + + + + + + + + + + + + + + as before , we consider the restricted voronoi medusa , , the time function ] . by construction, we have and .applying the homology functor , we get finitely many distinct groups , which we denote as for the restricted and as for the unrestricted voronoi medusa . aligning the two modules, we repeat groups as necessary and arrange them in a commuting diagram : writing for the homomorphism induced by the inclusion , we are interested in the sequence of images : similar to the module of homology groups , is a sequence of vector spaces connected by homomorphisms .we can therefore define births and deaths .we refer to the corresponding multiset of dots as the _ image persistence diagram _ , denoted as ; see for a detailed discussion of this construction and for an algorithm .what does the diagram measure ? in the assumed case ofthe multi - chromatic restricted included in the multi - chromatic unrestricted voronoi medusa , it measures nothing interesting , simply because the groups are not interesting .this is different in the mono - chromatic case . here, we may have a cycle defined by data points of color surrounding points of color different from . in this case, we have a non - trivial class in a ( closed or open ) sublevel set of the restricted voronoi medusa whose image in the corresponding sublevel set of the unrestricted voronoi medusa is still non - trivial .indeed , a cycle in gives rise to a dot in iff it corresponds to a hole formed by points of color different from .if the hole is not formed by such points , then it does not exist in the unrestricted voronoi medusa , the class maps to , and there is no corresponding dot in the image persistence diagram . instead of , we can use the inclusion of in to recognize when holes are caused by interactions between different colors .the algebraic set - up is the same , so we do not need to repeat it . as described in the experimental section [ sec5 ] , the latter inclusion seems to be more effective than the inclusion in the mono - chromatic voronoi medusa .this is perhaps related to the fact that the most interesting is also the most difficult case in this respect , namely that of a -dimensional homology class in .[ [ equivalence.-1 ] ] equivalence .+ + + + + + + + + + + + similar to the conventional case , the image persistence diagrams do not depend on the representation of the space and the time function we use . 
specifically , we get the same diagrams for the inclusion of in as for the inclusion of in . and are equal as diagrams .proof . arrange the four modules in a -dimensional diagram , in which the -dimensional section at time consists of the ( absolute ) homology groups of the sublevel sets : or of the relative homology groups if the section is taken during the second phase . in all three directions ,the maps are induced by inclusion , so the diagram commutes .this implies that we have maps between the corresponding two modules of images . by lemmas c and d ,the horizontal maps in the -dimensional diagram are isomorphisms , which implies that the maps connecting the two modules of images are isomorphisms .the claimed relation is implied by the persistence equivalence theorem .similar to lemmas c , d , and e , lemma f extends to the mono - chromatic case .we note that the two derived persistence diagrams are best computed using the complexes of simplices , and .indeed , these can be connected by a mapping cylinder , giving a complex of simplices and prisms , not unlike but different from the alpha medusa . from this complex, we get the image persistence diagrams by standard matrix reduction ; see .in this section , we discuss the information contained in the extended persistence module defined by the time function , interpreting the corresponding events in static space - time as well as in dynamic temporal language .we begin with the -dimensional case as a warm - up exercise , but also to facilitate the comparison with the -dimensional case . here , the medusa is embedded in ] with ] , which is therefore more difficult to understand .in contrast , the medusas fix to and in this way facilitate a more compact representation of a subset of the vineyard .this is appropriate for biological cells whose size does not vary substantially with time , and it gives topological information that is easier to interpret and more directly relevant to the object of study .however , we need to keep in mind that with this choice , the persistence diagrams are not stable under the bottleneck distance .the results are particularly sensitive to the radius , which leaves holes if chosen too small and absorbs holes if chosen too big .the work in this paper has motivated the extension of alexander duality from spaces to functions , as proved in .in particular , the euclidean shore theorem in that paper states that the persistence diagram of the time function restricted to the boundary of a voronoi medusa can be obtained from the diagram of the time function on the voronoi medusa .generically , the boundary is a -manifold without boundary , so that the time function can be understood in morse theoretic terms , which is sometimes convenient . in conclusion, we note that the framework introduced in this paper applies to general point processes that unfold in time . the latter model a variety of problemsof which we mention a few : molecules of two fluids mixing after a shock - wave ; microbes forming microfilms ; a flock of birds getting into formation ; two teams competing in a soccer game ; galaxies moving under the influence of gravity .the measurements taken within this framework are significantly coarser than what could be said by studying braids or the loop spaces of configuration spaces needed to detect topological differences for particles moving in dimensions .this is not necessarily a disadvantage since the coarser information is easier to compute as well as to comprehend .
|
we consider the simultaneous movement of finitely many colored points in space , calling it a _ spatial sorting process_. the name suggests a purpose that drives the collection to a configuration of increased or decreased order . mapping such a process to a subset of space - time , we use persistent homology measurements of the time function to characterize the process topologically .
|
in this paper i would like to discuss the general issue of the scope of `` impossibility proof '' of various different kinds , ranging from the classical straightedge / compass construction to hidden - variables and bit commitment in quantum physics .the aim is to highlight the difficulties in characterizing all the possibilities that can be obtained in principle in the real world , which are purportedly ruled out by the impossibility proofs .the issue is of fundamental significance for understanding the relation between real world features and their mathematical representations , a subject of great importance and subtlety in general .more specifically , i would also point out the non - existence of a universal quantum bit commitment ( qbc ) impossibility proof .the presumed existence of such a proof is widely held , which sociologically and psychologically closes out a field that is rare in the area of quantum information in its potentially significant and realistic applications . a new way for obtaining secure qbc protocolswould also be indicated .consider the following list of impossibility ( non - existence ) claims , in chronological order of their first appearance , which are supported by `` proofs '' of various sorts to be analyzed .a. no trisection of an arbitrary angle with straight edge and compass : + + it is not possible to construct in a finite number of steps an angle equal to one - third of an arbitrarily given angle using only a compass and an unmarked straight edge. b. church - turing thesis : + + it is not possible to find a mechanical procedure that can not be implemented by a turing machine . c. no - hidden variable theorem of von neumann : + + there is no hidden variable theory that would reproduce the predictions of quantum mechanics .d. no - clone theorem : + + there is no physical system that would produce at the output two identical copies of an input quantum state drawn from a set of two nonorthogonal states .e. impossibility of quantum bit commitment : + + there is no qbc protocol that is arbitrarily close to being perfectly secure for both parties .what is remarkable about an impossibility theorem of the above variety is that it rules out not just mathematical possibilities but physical realistic ones .this implies that all the relevant physical possibilities in the real world for the problem situation under consideration have been taken into account in the mathematical formulation .even given a physical theory describing the real world such as quantum physics , this may or may not be possible depending on whether one can characterize mathematically , or at least include in a mathematical description , _ all _ the apparent possibilities .this is because generally we know of no mathematical characterization of all the possible real world referents of a given clearly meaningful natural language sentence .we illustrate this point with ( i ) , perhaps the oldest impossibility claim .how do we characterize all possible straight edge / compass constructions so we may prove , say , it is a logical contradiction if any one of them can be used to trisect an arbitrary angle .indeed , from the standard impossibility proof as obtained from galois theory , that is not possible even for some specific angles , say . on the other hand , it is well - known that such construction is possible if the straight edge is marked , or if , and so on . 
a simple such construction is given by archimedes , the greatest ancient greek mathematician according to many , which works as illustrated in fig . 1. how does one exclude it in the description `` compass and unmarked straight edge '' ? given a compass ,i contend that there is no need to mark a straight edge in order to get an effectively marked one as follows .one sets the basic unit measure with the compass , e.g. , from of fig . 1 , and then flips it along the straight edge to get any integer multiple . for the archimedean construction , one could slide the compass pointer along the straight line and see when the pencil would cut the circle .i do not wish to go into zeno s paradoxes here , but the above seems to be a real world construction one can carry out with a compass and straight edge .of course such operations are not allowed in the intended description of `` compass and unmarked straight edge '' .a much less misleading description of the allowed operations can be given in this case with a more precise specification indeed a perfectly precise one is given algebraically which is to be translated to the `` compass and straight edge '' language in some way .my point here is that the algebraic restriction is precise but the corresponding `` compass and straight edge '' description is not , certainly not for their possible operations in the real world .there are manifold problems for such precise translation , which i think lie at the foundation of much human knowledge with little known results .my conclusion on this case ( i ) is that there is no mathematical characterization of all possible `` compass and unmarked straight edge '' operations in the real world .the impossibility proof works for a very specific subset of such operations which are taken , perhaps appropriately when described in sufficient detail , to correspond exactly to a set of algebraic operations .the significance and difficulty of characterizing all possible real world operations , even just in classical physics , is illustrated by case ( ii ) .it is taken to be an empirical fact that a turing machine , or any of its equivalents such as a post machine , can simulate any mechanical procedure .the claim is rightly called a thesis , _ not _ a theorem , not only because it is unproven , but because one can not have a theorem on something in the real world in this case , a mechanical procedure which does not correspond to a primitive of the mathematical system and also has no mathematical definition in terms of the primitives .this lack of a mathematical definition for `` mechanical procedure '' does not prevent the thesis from being meaningful in the real world as long as one can recognize some , maybe a lot but all is not required procedures to be mechanical .however , it may be remarked that zeno s paradoxes and corresponding `` infinite machines '' can be construed as contrary claims to the church - turing thesis , precisely because `` mechanical procedure '' is not a precisely characterized concept . 
case ( iii ) on von neumann s no - hidden variable theorem provides a clear illustration of the problem of how one may characterize mathematically all systems or processes of a certain kind , in this case a `` hidden - variable theory '' .i would not discuss here the now well known suspension of von neumann s fifth linearity requirement of a hidden variable theory , which is deemed unnecessary for both local and nonlocal such theories .a deeper problem still exists on what one means by a `` local theory '' , which is exemplified by the recent controversy on the non - existence of local hidden - variable theory supposedly given by the `` bell theorem '' .the no - clone theorem of case ( iv ) is indeed a theorem concerning all processes described by quantum physics , because all such processes can be included in a proper mathematical representation . the usual proof contains the basic ingredients of a complete proof that can be spelled out readily .for the rest of this article i will talk about case ( v ) on qbc ._ at best _ a general qbc impossibility claim would have the status of a thesis as in case ( ii ) for turing machines as ozawa emphasized , for the similar reason that no one knows the mathematical characterization of all qbc protocols , or indeed of all protocols of any kind , classical or quantum .as in the case of a mechanical procedure , one can typically recognize a qbc protocol when presented one although one can not include them all in a mathematical formulation . in this connection , it may be observed that it is possible to characterize all attacks on a given protocol of any kind , although that is already a somewhat subtle issue the relation between reality and its mathematical representation is almost always a subtle issue from the vantage point of present human knowledge .note also that if there is a problem of characterizing all attacks on a protocol , it would not be possible to give unconditional security proofs such as those claimed for quantum key distribution .despite this , it is widely accepted that there is a universal impossibility _ proof _ on secure qbc since 1997 , and the recent paper adds to this impression despite its ambivalence on this point .the implicit claim has always been that _ all _ possible qbc protocols have been included in the mathematical formulation of the problem given in the paper .the fact that new features were introduced that were not covered in previous formulations has always been ignored , as long as the feature has not led to a widely accepted secure protocol which is the case thus far .the general impossibility claim is repeated with the insistence that there is a _ proof _ for it. there have been contrary claims from time to time appearing in the quant - ph archive that secure qbc is possible with specific protocols and security proof sketches .unfortunately , it is notoriously difficult to understand someone else s qbc protocol and security proof .thus , i confine myself in the following to only the _ new _ features i myself introduced in protocols of various sorts that have not been covered in _ previous _ impossibility proofs. 
the discussion would be just in words and hopefully understandable to anyone somewhat knowledgeable on the issue as in cases ( i)-(iv ) above .the intention is to lay out the issues in a way that preliminary assessment can be made without going into technical details .it is important to re - emphasize that there are _ two distinct issues _ here .one is whether there is a universal impossibility proof and the other is whether there is a qbc protocol that has been proved secure .the very fact that some features were not covered in previous proofs shows that there was _ no valid basis _ to claim the existence of a universal impossibility proof , regardless of whether any secure protocol has been found . in the following, i will discuss a specific new feature not covered in any of the impossibility proofs and which leads to secure protocols .broadly speaking , i have given three different new ways to obtain a possibly secure protocol since 2000 at the qcmc in capri , italy .at least two of these were not covered by any impossibility proof known to anyone at the time they were first discussed , in print or in saying .they are:- 1 .use of anonymous states : + a party uses classical random numbers in the protocol not known to the other party .2 . action from b to prevent entanglement cheating from a : + the idea is for b to destroy a s ( cheating ) entanglement by a s action .testing before commitment or opening : + the idea is to destroy a or b s own ( cheating ) entanglement by demanding answers which force measurement on his / her own ancilla . in case ( 1 ) with anonymous states , much initial reaction was that the classical randomness can be purified quantum mechanically and the resulting pure state is assumed openly known .this is clearly not a realistic assumption , and does not obtain under what i call the secrecy principle for qbc protocols : whatever a party does is not required to be revealed to the other party if it does not permit the first party to cheat .clearly , such use of classical random numbers must be allowed as it is in qkd protocols .it has nothing to do with kerckhoffs principle as alleged in . in this connection, it may be noted that something must be kept private in any kind of secure protocol . in the qbc case, it is not realistic to assume that a party knows the randomness purification basis of the other party .a theorem based on such an assumption has no real world application .it turns out it appears impossible to get a secure protocol this way . for perfectconcealing , a general proof of this impossibility was given in for a two - round protocol . a different argument applicable to multi - round protocolswas given by ozawa and later by cheung .for approximate concealing , a simple proof covering all natural protocols was given by me in a slide prepared for the 2005 qit meeting in sendai , japan .more complicated versions are also given in and . in situation( 2 ) , the general futility of b s action follows from the simple commutativity of a and b s actions when there is no checking intervention in between . while it can reasonably be maintained that such possibility is implicitly covered by the known impossibility proofs due to its technical simplicity , it is conceptually far from trivial and should be indicated in a general impossibility proof , especially one that goes not by deriving a general contradiction but by examining each party s possible actions . 
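the commutativity invoked just above is elementary but easy to check explicitly . the following numpy sketch ( an illustration only ; the random - unitary helper and the dimensions are arbitrary choices , not anything taken from a particular protocol ) confirms that two operations acting on disjoint subsystems can be applied in either order with the same result .

```python
import numpy as np

rng = np.random.default_rng(1)

def random_unitary(d):
    # qr decomposition of a random complex matrix, phase-fixed; any unitary would do here
    z = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

d = 2
u_a = np.kron(random_unitary(d), np.eye(d))   # a acts only on her share of the system
u_b = np.kron(np.eye(d), random_unitary(d))   # b acts only on his share
print(np.allclose(u_a @ u_b, u_b @ u_a))      # True: with no intervening check,
                                              # the order of the two local actions is irrelevant
```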
in this connection, it may be observed there was really no place where one can find a systematic qbc impossibility proof in the literature other than vague generalities , till the appearance of .this is indicative of the state of the field .unfortunately , the formulation and proof in is not given or translated into the usual language , and thus is very difficult to understand and to assess .it is not even clear whether a universal impossibility proof is claimed .the testing possibilities for case ( 3 ) are clearly legitimate protocol elements similar ones are used in qkd protocols .they were simply not covered in the impossibility proofs , not completely even in .it was found that entanglement cheating may be retained for some such proposed protocols via proper l measurement on part of the ancilla .indeed , it turns out that if a party has the choice of many possible entanglements that yield the same mixed result for the other party , such as a full permutation entanglement for an unknown bit position , there is no known way to create a secure protocol this way . on the other hand ,if the party is limited to use a _ specific entanglement _ , it can be destroyed by his / her own measurement upon answering truthfully to questions .this possibility is explicitly described in as my protocol qbc3 .it works in a similar way in a different setting as my protocol qbc1 .how does one party know a specific entanglement has actually been carried out instead of some other action by the other party ? in the qbc literature , the claim has always been that even under _ honest entanglement _ with all classical randomness purified , no protocol can be secure .such claim is made without allowance for checking .there was no discussion on even what would happen if one is caught cheating during protocol execution .the usual implicit assumption is that there is no need for anyone to cheat during testing .nevertheless , it is important to note that whether a party is using the prescribed entanglement _ can _ indeed be checked before commitment in many protocols including my qbc1 and qbc3 .in contrast , this can not be accomplished in a single - pass protocol to prevent a s entanglement cheating .intuitively , a party s entanglement cheating possibility may be destroyed when he / she has to measure on his / her ancilla to provide correct answers during testing .if the entanglement is sparse , the measurement would totally disentangle the corresponding bit sequence but it _ still appears mixed _ to the other party in the form of classical randomness with no quantum purification .the situation is similar to the case of one - pass protocols in which a picks a state randomly without actual entanglement .the protocol is concealing but a can not cheat anyway . 
on the other hand , there can be no checking in one - pass protocols , so a can not be assumed to be `` honest '' and thus to refrain from entangling the choices . this subtle possibility in my qbc1 and qbc3 was overlooked . the multiple possible entanglements all work the same if there is no testing later . in fact , the one with minimal ancilla dimension is suggested in . thus , what is missed in is the possibility that a specific entanglement is _ dictated _ by the protocol , which upon testing can be destroyed to prevent further cheating . the situation here is in a way similar to case ( i ) on angle trisection in the difficulty of covering all real world possibilities . it is similar to case ( iii ) on no hidden - variables in that only a specific formulation is covered , one that does not exhaust all possible qbc protocols one can construct . it may be pointed out that one can not dismiss the above secure possibility by claiming there is no ( approximate ) concealing proof for protocols qbc1 and qbc3 . but even if there is none , the possibility is not covered by known impossibility proofs , because such a multiple - entanglement possibility for the same classical randomness has not been related to concealing . here is an example of the two distinct issues mentioned above . secure bit commitment is an extraordinarily useful tool that may be used to perform many tasks . the area of quantum information would be enormously enriched if qbc is included in its development . note that the entanglement cheating needed against a qbc protocol is not even on the experimental horizon and may never be . indeed , many qbc protocols are more _ practically secure _ than any known qkd protocol , and a proper quantitative security comparison between them deserves to be made . the claim that fully secure qbc protocols can not be obtained , as asserted by the qbc impossibility proofs , constitutes a sociological and psychological barrier to the development of qbc . it is a difficult task to overcome this widely held perception , a task i hope to begin with this paper . one should be able to tell , independently of my own specific protocols , that secure quantum bit commitment is far from a settled issue .

i would like to thank c.y. cheung , g.m. d'ariano , r. nair and m. ozawa for many useful discussions .

see , e.g. , b. bold , _ famous problems of geometry and how to solve them _ , dover publications , 1969 .
see , e.g. , _ zeno s paradoxes _ , ed . by w.c. salmon , bobbs - merrill co. , new york , 1970 .
see , e.g. , j.s. bell , rev . mod . phys . _ 38 _ , 447 ( 1966 ) .
j. christian , quant - ph/0703244 v7 ( 2007 ) .
h.p. yuen , phys . lett . a _ 113 _ , 405 ( 1986 ) .
d. mayers , phys . rev . lett . _ 78 _ , 3414 ( 1997 ) .
h.k. lo and h.f. chau , phys . rev . lett . _ 78 _ , 3410 ( 1997 ) .
g.m. d'ariano , d. kretschmann , d. schlingemann , and r.f. werner , phys . rev . a _ 65 _ , 012310 ( 2001 ) .
h.p. yuen , in _ quantum communication , measurement and computing _ , ed . by j.h. shapiro and o. hirota , rinton press , 371 ( 2003 ) ; also quant - ph/0210206 ( 2002 ) .
h.p. yuen , quant - ph/0109055 ( 2001 ) .
m. ozawa , private communication , oct 2001 .
c.y. cheung , quant - ph/0508180 ( 2005 ) .
c.y. cheung , quant - ph/0601206 ( 2006 ) .
h.p. yuen , in _ quantum communication , measurement and computing _ , ed . by o. hirota , j.h. shapiro and m. sasaki , nict press , 249 ( 2007 ) ; also quant - ph/0702058 v3 ( 2007 ) .
h.p. yuen , quant - ph/0305144 ( 2003 ) .
g. brassard , c. crepeau , d. mayers and l. salvail , quant - ph/9712023 ( 1997 ) .
r.w. spekkens and t. rudolph , phys . rev . a _ 65 _ , 012310 ( 2001 ) .

in fig . 1 , let be a given angle .
construct a circle with as center and any length as radius . construct a line through intersecting diameter extended so that is equal to the radius .the angle is one third of angle .
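for completeness , the construction just described can be checked numerically . in the sketch below ( python with numpy ; the labels o , a , b , c , d are generic and need not coincide with those of fig . 1 , and the bisection search simply stands in for the physical sliding of the ruler ) , the angle found at the marked point comes out as one third of the given central angle to numerical precision .

```python
import numpy as np

def neusis_trisection_angle(theta):
    """archimedes' construction for a central angle theta = angle(a, o, b):
    o is the centre of a unit circle, a = (1, 0), b = (cos(theta), sin(theta)).
    a line through b meets the extended diameter at d (on the negative x-axis)
    and the circle again at c, with |dc| equal to the radius; the construction
    asserts that angle(b, d, a) = theta / 3."""
    b = np.array([np.cos(theta), np.sin(theta)])

    def inner_chord(t):
        # distance |dc| when d = (-t, 0); c is the nearer intersection of line d-b
        d = np.array([-t, 0.0])
        u = (b - d) / np.linalg.norm(b - d)                 # unit vector from d towards b
        p_half = u @ d                                      # half the linear coefficient
        return -p_half - np.sqrt(p_half**2 - (d @ d - 1.0)) # nearer root of the quadratic

    lo, hi = 1.0 + 1e-9, 3.0                                # bracket the sliding position
    for _ in range(80):                                     # bisection on |dc| - 1 = 0
        mid = 0.5 * (lo + hi)
        if inner_chord(mid) > 1.0:
            hi = mid
        else:
            lo = mid
    d = np.array([-0.5 * (lo + hi), 0.0])
    v1, v2 = b - d, np.array([1.0, 0.0]) - d                # rays d->b and d->a
    return np.arccos(v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2)))

theta = np.deg2rad(75.0)
print(np.rad2deg(neusis_trisection_angle(theta)))           # approximately 25.0
```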
|
the nature and scope of various impossibility proofs as they relate to real - world situations are discussed . in particular , it is shown in words without technical symbols how secure quantum bit commitment protocols may be obtained with testing that exploits the multiple possibilities of cheating entanglement formation .
|
we answer in the affirmative a conjecture posed by hegselmann and krause about the long - term behavior of opinions in a finite group of individuals , some of them attracted to the truth , the so - called _ truth seekers_. our contribution : under mild assumptions , the opinions of all truth seekers converge to the truth , despite being distracted by individuals not attracted to the truth , the _ignorants_. the underlying model for opinion dynamics is the _ bounded - confidence model _ : opinions , which are represented by real numbers in the unit interval , are influenced by the opinions of others by means of averaging , but only if those opinions are not too far away . this bounded - confidence model ( formal definitions below ) was first suggested by krause in 1997 . it received a considerable amount of attention in the artificial societies and social simulation community . the concept of truth seekers was introduced in 2006 by hegselmann and krause , along with a philosophical discussion about the scientific context with respect to the notion of truth . we leave the philosophical discussion aside here and focus on the resulting dynamical system , governed by difference equations that we find interesting in their own right . the opinions of _ truth seekers _ are not only attracted by the opinions of others ; they are additionally attracted by a constant number , the truth . the resulting opinion is a weighted average of the result of the original bounded - confidence dynamics and the truth . individuals not attracted by the truth in this sense are _ignorants_. in their paper , hegselmann and krause show that if all individuals are truth seekers , no matter how small the weight , then ( the opinions of ) all the individuals converge to consensus on the truth value . the question we answer in this paper arises when some of the individuals are ignorants , i.e. , when the weight of the influence of the truth is zero for them . numerous simulation experiments led hegselmann and krause to the conjecture that the opinions of all the truth seekers still finally end up at the truth . so far , however , a proof of this fact has not been found . evidence by simulation only bears the risk of numerical artefacts , all the more so in the non - continuous bounded - confidence model . therefore , it is desirable to provide mathematically rigorous proofs of structural properties of bounded - confidence dynamics . although the conjecture may seem self - evident at first glance because of the contraction property of the system dynamics for truth seekers , a second look at the situation reveals that the conjecture and its confirmation in this paper are far from trivial : several innocent - looking generalizations of the conjecture are actually false , as we will show below in the technical parts of the paper . relying on intuition only is dangerous . even in the affirmative cases , convergence turns out to be quite slow in general and far from monotone . the main difficulty is the following : the convergence of the truth seekers heavily depends on their long - term influence on the ignorants .
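before turning to the formal setup , a small simulation may make the dynamics concrete . the python sketch below ( numpy assumed ) uses equal weights within each confidence set and one constant truth weight for all truth seekers ; the model treated in this paper allows more general weights and time - dependent truth weights , so this is only an illustration , not the object of the proof .

```python
import numpy as np

def hk_truth_step(x, eps, alpha, h, seekers):
    """one synchronous bounded-confidence update with attraction to the truth h.
    x: opinions in [0, 1]; seekers: boolean mask marking the truth seekers."""
    x_new = np.empty_like(x)
    for i in range(len(x)):
        confidence_set = np.abs(x - x[i]) <= eps    # who agent i listens to (including itself)
        avg = x[confidence_set].mean()              # equal-weight average (a simplification)
        a = alpha if seekers[i] else 0.0            # ignorants have zero truth weight
        x_new[i] = a * h + (1.0 - a) * avg
    return x_new

rng = np.random.default_rng(0)
n = 50
x = rng.random(n)                  # initial opinions in the unit interval
seekers = rng.random(n) < 0.4      # roughly 40 percent truth seekers (arbitrary choice)
for t in range(300):
    x = hk_truth_step(x, eps=0.15, alpha=0.1, h=0.5, seekers=seekers)
print(np.abs(x[seekers] - 0.5).max())   # distance of the truth seekers from the truth
```

with a smaller confidence radius or fewer truth seekers , such runs already show long , non - monotone excursions of the truth seekers away from the truth , which is exactly what makes the proof delicate .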
depending on the configuration of ignorants and the parameters of the system , there are arbitrarily many iterations in which the truth seekers deviate from the truth .the crucial observation is that , during these iterations , the configuration of ignorants is somehow `` improved '' because the truth seekers attract them .after all , the proof is elementary but extremely technical .we introduce some structures like the _ confidence graph _ , that might prove useful also in other contexts .other structures we need are rather special , probably with limited use beyond this paper .it would , therefore , be desirable to find a more elegant proof , revealing the reason why the conjecture is true .for example : find a suitable lyapunov function .the examples we give as we go along in the proof , however , indicate that a certain amount of complexity has to be captured by the arguments because the line between true and false conjectures is extremely thin .suppose there is a set :=\{1,\dots , n\} ] at time for all ] .the opinion of an individual ] and a parameter we define the _ confidence set of value _ _ at time _ as \mid |x - x_j(t)|\le \varepsilon\}.\ ] ] as a shorthand we define for any ] ist the _ truth _ , * ] is a lower bound for the weight of the truth for truth seekers , * ] or for all is the actual weight of the truth for truth seeker at time step , * ] and for all is the weight of opinion in the view of agent , * ] ._ truth seekers _ are members of the set \mid\alpha_k(t)\ge \alpha\ , \forall t\in\mathbb{n}\} ] called the _hope interval _ which is crucial for our further investigations . to prove the main theorem we will show that the length of this hope interval converges to zero . in figure [ fig_def_borders ] , we have depicted a configuration to illustrate definition [ def_lower_upper ] and definition [ def_extreme_individuals ] . in particular , we have , , , and . individual is _ lost _ and not contained in the hope interval , because there is no path in from to .so we already know that the opinion of individual will not converge to the truth .( 6,3.0 ) ( -1,1 ) ( 1.7,1 ) ( 2.3,1 ) ( 3,1 ) ( 3.35,1 ) ( 3.7,1 ) ( 4,1 ) ( 4.5,1 ) ( 5,1 ) ( 5,1.4 ) ( 5.4,1 ) ( 5.8,1 ) ( -1.1,1.7) ( -1.1,2.1) ( 1.6,1.7) ( 1.6,2.1) ( 2.2,1.7) ( 2.2,2.1) ( 2.9,1.7) ( 2.9,2.1) ( 3.25,1.7) ( 3.25,2.1) ( 3.6,1.7) ( 3.6,2.1) ( 3.9,1.7) ( 3.9,2.1) ( 4.4,1.7) ( 4.4,2.1) ( 4.9,1.7) ( 4.9,2.1) ( 4.75,2.5) ( 5.3,1.7) ( 5.15,2.1) ( 5.7,1.7) ( 5.65,2.1) ( -1,0.5)(1,0)8 ( 0,0)(1,0)8(-1,0.4)(0,1)0.2 ( -1.2,0) ( -0.2,0) ( 0.8,0) ( 1.8,0) ( 4.25,-0.2)(0,1)1.5 ( 4.15,1.7) ( 4.13,2.1) ( 2.8,0) ( 3.8,0) ( 4.8,0) ( 5.8,0) in the configuration depicted in figure [ fig_def_borders_2 ] we have , , , and . ( 6,3.0 ) ( -1,1 ) ( 1.7,1 ) ( 4.5,1 ) ( 5.4,1 ) ( 5.8,1 ) ( -1.1,1.7) ( -1.1,2.1) ( 1.6,1.7) ( 1.6,2.1) ( 4.4,1.7) ( 4.4,2.1) ( 5.3,1.7) ( 5.31,2.1) ( 5.7,1.7) ( 5.71,2.1) ( -1,0.5)(1,0)8 ( 0,0)(1,0)8(-1,0.4)(0,1)0.2 ( -1.2,0) ( -0.2,0) ( 0.8,0) ( 1.8,0) ( 4.25,-0.2)(0,1)1.5 ( 4.15,1.7) ( 4.13,2.1) ( 2.8,0) ( 3.8,0) ( 4.8,0) ( 5.8,0) note that the weights may be assymmetric .thus , the sequence of the opinions of the individuals may reorder during the time steps . as an example , consider , e.g. 
, three ignorants with starting positions , , and .the weights may be given as , , , , , , , , and .after one time step the new opinions are given by , , and .we remark that it is possible to achieve every ordering of the three opinions in one time step by choosing suitable weights in this example .nevertheless we have the following straight - forward lemma : [ lemma_interval_1 ] let be an ignorant , be an individual with smallest opinion and be an individual with largest opinion then we have ] and for we have ] for all .we remark that , by definition , does not contain a truth seeker .we set \backslash\mathcal{l}(t') ] is contained in the interval ] .for the evaluation of equation ( [ eq_update ] ) for elements of , , or we do not need to consider the opinion of individuals in \backslash(\mathcal{n}(t)\cup\mathcal{m}(t)\cup\mathcal{f}(t))$ ] .let be an element of with opinion , where .let us first assume that is an ignorant .due to individual we have for a truth seeker we similarly get now let be an element of with where . in any case ( being a truth seeker or an ignorant ) we have if then after at least time steps we have a good iteration . due to lemma [ lemma_middle ] we can assume .we can also assume since otherwise we have a good iteration in at most time steps .at first we claim .if at time there is a truth seeker then we have so the only truth seekers that have a chance to move into the set could be those of the set .so let truth seeker be in the set , with , where .( truth seekers where are ruled out by lemma [ lemma_interval_2 ] . )we have similarly , we can deduce .now we can assume that the individuals of , who are all ignorants , are in the hope interval at time , since otherwise we would have a good iteration after time step .so there exist individuals and with .we set , where and calculate for the other direction we have thus , , which results in a good iteration in three time steps .thus , we can conclude : after a finite number of steps we have and .due to lemma [ lemma_lonely ] there can not exist a general bound on the convergence that does not depend on .we consider the two side lengths and of the hope interval .clearly and are not increasing due to lemma [ lemma_decreasing ] .for we have [ lemma_epsilon_interval ] if then we have let us assume , without loss of generality , that .at first we consider the case .if is an ignorant with then we have for a truth seeker with we have similarly we obtain in both cases . next we consider the case and .let be an arbitrary truth seeker with opinion .we have thus , we have . 
if is an ignorant with , then we have for an arbitrary truth seeker we have thus , in all cases we have .this states that once the length of the hope interval becomes at most its lengths converges to zero .[ lemma_one_step ] let .if there exists an individual with , then we have .if there exists an individual with , then we have .due to symmetry it suffices to prove the first statement .let be an ignorant with , where .we have for a truth seeker with , we have for transparency we introduce the following six sets : \mid d(\hat{l}(t),i , t ) < \frac{\alpha\beta \ell_1(t)}{12}\right\},\\ \mathcal{n}_2(t):&=&\left\{i\in[n]\mid d(\hat{u}(t),i , t ) < \frac{\alpha\beta \ell_2(t)}{12}\right\},\\ \mathcal{m}_1(t):&=&\left\{i\in[n]\mid \frac{\alpha\beta \ell_1(t)}{12}\le d(\hat{l}(t),i , t)\le\varepsilon\right\},\\ \mathcal{m}_2(t):&=&\left\{i\in[n]\mid \frac{\alpha\beta \ell_2(t)}{12}\le d(\hat{u}(t),i , t)\le\varepsilon\right\},\\ \mathcal{f}_1(t):&=&\left\{i\in[n]\mid d(\hat{l}(t),i , t)>\varepsilon,\,x_i(t)\le h+\ell_2(t)\right\},\\ \mathcal{f}_2(t):&=&\left\{i\in[n]\mid d(\hat{u}(t),i , t)>\varepsilon,\,x_i(t)\ge h-\ell_1(t)\right\}.\\\end{aligned}\ ] ] with this the individuals of the hope interval are partitioned into [ lemma_two_step ] if for and there exists an ignorant and an individual with then . if , then it is easy to check that the influence of individual suffices to put ignorant in set . in this casewe can apply lemma [ lemma_one_step ] [ lemma_no_near_truth_seeker ] if and then for .due to symmetry it suffices to consider .so let be a truth seeker with .we set and calculate [ lemma_one_shrinks ] for we have for at least one .due to lemma [ lemma_no_near_truth_seeker ] we can assume . at time there must be a truthseeker . without loss of generality, we assume and . due to lemma [ lemma_one_step ]we can assume .now let be the ignorant with smallest opinion fulfilling .if then we can apply lemma [ lemma_two_step ] with and .otherwise we let be the ignorant with smallest opinion fulfilling .so we have and .thus , we can apply lemma [ lemma_two_step ] with and .[ lemma_greater_epsilon ] if and then we have or . due to lemma [ lemma_no_near_truth_seeker ], we can assume and , due to lemma [ lemma_one_step ] , we can assume . due to symmetry , we only consider the case .let the ignorant with largest opinion , meaning that is maximal .if there exists an individual with , then we can apply lemma [ lemma_two_step ] . if no such individual exists then we must have or .so only the first case remains .we set .let be an ignorant with , where . for time we get from lemma [ lemma_one_shrinks ] and lemma [ lemma_greater_epsilon ] we conclude : [ lemma_everything_small ] there exists a finite number so that we have for all .we would like to remark that , e.g. , suffices .[ lemma_final ] for each we have or for all truth seekers .without loss of generality , we assume and prove the statement by induction on .due to lemma [ lemma_one_step ] and lemma [ lemma_no_near_truth_seeker ] , we can assume for since otherwise we would have .thus , we have for all and . due to lemma [ lemma_one_step ] for ,the individuals in are not influenced by the individuals in since otherwise we would have .thus , we can apply lemma [ lemma_epsilon_interval ] for the individuals in . from the previous lemmas we can conclude theorem [ main_result ] and theorem [ thm_interupted_convergent ] .( proof of theorems [ main_result ] and [ thm_interupted_convergent ] . 
) after a finite time we are in a _nice _ situation as described in lemma [ lemma_everything_small ] .if we have then we have an ordinary convergence of the truth seekers being described in lemma [ lemma_epsilon_interval ] . otherwise we have for all truth seekers .due to lemma [ lemma_final ] and lemma [ lemma_epsilon_interval ] either we have for all truth seekers and all , or there exists an , such that we have 1 . for all , 2 . for all , for all .the latter case is -fold interrupted convergence .thus , the hegselmann - krause conjecture is proven .in this section we would like to generalize the hegselmann - krause conjecture and show up which requirements can not be weakened .infinitely many ignorants can clearly hinder a truth seeker in converging to the truth .if the confidence intervals are not symmetric then it is easy to design a situation where some ignorants are influencing a truth seeker which does not influence the ignorants , so that the truth seeker has no chance to converge to the truth .if we only require , then we have the following example : , , , , , , , , , and . by a straight forward calculationwe find that for .we remark that conditions like would also not force a convergence of the truth seekers in general .one might consider an example consisting of two ignorants with starting positions and a truth seeker with starting position .we may choose suitable and so that we have for all , for even and for odd .given , , , , we say that the truth seekers are -fold interrupted convergent , if for each there exists functions , , so that for each ( wasbocos ) with structural parameters , , and there exist , satisfying :\ , |x_k(t)-h|<\gamma\ ] ] for , where , and at first we remark that it clearly suffices to have for all only for all , where is a fix integer .we assume and consider the following example : , , , , , for the truth seekers , and for the ignorants until we say otherwise .let there be a given being sufficiently small .there exists a time until . up to this time no other individual has changed its opinion .after time we suitably choose so that we have .so at time the convergence of truth seeker is interrupted the first time .after that we may arrange it that and get an equal opinion and will never differ in there opinion in the future .now there exists a time until and we may apply our construction described above again .thus , every ignorant may cause an interruption of the convergence of the truth seekers .the hegselmann - krause conjecture might be generalized to opinions in instead of when we use a norm instead of in the definition of the update formula .using our approach to prove this -dimensional conjecture would become very technical , so new ideas and tools are needed .we give an even stronger conjecture : the -dimensional generalized hegselmann - krause conjecture holds and there exists a function so that the truth seekers in an arbitrary generalized ( wasbocos ) are -fold interrupted convergent in , , , and .r. hegselmann and u. krause , _ truth and cognitive division of labour : first steps towards a computer aided social epistemology _ , journal of artificial societies and social simulation * 9 * ( 2006 ) , no . 3 .
|
we give an elementary proof of a conjecture by hegselmann and krause in opinion dynamics , concerning a symmetric bounded confidence interval model : if there is a truth and all individuals take each other seriously by a positive amount bounded away from zero , then all truth seekers will converge to the truth . here truth seekers are the individuals which are attracted by the truth by a positive amount . in the absence of truth seekers it was already shown by hegselmann and krause that the opinions of the individuals converge .
|
direct observations of sunspot numbers over 400 years , as well as proxy data for much longer timescales , show that both the amplitude and the duration of the solar magnetic cycle vary from one cycle to the next .the importance of this phenomenon lies in the contribution of varying levels of solar activity to long - term climate change , and to short - term space weather .while there is now a concensus that the sun s magnetic field is generated by a hydromagnetic dynamo , the origin of fluctuations in the basic cycle is yet to be conclusively determined .several different mechanisms have been proposed , including nonlinear effects , stochastic forcing , and time - delay dynamics .a coupled , equally important , but ill - understood issue is how the memory of these fluctuations , whatever may be its origin , carries over from one cycle to another mediated via flux transport processes within the solar convection zone ( scz ) .a unified understanding of all these disparate processes lays the physical foundation for the predictability ( or lack - thereof ) of future solar activity .these considerations motivate the current study .the main flux transport processes in the scz involve magnetic buoyancy ( timescale on the order of months ) , meridional circulation , diffusion and downward flux - pumping ( timescales relatively larger ) .because magnetic buoyancy , i.e. , the buoyant rise of magnetic flux tubes , acts on timescales much shorter than the solar cycle timescale , the fluctuations that it produces are also short - lived in comparison .our focus here is on longer - term fluctuations , on the order of the solar cycle period , that may lead to predictive capabilities . through an analysis of observational data , have shown that the solar cycle amplitude and duration are correlated with the equatorward drift velocity of the sunspot belts during the cycle .they associate this drift velocity with the deep meridional counterflow that must exist to balance the poleward flows that are observed at the surface ( , ; ) .the results show a significant negative correlation between the drift velocity and the cycle duration , so that the drift is faster in shorter cycles , consistent with the interpretation of meridional circulation as the timekeeper of the solar cycle ( ; but see also ) .in addition identified positive correlations between the drift velocity of cycle and the amplitudes of both cycles and .while the two - cycle time lag was a new result , the positive correlation between circulation speed and amplitude of the same cycle is supported by several earlier studies . in their surface flux transport model , needed a varying meridional flow , faster in higher - amplitude cycles , to sustain regular reversals in the sun s polar field .they cited observational evidence from polar faculae counts , which peaked early for two of the stronger cycles , coinciding with poleward surges of magnetic flux .furthermore , observations show a statistically - significant negative correlation between peak sunspot number and the duration of cycles 1 to 22 ( figure 1c of ; see also ) .such a negative correlation between cycle amplitude and duration is also found in the models of and . 
taken with the inverse relation between cycle duration and circulation speed ,this is again suggestive of a positive correlation between circulation speed and cycle amplitude .meridional circulation plays an important role in a certain class of theoretical solar cycle models often referred to as `` flux - transport '' , `` advection - dominated , '' or `` circulation - dominated '' dynamo models ( see , e.g. , the review by nandy 2004 ) .such models have gained popularity in recent years owing to their success in reproducing various observed features of the solar cycle . in these models , a single - cell meridional circulation in each hemisphere ( which is observed at the solar surface ) is invoked to transport poloidal field , first poleward at near - surface layers and then down to the tachocline where toroidal field is generated .subsequently , the return flow in the circulation advects this toroidal field belt equatorward through a region at the base of the scz which is characterized by low diffusivity . from this deep toroidal field belt ,destabilized flux tubes rise to the surface due to magnetic buoyancy , producing sunspots .we may point out here that the name `` flux - transport dynamo '' is somewhat inappropriate to classify a circulation or advection - dominated dynamo ( where the diffusion timescale is much larger than the circulation timescale throughout the dynamo domain ) .our results indicate that diffusive flux - transport in the scz could play a dominant role in dynamos even when the cycle period is governed by meridional circulation speed , pointing out that flux - transport is a shared process .so , henceforth , by `` flux - transport '' dynamo , we imply a dynamo where the transport of magnetic field is shared by magnetic buoyancy , meridional circulation , and diffusion .flux - transport dynamos offer the possibility of prediction because of their inherent memory .this arises specifically when the dynamo source regions for poloidal field production ( the traditional -effect ) and toroidal field generation ( the -effect ) are spatially segregated .a brief discussion on important timescales ( we identify three significant ones ) in the dynamo process is merited here .the first is governed by the buoyant rise of toroidal flux tubes from the -effect layer to the -effect layer to generate the poloidal field ; since this is a fast process on the order of months , no significant memory is introduced here .the second involves the transport of poloidal field back into the -effect layer ( either by circulation or diffusion ) .this could be a slow process where significant memory is introduced which is dominated by the fastest of the competing processes ( advection versus diffusion ) .the third timescale relates to the slow equatorward transport of the toroidal field belt through the base of the scz , which sets the period of the sunspot cycle . in this class of dynamo models , with meridional circulation and low diffusivities in the tachocline ( at the base of the scz ) ,the third timescale is almost invariably determined by the circulation speed .it is the second timescale above , with competing effects of diffusive flux transport and advective flux transport , that becomes important in the context of the persistence of memory . 
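a rough order - of - magnitude comparison makes this competition concrete : poloidal flux reaches the tachocline either by riding the circulation conveyor belt , on a timescale of roughly the path length divided by the flow speed , or by diffusing across the depth of the convection zone , on a timescale of that depth squared divided by the turbulent diffusivity . the numbers in the python sketch below are illustrative assumptions , not parameters of any model discussed here ; they only show how the dominant transport channel flips as the diffusivity is raised .

```python
# rough timescales for poloidal flux transport from the surface source region to the tachocline
R_SUN = 6.96e10                      # cm
DEPTH = 0.3 * R_SUN                  # approximate depth of the convection zone, cm (assumed)
PATH = 3.14159 * R_SUN / 2.0         # surface-to-pole-to-base conveyor path, very roughly, cm (assumed)
V_FLOW = 200.0                       # effective meridional flow speed along that path, cm/s (assumed)
YEAR = 3.156e7                       # s

t_advection = PATH / V_FLOW
print(f"advective conveyor time : {t_advection / YEAR:6.1f} yr")

for eta in (1.0e11, 1.0e12):         # turbulent diffusivity in the convection zone, cm^2/s (assumed)
    t_diffusion = DEPTH**2 / eta
    regime = "advection - dominated" if t_diffusion > t_advection else "diffusion - dominated"
    print(f"eta = {eta:.0e} cm^2/s : diffusion time = {t_diffusion / YEAR:6.1f} yr -> {regime}")
```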
inthe advection - dominated , stochastically fluctuating model of , this second timescale ( governed by advection of poloidal field due to meridional circulation ) was about 17 years , so that the polar field at the end of cycle correlated strongest with the toroidal field of cycle rather than that of cycle .the length of memory of any particular flux - transport dynamo model is unfortunately dependent on the internal meridional flow profile , and on other chosen properties of the convection zone which are not yet well - determined observationally .a particular problem is the strength of diffusivity in the convection zone , which strongly affects the mode of operation of the dynamo . even if one assumes that these flux - transport dynamos capture enough of the realistic physics of the scz to make predictions of future solar activity , these predictions are critically dependent on the relative role of diffusion and advection in the scz . , in their highly _ advection - dominated _ model , show that bands of latitudinal field from three previous cycles remain `` lined up in the meridional circulation conveyor belt '' .they suggest that poloidal fields from cycles , , and combine to produce the toroidal field of cycle .based on an assumed proxy for the solar poloidal fields ( sunspot area ) , this leads them to predict that cycle 24 will be about stronger than cycle 23 . in stark contrast , , using a flux - transport dynamo model with _ diffusion - dominated _ scz , and using as inputs the observed strength of the solar dipole moment ( as a proxy for the poloidal field ) , predict that cycle 24 will be about _ weaker _ than cycle 23 . argue that the main contribution to the toroidal field of cycle , comes only from the polar field of cycle ( see also for further details of this model ) .the conflicting predictions from these two solar dynamo models presumably result from the difference in the memory ( i.e. , survival ) of past cycle fields in these models and could be to some extent influenced by the different inputs they use as proxies for the solar poloidal field .we also hypothesize that stronger diffusion in the model destroys polar field faster , and that flux transport by diffusion across the scz in this model short - circuits the meridional circulation conveyor belt , thereby shortening the memory of previous cycles .we perform a detailed analysis to test these ideas . to begin with, we consider a wider parameter space in the present paper , where we study the effect of varying meridional circulation speed and scz diffusivity on the amplitude and period of the solar cycle . in these simulations , we keep all other parameters the same , allowing a direct comparison between advection - dominated and diffusion - dominated scz regimes which has previously been clouded by other differences between models .then we introduce stochastic fluctuations in the model -effect to self - consistently generate cycle - amplitude variations as a completely theoretical construct towards studying cycle - to - cycle variations , in contrast to using diverse observed proxies for time - varying poloidal fields .subsequently , we perform a comparative analysis of the persistence of memory in this stochastically forced dynamo model in both the advective and diffusive flux - transport dominated regimes. 
therefore , in spirit , this paper deals with the underlying physics of solar cycle predictability , and is not concerned with making a prediction itself .the layout of this paper is as follows .the main features of the model are summarised in section [ sec : model ] , and the results of the parameter - space study are presented in section [ sec : results ] .these results are interpreted in section [ sec : regimes ] . in section [ sec : timedelays ] we analyze the persistence of memory in the advection versus diffusion dominated regimes .we conclude in section [ sec : conclusion ] with a discussion on the relevance of this work in the context of developing predictive capabilities for the solar activity cycle .we use the solar dynamo code _ surya _ , which has been studied extensively in different contexts ( e.g. , , ) , and is made available to the public on request .the major ingredients of the code include an analytic fit to the helioseismically - determined differential rotation profile , a single - cell meridional circulation in the scz , different diffusivities for the toroidal and poloidal fields , a buoyancy algorithm to model radial transport of magnetic flux , and a babcock - leighton ( bl ; , ) type -effect localized near the surface layer ( signifying the generation of poloidal field due to the evolution of tilted bipolar sunspot pairs under surface flux transport ) .the code solves the kinematic mean - field dynamo equations for an axisymmetric magnetic field , which may be expressed in spherical coordinates as where and ] , with a new value after each coherence time .although for our purposes this is essentially a device for changing the cycle properties from one cycle to the next , there is a strong physical basis for stochastic variations in which have been invoked in several previous studies .our model uses a babcock - leighton -effect where poloidal field is generated at the surface from the decay of tilted active regions .thus stochastic variations in the coefficient are natural , because it arises from the cumulative effect of a finite number of discrete flux emergence events ( active region eruptions with varying degrees of tilt ) . to compare the two regimes we consider two runs , both with .the circulation speed is kept constant throughout each run , and only the effect is varied .run 1 has , so is diffusion - dominated , while run 2 has and is advection - dominated .the coherence time is set to years in run 1 and years in run 2 , so as to keep the ratio of the former to the cycle duration roughly the same in each case .we note that although the exact value of the coherence time is not important for our study ( and is introduced just as a means to enable sufficient fluctuations ) , the timescale on the order of a year is chosen to reflect that the bl -effect is a result of surface flux transport processes ( diffusion , meridional circulation and differential rotation ) which can take up to a year to generate a net radial ( component of the poloidal ) field from multiple flux emergence events . in this sectionwe compare the peak surface radial flux for cycle with the peak toroidal flux for cycles , , , and . the toroidal flux is defined as before by integrating over the region to , to . the radial flux is found by integrating over the solar surface between to , ( i.e. , latitudes to ) .note that the peak toroidal flux precedes the peak surface radial flux for the same cycle , which has the same sign . 
the poloidal field then produces the toroidal field for cycle with the opposite sign .we measure the correlation of the surface radial flux for cycle with the toroidal flux of different cycles , comparing the absolute value of each total signed flux .both runs were computed for a total of 275 cycles with fluctuating , so as to produce meaningful statistics for each of the dynamo regimes .the results are illustrated in figures [ fig : cordiff ] and [ fig : coradv ] as scatter - plots of for different cycles against .the ( non - parametric ) spearman s rank correlation coefficient is given above each plot , along with its significance level .the correlation coefficients are summarised in table [ tab : correlations ] , where the pearson s linear correlation coefficient is also given for comparison .although the latter is less reliable , as it assumes a linear relation , it agrees well with in each case .the results show a clear difference between the two regimes .the advection - dominated regime shows significant correlations at all 4 time delays , apparently suggesting that the memory of past poloidal field survives for at least 3 cycles ; however , more on this later .the diffusion - dominated regime has a strong correlation only between and , suggesting that the dominant memory relates to just a one cycle time - lag , although very weak correlations are also found with and . in both regimesthe strongest relation is the positive correlation between and .this is to be expected as this is the more deterministic phase of the cycle where toroidal fields ( of cycle , say ) are inducted from the older cycle poloidal field via the relatively steady differential rotation .note however that the two fluxes do not have to be directly coupled , in that the two fluxes may be positively correlated because they are both created from the mid - latitude poloidal field of cycle ( generated by the -effect ) .the polar flux arises through poleward meridional transport of the cycle poloidal field , while the toroidal flux is generated from cycle poloidal field that is diffusively transported across the convection zone .this is particularly the case in the diffusion - dominated regime .nonetheless , even this indirect scenario suggests that the strongest correlation should in fact be between the cycle poloidal field and cycle toroidal field in this class of - dynamo models .the other phase of the cycle , in which the poloidal field is generated by the -effect , is inherently more random due to the fluctuating -effect in these runs .nevertheless , there is a strong positive correlation between and in the advection - dominated regime , while this correlation is largely absent in the diffusion - dominated regime .this we attribute to the relatively stronger role of advective flux transport in the advection - dominated regime which implies that a larger fraction of the original toroidal flux that has buoyantly erupted is transported to the polar regions by the circulation . 
in effecttherefore , the advection - dominated regime allows correlations to propagate _ in both phases of the cycle _ , whereas the diffusion - dominated case allows correlations to propagate only in the poloidal - to - toroidal phase .the other correlation is broken in the diffusion - dominated regime because the advection is short - circuited by direct diffusion , which transports flux downwards and equatorward where it is cancelled by oppositely signed flux from the other hemisphere .this explains how the correlations can survive for multiple cycles in the advection - dominated regime , but not in the diffusion - dominated regime .significant uncertainties remain in our understanding of the physics of the solar dynamo mechanism , implying that prediction of future solar activity based on physical models is a challenging task . herewe have demonstrated how a flux - transport dynamo model behaves differently in advection and diffusion dominated regimes .such differences , amongst others , have previously led to conflicting predictions of the amplitude of cycle 24 . use an advection - dominated model to predict a much stronger cycle than cycle 23 , whereas use a diffusion - dominated model to predict a much weaker cycle .the latter prediction is somewhat similar in spirit to the precursor methods , which use the polar field at cycle minimum to predict the amplitude of the following cycle .owing to the lack of observations of conditions inside the convection zone , opinions differ as to whether the real solar dynamo is weakly or strongly diffusive ( e.g. ; ) .we find that for low circulation speeds ( in the diffusion - dominated regime ) , the cycle amplitude is an increasing function of , as in the observations of .however , the amplitude curve has a turnover point and is a decreasing function of at higher ( in the advection - dominated regime ) , opposite to the observed correlation .when the diffusivity in the convection zone is increased , the location of this turnover moves to a higher .our extensive analysis shows that this turnover corresponds to the transition between the diffusion and advection dominated regimes . in the diffusion - dominated regime ,faster circulation means less time for decay of the poloidal field , leading to a higher cycle amplitude , whereas in the advection - dominated regime diffusive decay is less important and a faster circulation means less time to induct toroidal field , thus generating a lower cycle amplitude .if the observed statistics of the past cycles as reported by reflect a true underlying trend , then our results imply that the solar dynamo is in fact working in a regime which is dominated by diffusive flux transport in the main body of the scz ( although the cycle period is still governed by the slow meridional circulation counterflow at the base of the scz ) .this conclusion supports the analysis of . 
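before turning to the broader implications , two small python sketches may help make the procedure of this section concrete . the first illustrates the stochastic forcing described above , a piecewise - constant random factor multiplying the poloidal source amplitude that is redrawn after every coherence time ; the base amplitude , fluctuation level and coherence time below are placeholders rather than the actual inputs of runs 1 and 2 .

```python
import numpy as np

def fluctuating_alpha(t_end, dt, alpha0, level, coherence, seed=2):
    """piecewise-constant stochastic babcock-leighton source amplitude:
    alpha(t) = alpha0 * (1 + level * r), with r drawn uniformly from [-1, 1]
    and held fixed for 'coherence' years before being redrawn."""
    rng = np.random.default_rng(seed)
    n_steps = int(t_end / dt)
    alpha = np.empty(n_steps)
    r, next_draw = 0.0, 0.0
    for k in range(n_steps):
        t = k * dt
        if t >= next_draw:                 # redraw at the end of each coherence interval
            r = rng.uniform(-1.0, 1.0)
            next_draw += coherence
        alpha[k] = alpha0 * (1.0 + level * r)
    return alpha

# placeholder numbers: 275 cycles of roughly 11 yr, 1 yr coherence, 30 percent fluctuation level
alpha_t = fluctuating_alpha(t_end=275 * 11.0, dt=0.05, alpha0=1.0, level=0.3, coherence=1.0)
```

the second sketch shows the lagged correlation measurement itself , assuming numpy and scipy and two arrays of per - cycle peak fluxes indexed by cycle number ( how those peaks are extracted from the simulation output is left aside ) ; the toy data at the end are random numbers with a built - in one - cycle lag , included only to show the calling convention .

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

def lagged_correlations(polar_peak, toroidal_peak, max_lag=3):
    """correlate |peak polar flux| of cycle n with |peak toroidal flux| of cycle n + lag."""
    out = {}
    for lag in range(max_lag + 1):
        a = np.abs(polar_peak[: len(polar_peak) - lag])
        b = np.abs(toroidal_peak[lag:])
        r_s, p_s = spearmanr(a, b)
        r_p, p_p = pearsonr(a, b)
        out[lag] = {"spearman": (r_s, p_s), "pearson": (r_p, p_p)}
    return out

rng = np.random.default_rng(3)
polar = rng.random(275)
toroidal = np.empty(275)
toroidal[1:] = polar[:-1] + 0.3 * rng.random(274)   # toroidal of cycle n+1 follows polar of cycle n
toroidal[0] = rng.random()
for lag, stats in lagged_correlations(polar, toroidal).items():
    print(lag, stats["spearman"][0])                # the lag-1 coefficient comes out strongest
```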
through a correlation analysis in a stochastically forced version of our model , we have also explored the persistence of memory in the solar cycle for both the diffusion - dominated and advection - dominated regimes . it is this memory mechanism which is understood to lead to predictive capabilities in - dynamo models with spatially segregated source regions for the and effects . this understanding is based on the finite time delay required for flux transport to communicate between these different source regions . we find that the polar field of cycle correlates most strongly with the amplitude ( toroidal flux ) of cycle in both regimes . in the diffusion - dominated regime this is the only significant correlation , indicative of a one - cycle memory only . however , in the advection - dominated case , there are also significant correlations with the amplitude of cycles , , and . in contrast to the correlations that we infer , found that the strongest correlation in their advection - dominated model was with a two - cycle time lag . since such correlations lead to predictive capabilities , and obviously seem to be model and parameter dependent as suggested by our results , such a correlation analysis should be the first step towards any prediction , the latter being based on the former . in hindsight , however , both who use an advection - dominated model and inputs from multiple previous cycles , and who use a diffusion - dominated model and input from only the past cycle to predict the next cycle , appear to have made the correct choices within their modelling assumptions . note that the memory mechanism in our advection - dominated case appears to have a different cause than that implied by , who invoke the survival of multiple old - cycle polar fields feeding into a new - cycle toroidal field . all of the surviving correlations in our advection - dominated regime ( figure [ fig : coradv ] ) are positive ; they do not alternate in sign . this alternation in sign would be expected if bands of multiple older - cycle poloidal field were to survive in the tachocline : odd and even cycle poloidal fields would obviously contribute oppositely because of their alternating signs . in that case we would expect the absolute value of to correlate positively with and and so on , but negatively with and and so on , as evident in the results of ( their figure 9 , after accounting for the fact that they use signed magnetic fields ) . rather , in the advection - dominated regime of our model , the correlations appear to persist simply because fluctuations in field strength are passed on in both the poloidal - to - toroidal and toroidal - to - poloidal phases of the cycle , as evidenced by the correlation between the amplitude and polar flux of cycle .
in a recent analysis , find that the predictive skill of a surface flux transport model similar in spirit to the advection - dominated dynamo model of is contained in the input information of sunspot areas in the declining phase of the cycle . they argue that memory of multiple past cycles , in the form of surviving bands of poloidal field ( its surface manifestations in their case ) , need not be the only reason behind the predictive capability of the advection - dominated dynamo model of . our analysis of the advection - dominated regime supports this suggestion . coming back to the diffusion - dominated regime , our comparative analysis indicates that in this case the memory of past cycles is governed by downward diffusion of poloidal field into the tachocline , which primarily results in a one - cycle memory . the fact that diffusion is an efficient means for transporting flux is often ignored , especially in this era of advection - dominated models ; however , we find that diffusive flux transport is quite efficient . the identification of this one - cycle memory in our stochastically forced model contradicts claims that prediction is not possible in this regime . as long as the source regions are spatially segregated , and one of the source effects is observable and the other deterministic , flux - transport dynamos will inherently have predictive skill no matter what physical process ( i.e. , circulation , diffusion , or downward flux pumping ) is invoked to couple the two regions . we may also point out that , in the context of cycle - to - cycle correlations , downward flux pumping would have the same effect as diffusion in that it also acts to short - circuit the meridional circulation conveyor belt . so although downward flux pumping differs from diffusive transport , because in the latter case the fields may reduce in strength due to decay , the overall persistence of memory is expected to be similar if diffusive flux transport were replaced or complemented by downward flux pumping . in summary , our analysis has served both to explore the diffusion - dominated and advection - dominated regimes within the framework of a bl - type dynamo , and to demonstrate how the memory of the dynamo may be different in these two regimes . based on our analysis we assert that diffusive flux transport in the scz plays an important role in flux transport dynamics , even if the dynamo cycle period is governed by the meridional flow speed . in fact , the observed solar cycle amplitude - period dependence may arise more naturally in the diffusion - dominated regime , as discussed earlier . taken together , therefore , we may conclude that diffusive flux transport is a significant physical process in the dynamo mechanism and that this process leads primarily to a one - cycle memory , which may form the physical basis for solar cycle predictions , if the other physical mechanisms involved in the complete dynamo chain of events are well understood . separate , detailed examinations of these other related physical mechanisms will be performed in the future .

table [ tab : correlations ] . correlation coefficients between the peak surface radial ( polar ) flux of a cycle and the peak toroidal flux of the same and subsequent cycles , for the diffusion - dominated run ( run 1 ) and the advection - dominated run ( run 2 ) ; for each run the spearman rank and pearson linear coefficients are listed side by side .

\begin{tabular}{lrrrr}
 & \multicolumn{2}{c}{run 1 ( diffusion - dominated )} & \multicolumn{2}{c}{run 2 ( advection - dominated )} \\
 & 0.185 & _ 0.287 _ & 0.653 & _ 0.778 _ \\
 & 0.737 & _ 0.706 _ & 0.805 & _ 0.851 _ \\
 & -0.040 & _ 0.028 _ & 0.356 & _ 0.546 _ \\
 & 0.195 & _ 0.202 _ & 0.237 & _ 0.417 _ \\
 & 0.036 & _ 0.056 _ & 0.183 & _ 0.357 _ \\
 & 0.107 & _ 0.073 _ & 0.214 & _ 0.316 _ \\
\end{tabular}
|
the predictability , or lack thereof , of the solar cycle is governed by numerous separate physical processes that act in unison in the interior of the sun . magnetic flux transport and the finite time delay it introduces , specifically in the so - called babcock - leighton models of the solar cycle with spatially segregated source regions for the and -effects , play a crucial rule in this predictability . through dynamo simulations with such a model , we study the physical basis of solar cycle predictions by examining two contrasting regimes , one dominated by diffusive magnetic flux transport in the solar convection zone , the other dominated by advective flux transport by meridional circulation . our analysis shows that diffusion plays an important role in flux transport , even when the solar cycle period is governed by the meridional flow speed . we further examine the persistence of memory of past cycles in the advection and diffusion dominated regimes through stochastically forced dynamo simulations . we find that in the advection - dominated regime , this memory persists for up to three cycles , whereas in the diffusion - dominated regime , this memory persists for mainly one cycle . this indicates that solar cycle predictions based on these two different regimes would have to rely on fundamentally different inputs which may be the cause of conflicting predictions . our simulations also show that the observed solar cycle amplitude - period relationship arises more naturally in the diffusion dominated regime , thereby supporting those dynamo models in which diffusive flux transport plays a dominant role in the solar convection zone .
|
astronomy is facing the need for radical changes .when dealing with surveys of up to sources , one could apply for telescope time and obtain an optical spectrum for each one of them to identify the whole sample .nowadays , we have to deal with huge surveys ( e.g. , the sloan digital sky survey [ sdss ; ] , the two micron all sky survey [ 2mass ; ] , the massive compact halo object [ macho ; e.g. , ] survey ) , reaching ( and surpassing ) the 100 million objects . even at , say , 3,000 spectra at night , which is only feasible with the most efficient multi - object spectrographs and for relatively bright sources , such surveys would require more than 100 years to be completely identified , a time which is clearly much longer than the life span of the average astronomer ! but even taking a spectrum might not be enough to classify an object .we are in fact reaching fainter and fainter sources , routinely beyond the typical identification limits of the largest telescopes available ( approximately 25 magnitude for 2 - 4 hour exposures ) , which makes `` classical '' identification problematic .these very large surveys are also producing a huge amount of data : it would take about two months to download at 1 mbytes / s ( an extremely good rate for most astronomical institutions ) the data release 2 ( dr2 ; http://www.sdss.org/dr2/ ) sdss images , about two weeks for the catalogues .the images would fill up 1,000 dvds ( 500 if using dual - layer technology ) . and the final sdss will be about three times as large as the dr2 . these data , once downloaded , need also to be analysed , which requires tools which may not be available locally and , given the complexity of astronomical data , are different for different energy ranges .moreover , the breathtaking capabilities and ultra - high efficiency of new ground- and space - based observatories have led to a `` data explosion '' , with astronomers world - wide accumulating terabyte of data per night .for example , the european southern observatory ( eso)/space telescope european coordinating facility ( st - ecf ) archive is predicted to increase its size by two orders of magnitude in the next eight years or so , reaching terabytes .finally , one would like to be able to use all of these data , including multi - million - object catalogues , by putting this huge amount of information together in a coherent and relatively simple way , something which is impossible at present .all these hard , unescapable facts call for innovative solutions . for example, the observing efficiency can be increased by a clever pre - selection of the targets , which will require some `` data - mining '' to characterise the sources properties before hand , so that less time is `` wasted '' on sources which are not of the type under investigation .one can expand this concept even further and provide a `` statistical '' identification of astronomical sources by using all the available , multi - wavelength information without the need for a spectrum .the data - download problem can be solved by doing the analysis where the data reside . andfinally , easy and clever access to all astronomical data worldwide would certainly help in dealing with the data explosion and would allow astronomers to take advantage of it in the best of ways .the name of the solution is the virtual observatory ( vo ) .the vo is an innovative , evolving system , which will allow users to interrogate multiple data centres in a seamless and transparent way , to utilise at best astronomical data . 
within the vo , data analysis tools and models , appropriate to deal also with large data volumes ,will be made more accessible .new science will be enabled , by moving astronomy beyond `` classical '' identification with the characterisation of the properties of very faint sources by using all the available information .all this will require good communication , that is the adoption of common standards and protocols between data providers , tool users and developers .this is being defined now using new international standards for data access and mining protocols under the auspices of the recently formed international virtual observatory alliance ( ivoa : http://ivoa.net ) , a global collaboration of the world s astronomical communities .one could think that the vo will only be useful to astronomers who deal with colossal surveys , huge teams and terabytes of data !that is not the case , for the following reason .the world wide web is equivalent to having all the documents of the world inside one s computer , as they are all reachable with a click of a mouse .similarly , the vo will be like having all the astronomical data of the world inside one s desktop .that will clearly benefit not only professional astronomers but also anybody interested in having a closer look at astronomical data .consider the following example : imagine one wants to find _ all _ high - resolution spectra of a - type stars available in _ all _ astronomical archives in a given wavelength range .one also needs to know which ones are in raw or processed format , one wants to retrieve them and , if raw , one wants also to have access to the tools to reduce them on - the - fly . at present , this is extremely time consuming , if at all possible , and would require , even to simply find out what is available , the use a variety of search interfaces , all different from one another and located at different sites .the vo will make it possible very easily .the status of the vo in europe is very good . 
in addition to seven current national vo projects , the european fundedcollaborative astrophysical virtual observatory initiative ( avo : http://www.euro-vo.org ) is creating the foundations of a regional scale infrastructure by conducting a research and demonstration programme on the vo scientific requirements and necessary technologies .the avo has been jointly funded by the european commission ( under the fifth framework programme [ fp5 ] ) with six european organisations participating in a three year phase - a work programme .the partner organisations are eso in munich , the european space agency , astrogrid ( funded by pparc as part of the united kingdom s e - science programme ) , the cnrs - supported centre de donnees astronomiques de strasbourg ( cds ) and terapix astronomical data centre at the institut dastrophysique in paris , the university louis pasteur in strasbourg , and the jodrell bank observatory of the victoria university of manchester .the avo is the definition and study phase leading towards the euro - vo - the development and deployment of a fully fledged operational vo for the european astronomical research community .a science working group was also established two years ago to provide scientific advice to the project .the avo project is driven by its strategy of regular scientific demonstrations of vo technology , held on an annual basis in coordination with the ivoa .for this purpose progressively more complex avo demonstrators are being constructed .the current one , a downloadable java application , is an evolution of aladin , developed at cds , and has become a set of various software components , provided by avo and international partners , which allows relatively easy access to remote data sets , manipulation of image and catalogue data , and remote calculations in a fashion similar to remote computing .the avo held its second demonstration , avo 1st science , on january 27 - 28 , 2004 at eso .the demonstration was truly multi - wavelength , using heterogeneous and complex data covering the whole electromagnetic spectrum .these included : merlin , vla ( radio ) , iso [ spectra and images ] and 2mass ( infrared ) , usno , eso 2.2m / wfi and vlt / fors [ spectra ] , and hst / acs ( optical ) , xmm and chandra ( x - ray ) data and catalogues .two cases were dealt with : an extragalactic case on obscured quasars , centred around the great observatories origin deep survey ( goods ) public data , and a galactic scenario on the classification of young stellar objects .the extragalactic case was so successful that it turned into the first published science result fully enabled via end - to - end use of vo tools and systems , the discovery of high - power , supermassive black holes in the centres of apparently normal looking galaxies ( ( * ? ? ? 
* padovani 2004 ) ) .the avo prototype made it much easier to classify the sources we were interested in and to identify the previously known ones , as we could easily integrate all available information from images , spectra , and catalogues at once .this is proof that vo tools have evolved beyond the demonstration level to become respectable research tools , as the vo is already enabling astronomers to reach into new areas of parameter space with relatively little effort .i have used the avo prototype to tackle two problems of a - stars research , namely , establishing membership to an open cluster and assessing if chemically peculiar a - type stars are more likely to be x - ray emitters than normal a - type stars .cluster membership is vital to determine the distance , and therefore absolute magnitude , and age of a - type stars , as discussed by richard monier ( ) and stefano bagnulo ( ) at this conference . in short, open clusters play a crucial role in stellar astronomy because , as a consequence of the stars having a common age , they provide excellent natural laboratories to test theoretical stellar models .the avo prototype can be of help in determining if a star does belong to an open cluster , at various levels .i have chosen the pleiades as the target , since even extragalactic astronomers like me know about it ( although the value of its parallax is strongly debated : see , e.g. , ) !i will also use this example to describe the capabilities of the tool . a step - by - step guide which should allow anyone to reproducewhat i have done here with the avo prototype can be found at http://www.eso.org//vo.html .mas , consistent with the cluster value of mas , and the presence of many foreground and some background stars.,height=491 ] mas , consistent with the cluster value of mas.,title="fig:",height=370 ] mas , consistent with the cluster value of mas.,title="fig:",height=359 ] mas.,title="fig:",height=366 ] mas.,title="fig:",height=362 ] we start by loading a second palomar observatory sky survey ( poss ii ) image of the pleiades . since we are interested in cluster membership , we need distance information .the avo prototype has a direct link to vizier ( http://vizier.u-strasbg.fr/ ) , a service which provides access to the most complete library of published astronomical catalogues and data tables available on - line .we then search for all vizier catalogues which provide parallax information and find the hipparcos catalogue ( ) , which we load into the prototype .all hipparcos sources are automatically overlaid on the poss ii image .we can now very easily plot an histogram of the hipparcos parallaxes by using voplot , the graphical plug - in of the prototype .most stars in the image have parallaxes mas , consistent with the cluster value of mas ( ( * ? ? ?* robichon 1999 ) ) , but there are also many foreground and some background stars ( fig .[ fig : histo ] ) .we now want to plot a colour magnitude diagram but first we need to correct for reddening ( which is 0.04 for this cluster ) the observed colour given in the hipparcos catalogue .we then create a new column , , and then plot the observed vs. . the zero age mean sequence ( zams ) , flipped because we are using observed and not absolute magnitudes , is clearly visible ( fig .[ fig : zams ] , top ) . we can now have a very nice , visual match between zams and cluster membership . 
by selecting in voplot bright stars which are on the zams ( top of fig .[ fig : zams ] , bottom left corner ) , the corresponding sources are highlighted in the image .these are mostly in the centre of the cluster .by selecting them with the cursor one can see that most of them have parallaxes mas , as expected ( fig .[ fig : zams ] ) . on the other hand, if we now select in voplot the sources off the zams , in the poss ii image one can see that they are mostly at the edge of the field and , looking at their parallaxes , generally foreground sources , with a couple of background ones and only a few possible members ( fig .[ fig : nozams ] ) . at this pointone could do things properly , that is use the hipparcos data and the mean radial velocity of the cluster centre , together with eq .( 1 ) of , to add new columns with the relevant parameters to the hipparcos catalogue , and determine membership based on a statistical criterion . at present, this would be quite cumbersome , although still possible .very soon , however , one will be able to add such functionality to the vo as a `` web service '' .web services will promote growth of the vo in the way that web pages grew the world wide web .for example , a tool to determine cluster membership could be `` published '' to the vo as a service to which astronomers can send their input , in an appropriate format , and then receive the output , e.g. , a list of cluster members and non - members . using the avo prototype , one can also look for data available for selected sources at various mission archives .for example , in our case one could select some sources in the image and then select eso , the hubble space telescope ( hst ) , and the international ultraviolet explorer ( iue ) under `` missions in vizier '' .the pointings for these three missions for the sources under examination would then be overlaid in the image .selecting one of these pointings provides , in some cases ( e.g. , hst and iue ) , links to preview images , so that one can have a `` quick look '' at the archival data . by clicking on the hst `` dataset '' columnthe default web browser starts up and the dataset page at the multimission archive at stsci ( mast ) is made available . from thereone can have access to all papers published using those data .so from the avo prototype one is only two `` clicks '' away from the journal articles which have used the mast data of the astronomical sources in the image ! to get a flavour of the wealth of archival data available for a - type stars , i have also cross - correlated the sky2000 catalogue with the mast holdings using the service available at http://archive.stsci.edu/search/sky2000.html . out of a - type stars , it turns out that 754 have iue data , for a total of 10,000 observations , while 128 have non - iue data ( fos , ghrs , stis , fuse , euve , copernicus , hut , wuppe , and befs ; see the mast site at http://archive.stsci.edu/ for details on all of these missions ) , for a total of 1,700 observations , of which are spectra .how many more data are available in the other astronomical archives ?only the vo will allow us to answer that question in a relatively simple way .the issue of the x - ray emission of a - type stars is a long standing one ( see , e.g , ( * ? ? ?* simon , drake , & kim 1995 ) ) .while it is clear that these objects can be x - ray sources , it has been suggested that their x - ray emission is not associated with the star itself but might come from a binary companion . 
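The VizieR query and parallax-histogram steps walked through above can also be scripted outside the AVO prototype. The sketch below is a hypothetical Python analogue of that workflow, not part of the AVO tool chain: it assumes the astroquery and astropy packages, the VizieR Hipparcos table I/239/hip_main with its Plx, e_Plx, Vmag and B-V columns, network access to VizieR, and an illustrative parallax window (the precise cluster parallax is debated, as noted above).

```python
# Hypothetical scripted analogue of the VizieR + VOPlot steps described
# above (not part of the AVO prototype).  Requires astroquery, astropy
# and matplotlib, and network access to reach VizieR.
import numpy as np
import astropy.units as u
import matplotlib.pyplot as plt
from astropy.coordinates import SkyCoord
from astroquery.vizier import Vizier

# The Hipparcos main catalogue in VizieR is table I/239/hip_main;
# Plx and e_Plx are the trigonometric parallax and its error in mas.
vizier = Vizier(columns=["HIP", "Plx", "e_Plx", "Vmag", "B-V"], row_limit=-1)
pleiades = SkyCoord.from_name("Pleiades")            # resolved via Sesame
tables = vizier.query_region(pleiades, radius=3.0 * u.deg,
                             catalog="I/239/hip_main")
hip = tables[0]
if hasattr(hip["Plx"], "mask"):                      # drop rows without a parallax
    hip = hip[~hip["Plx"].mask]
plx = np.asarray(hip["Plx"], dtype=float)

# Parallax histogram: cluster members pile up near the cluster value,
# foreground stars sit at larger parallaxes, background stars at smaller ones.
plt.hist(plx, bins=40)
plt.xlabel("parallax [mas]")
plt.ylabel("number of stars")
plt.savefig("pleiades_parallax_histogram.png")

# Crude membership cut around an illustrative cluster parallax value.
cluster_plx, half_width = 8.0, 1.5                   # mas, illustrative only
members = hip[(plx > cluster_plx - half_width) & (plx < cluster_plx + half_width)]
print(f"{len(members)} of {len(hip)} stars pass the crude parallax cut")
```

A colour-magnitude diagram and a ZAMS comparison could be built in the same way from the Vmag and B-V columns, mirroring the VOPlot steps described above.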
analyzed _ chandra _ observations of the young open cluster ngc 2516 and detected only twelve a stars , out of 58 , while six out of eight of the chemically peculiar ( cp ) a - stars were detected ( a difference significant at the level ) .it has then been suggested , also on the basis of previous results ( e.g. , ) , that cp a - type stars are more easily detected in the x - rays than normal a stars , although the astrophysical implications of this result would not be straightforward .the avo prototype provides a relatively simple way to address directly and with sound statistics the following question : `` are cp a - type stars more likely to be x - ray emitters than normal a - type stars ? '' .i first started by loading from vizier the catalogue of cp stars , which include 6,684 sources , into the prototype .i then proceeded to select the 4,736 a - type stars by using the `` filter '' option .as comparison samples i used the henry draper ( hd ) and sao star catalogues , which contain 272,150 and 258,944 sources , out of which i selected the 72,154 and 47,230 a - stars respectively .i then loaded wgacat ( ( * ? ? ?* white , giommi , & angelini 1995 ) ) , a catalogue of all _ rosat _ position sensitive proportional counter ( pspc ) observations , covering approximately 10% of the sky and including about 92,000 serendipitous sources ( excluding the targets , which could bias the result ) .the results of the cross - correlations , done using the cross - match plug - in ( fig . [ fig : cross ] ) , are as follows ( all errors are and spurious sources have been subtracted off by doing the match with a shift of 1 degree in the coordinates ) : 1 .cp a - stars with wgacat : 74/4736 matches , i.e. , ; 2 . hd a - stars with wgacat : 368/72154 , i.e. , ; correcting statistically for cp stars contamination , assumed to be at the level ( e.g. , ( * ? ? ?* monin , fabrika , & valyavin 2002 ) ) , one gets ; 3 . sao a - stars with wgacat : 327/47230 , i.e. , ; correcting statistically for cp stars contamination , one gets .therefore , one can conclude that _cp a - stars are 3 ( sao ) to 4 ( hd ) times more likely to be x - ray sources than normal a - type stars _ with very high significance . selecting only the magnetic stars in ,which can be identified has having sr , cr , eu , si , he , ti , and ca classification peculiarities ( e.g. , ) , one finds 22/1433 matches , that is a detection rate , not significantly different from that of all cp a - stars .it then appears that the presence of a magnetic field does not play a role in triggering x - ray emission , a somewhat puzzling result which deserves further investigation .a related issue is that of radio emission .the strong fields present in the magnetic subclass of cp stars ( see above ) , in fact , should give rise to radio emission , for example via the gyrosynchrotron mechanism ( ) .i have then cross - correlated the a - type stars in the catalogue with two large - area radio catalogues , namely the nrao - vla sky survey ( nvss ; ) , which covers the sky north of down to mjy at 1.4 ghz , and the faint images of the radio sky at twenty - centimeters ( first ; ) , which covers of the sky , mostly in the north galactic cap , down to mjy 1.4 ghz .i found no matches .in retrospect , this is not surprising as the few cp stars detected so far have radio fluxes mjy ( e.g. , ) .the main conclusions are as follows : 1 . we need to change the way we do astronomy if we want to take advantage of the huge amount of data we are being flooded with . 
the way to do that is through the virtual observatory .2 . the virtual observatory will make the handling and analysis of astronomical data and tools located around the world much easier , enabling also new science .3 . everybody will benefit , including a - type star researchersvirtual observatory tools are available now to facilitate astronomical research and , as i have shown , can also be applied to a - stars .visit http://www.euro-vo.org/twiki/bin/view/avo/swgdownload to download the avo prototype .i encourage astronomers to download the prototype , test it , and use it for their own research . for any problems with the installation and any requests , questions , feedback , and comments you might have please contact the avo team at twiki-vo.org .( please note that this is still a prototype : although some components are pretty robust some others are not . )i would like to thank the organizers of the conference for their kind invitation , which has allowed an extragalactic astronomer like me to learn about stars !i am also grateful to stefano bagnulo for his help in preparing my talk and to mark allen for reading this paper .i have made extensive use of the cds vizier catalogue tool , simbad and the aladin sky atlas service .the astrophysical virtual observatory was selected for funding by the fifth framework programme of the european community for research , technological development and demonstration activities , under contract hpri - ct-2001 - 50030 .
|
the virtual observatory ( vo ) will revolutionise the way we do astronomy , by allowing easy access to all astronomical data and by making the handling and analysis of datasets at various locations across the globe much simpler and faster . i report here on the need for the vo and its status in europe , including the first ever vo - based astronomical paper , and then give two specific applications of vo tools to open problems of a - stars research .
|
in 1900 louis bachelier in his dissertation showed that the market prices behaves like a dust particles performing a one - dimensional brownian motion .nowadays , we are used to approximate the logarithmic price movements a wiener process .these movements can be described by system of equations where equal time , is the growing rate ( drift ) , is a volatility rate and represent the standard brownian motion . in the above process for the exact moment , the probability density function of finding the randomly moving particle at the point ( price logarithm ) is given as the gaussian function because the time evolution of the logarithm of stock price is described by a winer process , its current price is given by where is the stock price in the moment .assuming that we are dealing with effective market and is no possibility of arbitrage , and assuming that the parameters are fixed in this model , the necessary estimators of these parameters we can be found by econometric methods .the mean value of the stock price at the moment must equal and the riskless interest rate is constant in the period ] , because of so this call option want be exercise and hence to that its value will be .the option fair price at should be equal to the discounted mean value ( assume that the interest rate is constant $ ] in the term ) ^{+}= \text{e}^{-rt}\int _{ -\infty}^{\infty}[s_{0}\text{e}^{x}-k]^{+}f(x , t)\,dx \label{pierwsza}\ ] ] calculated of integral over the , we are getting the formula in this way , we have received the famous black - scholes formula on the value european call option , where the cumulative distribution function is the standard normal distribution .the ornstein - uhlenbeck process describe a particle behavior called by physicists the rayleigh particle . it can be described by the system of equations where and are some positive constant , whereas means the brownian motion .the transition probability density of the random variable in the time , which is a fundamental solution of the corresponding the fokker - planck equation , is given by the formula the parameter modifies the time scale in which we can examine the particle place .for very small values of we obtain the short time scale in which the ornstein - uhlenbeck process , which describe the velocity evolution of the rayleigh particle , approximate to the brownian motion is a variable which corresponds the velocity of the particle . ] .+ if we assume that the logarithmic price of the stock behaves similarly to the rayleigh particle and we also take effective market into consideration , so we can count the logarithmic distribution of an stock based on ornstein - uhlenbeck model .let us for . then, making calculations similar for the brown particle , we get and in result the probability density of the logarithm of stock price is let us see that , near , see appendix . while near function approach to gauss factorization with the same parameters , which corresponds the price equality which result from the market expectations , so on the base of the fundamental analyze . 
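The contrast between the two densities is easy to see in a direct simulation. The sketch below is an illustrative Euler-Maruyama experiment (the parameter values, and the scaling convention tying sigma and q to the variance, are my own assumptions rather than the paper's): the Wiener-Bachelier log-price variance grows linearly in time, while the mean-reverting Ornstein-Uhlenbeck log-price variance saturates, which is the behaviour behind the short-time Brownian limit and the long-time concentration around the fundamental value described above.

```python
# Illustrative comparison of arithmetic Brownian motion vs. an
# Ornstein-Uhlenbeck (mean-reverting) model for the log-price.
# Parameter values and the sigma/q scaling convention are assumptions
# chosen for illustration only.
import numpy as np

rng = np.random.default_rng(0)

n_paths, n_steps, T = 20_000, 500, 5.0
dt = T / n_steps
sigma = 0.3                     # volatility (illustrative)
q, x_star = 1.0, 0.0            # mean-reversion rate and long-run log-price
mu = 0.05                       # drift of the Wiener-Bachelier log-price

x_bm = np.zeros(n_paths)        # Wiener-Bachelier log-price
x_ou = np.zeros(n_paths)        # Ornstein-Uhlenbeck log-price

for _ in range(n_steps):
    dw = rng.normal(0.0, np.sqrt(dt), n_paths)
    x_bm += mu * dt + sigma * dw                     # dx = mu dt + sigma dW
    x_ou += q * (x_star - x_ou) * dt + sigma * dw    # dx = q(x* - x) dt + sigma dW

var_bm_theory = sigma**2 * T
var_ou_theory = sigma**2 * (1.0 - np.exp(-2.0 * q * T)) / (2.0 * q)
print(f"Var after T={T}:")
print(f"  Brownian motion    : {x_bm.var():.4f}   (theory {var_bm_theory:.4f})")
print(f"  Ornstein-Uhlenbeck : {x_ou.var():.4f}   (theory {var_ou_theory:.4f})")
```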
from herethe ornstein - uhlenback process would give the realistic situations of the market in the mezzo - scale , temporal , it means in the time where there is not enough to approximate by a wiener - bachelier process , but so small that in the result of the processes of economics and innovations , there has not been changed the market expectations , so like the asymptotic state of equality on the `` new '' one .in the quantum game theory ornstein - uhlenback process has the interpretation of non - unitary tactics , leading to a new strategy .we call tactics characterized by a constant inclination of an abstract market player ( rest of the world ) to risk and maximal entropy thermal tactics . is a hamilton operator that specifies the model , whereas are hermitian operators of supply and demand .these traders adopt such tactics that the resulting strategy form a ground state of the risk inclination operator .therefor , thermal tactics are represented by an operator annihilates the minimal risk strategy . ] where and .the variable describes arithmetic mean deviation of the logarithm of price from its expectation value .quantum strategies create unique opportunities for making profits during intervals shorter than the characteristic thresholds for the brownian particle . to describe the evolution of the market price in the quantum models we use the rayleigh particle , it is the non - unitary thermal tactics approaching equilibrium state ) , concentrated around the price logarithm foretell in the fundamental analysis with the market in balance remaining .meanwhile after the some time as a result of new information the market finds the equality in the other part of the price logarithm , so it shifts its ground state and as a effect its center wander as a brown s particle . ] .the variable describes the logarithmic transactional price . operator ( tactics ) transforms the strategy and , in the moment it has the form where tactics is called the ornstein - uhlenbeck process .adoption of the thermal tactics means that traders have in view minimization of the risk within the available information on the market .so we adopt such a normalization of the operator of the tactics so that the resulting strategy is its fixed point .conditions for the fixed point measures rate at in which the market is achieves the strategy , which is a fixed point of thermal tactics . ]these tactics allows to interpret it as probabilistic measure describe in the formula , see .let us move to the european call option pricing underlying on company s stock which is not paying dividend and based on changing prices which are not the brownian motion but the ornstein - uhlenbeck process modeled . 
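Before the modified density is inserted in the next step, the pricing recipe itself can be checked in code. The sketch below (illustrative parameters and my own variable names) evaluates the discounted expected payoff by numerical quadrature against the Gaussian log-price density of the Wiener-Bachelier case and compares it with the closed-form Black-Scholes price; substituting the Ornstein-Uhlenbeck-modified density for the Gaussian one is exactly the replacement made below.

```python
# Discounted-expected-payoff pricing check for a European call, with
# illustrative parameter values.  The payoff is integrated numerically
# against the Gaussian log-price density of the Wiener-Bachelier case and
# compared with the closed-form Black-Scholes price.
import numpy as np
from scipy.integrate import trapezoid
from scipy.stats import norm

S0, K, r, sigma, T = 100.0, 105.0, 0.03, 0.25, 1.0

# Risk-neutral log-price x = ln(S_T / S0) ~ N((r - sigma^2/2) T, sigma^2 T),
# so that E[S_T] = S0 * exp(rT), the no-arbitrage condition used in the text.
mean = (r - 0.5 * sigma**2) * T
std = sigma * np.sqrt(T)

x = np.linspace(mean - 10.0 * std, mean + 10.0 * std, 200_001)
payoff = np.maximum(S0 * np.exp(x) - K, 0.0)
density = norm.pdf(x, loc=mean, scale=std)
price_quadrature = np.exp(-r * T) * trapezoid(payoff * density, x)

# Closed-form Black-Scholes price for comparison.
d1 = (np.log(S0 / K) + (r + 0.5 * sigma**2) * T) / std
d2 = d1 - std
price_closed_form = S0 * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

print(f"quadrature : {price_quadrature:.6f}")
print(f"closed form: {price_closed_form:.6f}")
# Swapping `density` for the Ornstein-Uhlenbeck-modified density f_q(x, T)
# discussed in the text yields the corrected option price in the same way.
```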
to achieve the black - scholes formula , it is enough to count the integral for the modified density of the probability , assuring no arbitrage definite formula .the formula for the price of the option takes the form ^{+}= \text{e}^{-rqt}\int _ { -\infty}^{\infty}[s_{0}\text{e}^{x}-k]^{+}f_q(x , t)\,dx \label{endna}\ ] ] ^{+}\,\text{e}^{-\frac{(x - rqt + \sigma^2\text{e}^{-q t}\sinh q t)^2}{2\sigma ^{2}(1-\text{e}^{-2q t})}}\,dx\ ] ] the difference between the logarithmic price the european call option described by the ornstein - uhlenback process and logarithmic price call option described by the wiener - bachelier process is present in figure 1 .the corrections to the bachelier model should matter for mezzo - scale ., for , , .,width=340,height=226 ]we have proposed the alternative description of the time evolution of market price that is inspired by quantum mechanical motion of physical particles . quantum market gamesbroaden our horizons and quantum strategies create unique opportunities for making profits during intervals shorter than the characteristic thresholds for an effective market ( brownian motion ) . on such market pricescorrespond to rayleigh particles approaching equilibrium state .observations of the prices on the quantum market can result quantum zenno s effect , which should broaden the range of correctness at the market description in the mezzo - scale .sometimes it has side effects in the form of big jumps like crashes in the market expectations in relation to new asymptomatic equilibrium state .quantum arbitrage based on such phenomena seems to be feasible .the extra possibilities offered by quantum game strategies can lead to more successful outcomes than purely classical ones .this has far - reaching consequences for trading behavior and could lead to fascinating effects in quantum - designed financial markets .quantum market games suggest that such trading activity would take place on a `` quantum - board '' that contained the sets of all possible states of the trading game .however , to implement such a game would require dramatic advances in technology , see .but it is possible that some quantum effect are already being observed .let us quote the editor s note to complexity digest 2001.27(4 ) `` it might be that while observing the due ceremonial of everyday market transaction we are in fact observing capital flows resulting from quantum games eluding classical description .if human decisions can be traced to microscopic quantum events one would expect that `` nature '' would have taken advantage of quantum computation in evolving complex brains . in that senseone could indeed say that quantum computers are playing their market games according to quantum rules . ''using the theorem about moments , the density of probability is unambiguously definite by its cumulative moments .the moment generating function for the wiener - bachelier density ( [ fwb ] ) , for the exact time is equal the first cumulative moment is the second cumulative moment is the rest of the cumulative moments vanish .the first cumulative moment measures the mean value and the second one measures the risk . 
for the density of the ornstein - uhlenbeck given by formula ( [ osta ] )the generating function is the first cumulative moment equals the second cumulative moment equals whereas the others equal zero .as you can see the moment in both densities differ by a non linear time modification .the taylor series expansion of is given by \,.\ ] ] for cumulative moments of the ornstein - uhlenbeck and wiener - bachelier process are equal , so from the theorems of the moments we get .e. e. haven , _ a discussion on embedding the black - sholes option pricing model in a quantum physics setting _, physica a * 304 * ( 2002 ) 507 .j. masoliver and j. perello , _ option pricing and perfect hedging on correlated stock _ , physica a * 330 * ( 2003 ) 622 .a. zellinger , _ the quantum centennial _ , nature * 408 * ( 2000 ) 639 .e. w. piotrowski and j. sadkowski , _ quantum diffusion of prices and profits _, physica a * 345 * ( 2005 ) 185 .b. e. baaquie , _ quantum finance _ , cambridge ( 2004 ) .g. shafer , _ black - scholes pricing : stochastic and game - theoretic _ , rutgers business school ( 2002 ) .r. n. mantegna and h. e. stanley , _ an introduction to econophysics .correlations and complexity in finance quantum physics _ , cambrige uniwersity press ( 2000 ) .n. g. van kampen , _stochastic processes in physics and chemistry _ , elsevier , new york ( 1983 ) ., m. hazewinkel ( ed . ) , dordrecht , amsterdam ( 1997 ) .e. w. piotrowski and j. sadkowski , _ quantum games in finance _, quantitative finance * 4 * ( 2004 ) 61 .j. glimm and a. jaffe , _ quantum physics .a functional integral point of view _ , springer - verlag , new york ( 1981 ) .p. darbyshire , _ quantum physics meets classical finance _ , physics world * 25 * ( may 2005 ) .w. feller , _ an introduction to probability theory and its applications _, new york ( 1966 ) .
|
In this work we propose an option pricing model based on the Ornstein-Uhlenbeck process. It offers a new look at the Black-Scholes formula, grounded in quantum game theory. We show the differences between the classical approach, in which prices change according to a Wiener process, and pricing supported by a quantum model. PACS numbers: 02.50.-r, 02.50.Le, 03.67.-a, 03.65.Bz *Introduction* Option trading experienced gigantic growth with the creation of the Chicago Board Options Exchange in 1973, and since then there has been an ongoing demand for derivative instruments. This has induced financiers, mathematicians and physicists to research the price dynamics of financial instruments ever more widely and deeply. An analogy was discovered between the behavior of market prices and the motion of a dust particle modeled by a Wiener process; this observation revolutionized derivatives pricing through the development of a pioneering formula for valuing options on non-dividend-paying stock, whose creators were awarded a Nobel Prize in 1997. The Black-Scholes formula is the most widely deployed computational model. We propose an alternative description of the time evolution of market prices that corrects the Wiener-Bachelier model and follows from the Ornstein-Uhlenbeck process. This process was successfully used by Vasicek in 1977 for modeling the short-term interest rate. Modifications of this kind have already been proposed, e.g. in . In recent years a variant of game theory based on the quantum formalism has appeared; it qualitatively broadens the capabilities of the discipline by describing strategies that cannot be realized in classical models. Game theory describes conflict scenarios between a number of individuals or groups who try to maximize their own profit, or minimize the profits made by their opponents. However, by adopting quantum trading strategies players can apparently make more sophisticated decisions, which may lead to better profit opportunities. The success of quantum information theory (quantum algorithms, quantum cryptography) could make such futuristic-sounding quantum trading systems a reality; with the development of quantum computers it will be possible to model the market better. Quantum market models exist, and in this kind of market we can value derivative instruments. The pricing of derivative securities as a problem of quantum mechanics was already presented by Belal E. Baaquie, see . In the first section we quote the stochastic equation satisfied by the logarithm of the stock price and derive the distribution of these prices. In the second section we find the European option price under the Wiener-Bachelier model and recover the famous Black-Scholes formula. Next we give the analogous probability density as in the first section, but based on the Ornstein-Uhlenbeck process. In the fourth section we interpret the Ornstein-Uhlenbeck model in terms of quantum market game theory as a non-unitary thermal tactic. In the fifth section, with a little help from the quantum model, we obtain a model describing European option pricing.
|
modern graphics processors ( _ _ gpu__s ) have evolved into highly parallel and fully programmable architectures .current many - core gpus such as nvidia s gtx and tesla gpus can contain up to 240 processor cores on one chip and can have an astounding peak performance of up to 1 tflop .the upcoming fermi gpu recently announced by nvidia is expected to have more than 500 processor cores .however , gpus are known to be hard to program and current general purpose ( i.e. non - graphics ) gpu applications concentrate typically on problems that can be solved using fixed and/or regular data access patterns such as image processing , linear algebra , physics simulation , signal processing and scientific computing ( see e.g. ) .the design of efficient gpu methods for discrete and combinatorial problems with data dependent memory access patterns is still in its infancy .in fact , there is currently still a lively debate even on the best _ sorting _ method for gpus ( e.g. ) . until very recently ,the comparison - based thrust merge method by nadathur satish , mark harris and michael garland of nvidia corporation was considered the best sorting method for gpus .however , an upcoming paper by nikolaj leischner , vitaly osipov and peter sanders ( to appear in proc .ipdps 2010 ) presents a randomized sample sort method for gpus that significantly outperforms thrust merge .a disadvantage of the randomized sample sort method is that its performance can vary with the input data distribution because the data is partitioned into buckets that are created via _randomly _ selected data items . in this paper, we present and evaluate a _ deterministic _ sample sort algorithm for gpus , called gpu bucket sort , which has the same performance as the randomized sample sort method in . an experimental performance comparison on nvidias gtx 285 and tesla architectures shows that for uniform data distribution , the _ _ best case _ _ for randomized sample sort , our deterministic sample sort method is in fact _ exactly _ as fast as the randomized sample sort method of .however , in contrast to , the performance of gpu bucket sort remains the same for any input data distribution because buckets are created deterministically and bucket sizes are guaranteed .the remainder of this paper is organized as follows .section [ sec : review :- gpu - architectures ] reviews the _tesla _ architecture framework for gpus andthe cuda programming environment , and section [ sec : previous - work ] reviews previous work on gpu based sorting .section [ sec : deterministic - sample - sort ] presents gpu bucket sort and discusses some details of our cuda implementation . in section [ sec : experimental - results - and ] , we present an experimental performance comparison between our deterministic sample sort implementation , the randomized sample sort implementation in , and the thrust merge implementation in .in addition to the performance improvement discussed above , our deterministic sample sort implementation appears to be more memory efficient as well because gpu bucket sort is able to sort considerably larger data sets within the same memory limits of the gpus .as in and , we will focus on nvidia s unified graphics and computing platform for gpus known as the _ tesla _ architecture framework and associated _ cuda _ programming model . however , the discussion and methods presented in this paper apply in general to gpus that support the opencl standard which is very similar to cuda . 
a schematic diagram of the tesla unified gpu architecture is shown in figure [ fig : nvidia - tesla - architecture ] .a tesla gpu consists of an array of streaming processors called _ streaming multiprocessors _ _ _ ( sm__s ) .each sm contains eight processor cores and a small size ( 16 kb ) low latency local_ shared memory _ that is shared by its eight processor cores .all sms are connected to a _global _ dram _ memory _ through an interconnection network .for example , an nvidia geforce gtx 260 has 27 sms with a total of 216 processor cores while gtx 285 and tesla gpus have 30 sms with a total of 240 processor cores .a gtx 260 has approximately 900 mb global dram memory while gtx 285 and tesla gpus have up to 2 gb and 4 gb global dram memory , respectively ( see table [ tab : gpu - performance - characteristics ] ) .a gpu s global dram memory is arranged in independent memory partitions .the interconnection network routes the read / write memory requests from the processor cores to the respective global memory partitions , and the results back to the cores .each global memory partition has its own queue for memory requests and arbitrates among the incoming read / write requests , seeking to maximize dram transfer efficiency by grouping read / write accesses to neighboring memory locations .memory latency to global dram memory is optimized when parallel read / write operations can be grouped into a minimum number of arrays of contiguous memory locations .it is important to note that data accesses from processor cores to their sm s local shared memory are at least an order of magnitude faster than accesses to global memory .this is an important consideration for any efficient sorting method .another critical issue for the performance of cuda implementations is conditional branching .cuda programs typically execute very large numbers of threads .in fact , a large number of threads is critical for hiding latencies for global memory accesses .the gpu has a hardware thread scheduler that is built to manage tens of thousands and even millions of concurrent threads .all threads are divided into blocks of up to 512 threads , and each block is executed by an sm .an sm executes a thread block by breaking it into groups of 32 threads called _ warps _ and executing them in parallel using its eight cores .these eight cores share various hardware components , including the instruction decoder .therefore , the threads of a warp are executed in simt ( single instruction , multiple threads ) mode , which is a slightly more flexible version of the standard simd ( single instruction , multiple data ) mode .the main problem arises when the threads encounter a conditional branch such as an if - then - else statement .depending on their data , some threads may want to execute the code associated with the `` true '' condition and some threads may want to execute the code associated with the `` false '' condition . 
since the shared instruction decoder can only handle one branch at a time , different threads can not execute different branches concurrently .they have to be executed in sequence , leading to performance degradation .gpus provide a small improvement through an instruction cache at each sm that is shared by its eight cores .this allows for a `` small '' deviation between the instructions carried out by the different cores .for example , if an if - then - else statement is short enough so that both conditional branches fit into the instruction cache then both branches can be executed fully in parallel . however, a poorly designed algorithm with too many and/or large conditional branches can result in serial execution and very low performance .sorting algorithms for gpus started to appear a few years ago and have been highly competitive .early results include gputerasort based on bitonic merge , and adaptive bitonic sort based on a method by bilardi et.al .hybrid sort used a combination of bucket sort and merge sort , and d. cederman et.al . proposed a quick sort based method for gpus . both methods ( )suffer from load balancing problems . until very recently , the comparison - based thrust merge method by nadathur satish , mark harris and michael garland of nvidia corporation was considered the best sorting method for gpus .thrust merge uses a combination of odd - even merge and two - way merge , and overcomes the load balancing problems mentioned above .satish et.al . also presented an even faster gpu radix sort method for the special case of integer sorting . yet , an upcoming paper by nikolaj leischner , vitaly osipov and peter sanders ( to appear in proc .ipdps 2010 ) presents a randomized sample sort method for gpus that significantly outperforms thrust merge .however , as discussed in section [ sec : introduction ] , the performance of randomized sample sort can vary with the distribution of the input data because buckets are created through randomly selected data items .indeed , the performance analysis presented in measures the runtime of their randomized sample sort method for six different data distributions to document the performance variations observed for different input distributions .in this section we present gpu bucket sort , a _ deterministic _ sample sort algorithm for gpus , and discuss its cuda implementation .an outline of gpu bucket sort is shown in algorithm [ alg : deterministic - sample - sort ] below .it consists of a local sort ( step 1 ) , a selection of samples that define balanced buckets ( steps 3 - 5 ) , moving all data into those buckets ( steps 6 - 8 ) , and a final sort of each bucket .[ alg : deterministic - sample - sort ] gpu bucket sort ( deterministic sample sort for gpus ) _ input _ : an array with data items stored in global memory . _ output _ : array sorted . 1. split the array into sublists containing items each where is the shared memory size at each sm .local sort _ : sort each sublist ( =1 , ... , ) locally on one sm , using the sm s shared memory as a cache .3 . _ local sampling _ : select equidistant samples from each sorted sublist ( =1 , ... , ) for a total of samples . 4 ._ sorting all samples _ : sort all samples in global memory , using all available sms in parallel ._ global sampling _ : select equidistant samples from the sorted list of samples. we will refer to these samples as _global samples_. 6 ._ sample indexing _ : for each sorted sublist ( =1 , ... 
, ) determine the location of each of the global samples in .this operation is done for each locally on one sm , using the sm s shared memory , and will create for each a partitioning into buckets , ... , of size , ... , prefix sum _ : through a parallel prefix sum operation on , ... , , , ... , , ... , , ... , calculate for each bucket ( , , ) its starting location in the final sorted sequence ._ data relocation _ : move all buckets ( , 1 ) to location .the newly created array consists of sublists , ... , where for 1 ._ sublist sort _ : sort all sublists , 1 , using all sms .our discussion of algorithm [ alg : deterministic - sample - sort ] and its implementation will focus on gpu performance issues related to shared memory usage , coalesced global memory accesses , and avoidance of conditional branching .consider an input array with data items and a local shared memory size of data items . in steps 1 and 2 of algorithm [alg : deterministic - sample - sort ] , we split the array a into sublists of data items each and then locally sort each of those sublists . more precisely , we create 16 k thread blocks of 512 threads each , where each thread block sorts one sublist using one sm .each thread block first loads a sublist into the sm s local shared memory using a coalesced parallel read from global memory .note that , each of the 512 threads is responsible for data items .the thread block then sorts a sublist of data items in the sm s local shared memory .we tested different implementations for the local shared memory sort within an sm , including quicksort , bitonic sort , and adaptive bitonic sort . in our experiments ,bitonic sort was consistently the fastest method , despite the fact that it requires work .the reason is that , for step 2 of algorithm [ alg : deterministic - sample - sort ] , we always sort data items only , irrespective of .for such a small number of items the simplicity of bitonic sort , it s small constants in the running time , and it s perfect match for simd style parallelism outweigh the disadvantage of additional work . in step 3 of algorithm [ alg : deterministic - sample - sort ] , we select equidistant samples from each sorted sublist .the choice of value for is discussed in section [ sec : experimental - results - and ] .the implementation of step 3 is built directly into the final phase of step 2 when the sorted sublists are written back into global memory . in step 4, we are sorting all selected samples in global memory , using all available sms in parallel . here , we compared gpu bitonic sort , adaptive bitonic sort based on , and randomized sample sort .our experiments indicate that for up to 16 m data items , simple bitonic sort is still faster than even randomized sample sort due to its simplicity , small constants , and complete avoidance of conditional branching .hence , step 4 was implemented via bitonic sort . in step 5 ,we select equidistant _ global samples _ from the sorted list of samples . here , each thread block / sm loads the global samples into its local shared memory where they will remain for the next step . in step 6 , we determine for each sorted sublist ( =1 , ... , ) of data items the location of each of the global samples in . for each , this operation is done locally by one thread block on one sm , using the sm s shared memory , and will create for each a partitioning into buckets , ... , of size , ... , . 
here, we apply parallel binary search in for each of the global samples .more precisely , we first take the -th global sample element and use one thread to perform a binary search in , resulting in a location in .then we use two threads to perform two binary searches in parallel , one for the -th global sample element in the part of to the left of location , and one for the -th global sample element in the part of to the right of location .this process is iterated times until all global samples are located in . with this , each is split into buckets , ... , of size , ... ,note that , we do not simply perform all binary searches fully in parallel in order to avoid memory contention within the local shared memory .step 7 uses a prefix sum calculation to obtain for all buckets their starting location in the final sorted sequence .the operation is illustrated in figure [ fig : illustration - of - step-7 ] and can be implemented with coalesced memory accesses in global memory .each row in figure [ fig : illustration - of - step-7 ] shows the , ... , calculated for each sublist .the prefix sum is implemented via a parallel column sum ( using all sms ) , followed by a prefix sum on the columns sums ( on one sm in local shared memory ) , and a final update of the partial sums in each column ( using all sms ) . in step 8 ,the buckets are moved to their correct location in the final sorted sequence .this operation is perfectly suited for a gpu and requires one parallel coalesced data read followed by one parallel coalesced data write operation .the newly created array consists of sublists , ... , where each has at most data items . in step 9, we sort each using the same bitonic sort implementation as in step 4 .note that , since each is smaller than data items , simple bitonic sort is faster for each than even randomized sample sort due to bitonic sort s simplicity , small constants , and complete avoidance of conditional branching .[ fig : illustration - of - step-7],width=264 ]for our experimental evaluation , we executed algorithm [ alg : deterministic - sample - sort ] on three different gpus ( nvidia tesla , gtx 285 , and gtx 260 ) for various data sets of different sizes , and compared our results with those reported in and which are the current best gpu sorting methods .table [ tab : gpu - performance - characteristics ] shows some important performance characteristics of the three different gpus .the tesla and gtx 285 have more cores than the gtx 260 .the gtx 285 has the highest core clock rate and in summary the highest level of core computational power .the tesla has the largest memory but the gtx 285 has the best memory clock rate and memory bandwidth .in fact , even the gtx 260 has a higher clock rate and memory bandwidth than the tesla c1060 .figure [ fig : det-260 - 285-tesla - comp ] shows a comparison of the runtime of our gpu bucket sort method on the tesla c1060 , gtx 260 and gtx 285 ( with 2 gb memory ) for varying number of data items .each data point shows the average of 100 experiments .the observed variance was less than 1 ms for all data points since gpu bucket sort is deterministic and any fluctuation observed was due to noise on the gpu ( e.g. 
operating system related traffic ) .all three curves show a growth rate very close to linear which is encouraging for a problem that requires work .gpu bucket sort performs better on the gtx 285 than both tesla and gtx 260 .furthermore , it performs better on the gtx 260 than on the tesla c1060 .this indicates that gpu bucket sort is memory bandwidth bound which is expected for sorting methods since the sorting problem requires only very little computation but a large amount of data movement . for individual steps of gpu bucket sort , the order can sometimes be reversed .for example , we observed that step 2 of algorithm [ alg : deterministic - sample - sort ] ( local sort ) runs faster on the tesla c1060 than on the gtx 260 since this step is executed locally on each sm and its performance is largely determined by the number of sms and the performance of the sm s cores .however , the gtx 285 remained the fastest machine , even for all individual steps .we note that gpu bucket sort can sort up to data items within the 896 mb memory available on the gtx 260 ( see figure [ fig : det-260 - 285-tesla - comp ] ) . on the gtx 285 with 2gb memory and tesla c1060 our gpu bucket sort method can sort up to and data items , respectively ( see figures [ fig : det - rand - merge-285]&[fig : det - rand - merge - tesla ] ) .figure [ fig : det - steps ] shows in detail the time required for the individual steps of algorithm [ alg : deterministic - sample - sort ] when executed on a gtx 285 .we observe that _ sublist sort _ ( step 9 ) and _ local sort _ ( step 2 ) represent the largest portion of the total runtime of gpu bucket sort .this is very encouraging in that the `` overhead '' involved to manage the deterministic sampling and generate buckets of guaranteed size ( steps 3 - 7 ) is small .we also observe that the _ data relocation _ operation ( step 8) is very efficient and a good example of the gpu s great performance for data parallel access when memory accesses can be coalesced ( see section [ sec : review :- gpu - architectures ] ) .note that , for algorithm [ alg : deterministic - sample - sort ] the sample size is a free parameter . with increasing , the sizes of sublists created in step 8 of algorithm [ alg : deterministic - sample - sort ] decrease and the time for step 9 decreases as well .however , the time for steps 3 - 7 grows with increasing .this trade - off is illustrated in figure [ fig : runtime - sample - size ] which shows the total runtime for algorithm [ alg : deterministic - sample - sort ] as a function of for fixed .as shown in figure [ fig : runtime - sample - size ] , the total runtime is smallest for , which is the parameter chosen for our gpu bucket sort code .figures [ fig : det - rand - merge-285 ] and [ fig : det - rand - merge - tesla ] show a comparison between gpu bucket sort and the current best gpu sorting methods , randomized sample sort and thrust merge sort .figure [ fig : det - rand - merge-285 ] shows the runtimes for all three methods on a gtx 285 and figure [ fig : det - rand - merge - tesla ] shows the runtimes of all three methods on a tesla c1060 .note that , and did not report runtimes for the gtx 260 . for gpu bucketsort , all runtimes are the averages of 100 experiments , with less than 1 ms observed variance . for randomized samplesort and thrust merge sort , the runtimes shown are the ones reported in and . for thrustmerge sort , performance data is only available for up to data items . 
for larger values of , the current thrustmerge sort code shows memory errors . as reported in , the current randomized sample sort code can sort up to data items on a gtx 285 with 1 gb memory and up to data items on a tesla c1060 .our gpu bucket sort code appears to be more memory efficient .gpu bucket sort can sort up to data items on a gtx 285 with 2 gb memory and up to data items on a tesla c1060 .therefore , figures [ fig : det - rand - merge-285]a and [ fig : det - rand - merge - tesla]a show the performance comparison with higher resolution for up to and , respectively , while figures [ fig : det - rand - merge-285]b and [ fig : det - rand - merge - tesla]b show the performance comparison for the entire range up to and , respectively .we observe in figures [ fig : det - rand - merge-285]a and [ fig : det - rand - merge - tesla]a that , as reported in , randomized sample sort significantly outperforms thrust merge sort .most importantly , we observe that randomized sample sort and our deterministic sample sort ( gpu bucket sort ) show nearly identical performance on both , the gtx 285 and tesla c1060 .note that , the experiments in used a gtx 285 with 1 gb memory whereas we used a gtx 285 with 2 gb memory .as shown in table [ tab : gpu - performance - characteristics ] , the gtx 285 with 1 gb has a slightly better memory clock rate and memory bandwidth than the gtx 285 with 2 gb which implies that the performance of deterministic sample sort ( gpu bucket sort ) on a gtx 285 is actually a few percent better than the performance of randomized sample sort .the data sets used for the performance comparison in figures [ fig : det - rand - merge-285 ] and [ fig : det - rand - merge - tesla ] were uniformly distributed , random data items .the data distribution does not impact the performance of deterministic sample sort ( gpu bucket sort ) but has an impact on the performance of randomized sample sort .in fact , the uniform data distribution used for figures [ fig : det - rand - merge-285 ] and [ fig : det - rand - merge - tesla ] is a _best case _scenario for randomized sample sort where all bucket sizes are nearly identical .figures [ fig : det - rand - merge-285]b and [ fig : det - rand - merge - tesla]b show the performance of gpu bucket sort for up to and , respectively . for both architectures ,gtx 285 and tesla c1060 , we observe a very close to linear growth rate in the runtime of gpu bucket sort for the entire range of data sizes .this is very encouraging for a problem that requires work . 
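To make the data flow of algorithm [alg:deterministic-sample-sort], the role of the free parameter s, and the distribution-independence of the bucket sizes concrete, the following is a compact single-core reference sketch in Python/NumPy. It is my own illustration rather than the CUDA implementation timed in this section: it mirrors steps 1-9 sequentially, adopts the generic convention of s splitters producing s+1 output buckets (the paper's exact sample counts may be parameterized slightly differently), and can be used to validate a GPU implementation against np.sort.

```python
# CPU reference sketch of the deterministic sample sort data flow
# (steps 1-9 of algorithm [alg:deterministic-sample-sort]); my own
# single-core illustration, not the CUDA implementation timed above.
import numpy as np

def deterministic_sample_sort(a, sublist_size=2048, s=64):
    n = len(a)
    k = (n + sublist_size - 1) // sublist_size            # number of sublists
    sublists = [np.sort(a[i * sublist_size:(i + 1) * sublist_size])
                for i in range(k)]                        # steps 1-2: local sort

    # Step 3: s equidistant samples from each sorted sublist.
    local_samples = np.concatenate(
        [sl[np.linspace(0, len(sl) - 1, s).astype(int)] for sl in sublists])

    # Steps 4-5: sort all k*s samples, then pick s equidistant global samples
    # as splitters (generic convention: s splitters give s+1 buckets).
    all_samples = np.sort(local_samples)
    pick = np.linspace(0, len(all_samples) - 1, s + 2).astype(int)[1:-1]
    splitters = all_samples[pick]

    # Step 6: binary-search the splitters in every sorted sublist, which
    # partitions each sublist into s+1 buckets.
    cuts = [np.searchsorted(sl, splitters) for sl in sublists]
    bounds = [np.concatenate(([0], c, [len(sl)])) for sl, c in zip(sublists, cuts)]
    sizes = np.array([np.diff(b) for b in bounds])        # k x (s+1) bucket sizes

    # Step 7: exclusive prefix sum over bucket sizes (bucket j of every
    # sublist ends up contiguous in the output), as in the column-sum figure.
    offsets = np.concatenate(([0], np.cumsum(sizes.T.ravel())))[:-1].reshape(s + 1, k)

    # Step 8: relocate buckets to their final positions.
    out = np.empty_like(a)
    for j in range(s + 1):
        for i in range(k):
            lo, hi = bounds[i][j], bounds[i][j + 1]
            out[offsets[j, i]:offsets[j, i] + (hi - lo)] = sublists[i][lo:hi]

    # Step 9: sort each of the s+1 output sublists.
    bucket_totals = sizes.sum(axis=0)
    start = 0
    for total in bucket_totals:
        out[start:start + total].sort()
        start += int(total)
    return out, bucket_totals

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    for name, data in [("uniform  ", rng.random(1_000_000)),
                       ("lognormal", rng.lognormal(0.0, 2.0, 1_000_000))]:
        out, bucket_totals = deterministic_sample_sort(data)
        assert np.array_equal(out, np.sort(data))
        print(f"{name}: largest bucket {bucket_totals.max()}, "
              f"average {bucket_totals.mean():.0f}")
```

For continuous inputs such as the two above, the equidistant two-level sampling keeps every bucket close to the average size regardless of how skewed the value distribution is, which is the guarantee behind the distribution-independent running time discussed above; increasing s shrinks the final per-bucket sorts at the cost of more sampling and indexing work, the trade-off shown in figure [fig:runtime-sample-size].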
in comparison with randomized samplesort , the linear curves in figures [ fig : det - rand - merge-285]b and [ fig : det - rand - merge - tesla]b show that our gpu bucket sort method maintains a fixed _ sorting rate _ ( number of sorted data items per time unit ) for the entire range of data sizes , whereas it is shown in that the sorting rate for randomized sample sort fluctuates and often starts to decrease for larger values of .in this paper , we presented a _deterministic _ sample sort algorithm for gpus , called gpu bucket sort .our experimental evaluation indicates that gpu bucket sort is considerably faster than thrust merge , the best comparison - based sorting algorithm for gpus , and it is exactly as fast as the new randomized sample sort for gpus when the input data sets used are uniformly distributed , which is a _ best case _ scenario for randomized sample sort .however , as observed in , the performance of randomized sample sort fluctuates with the input data distribution whereas gpu bucket sort does not show such fluctuations .in fact , gpu bucket sort showed a fixed _ sorting rate _ ( number of sorted data items per time unit ) for the entire range of data sizes tested ( up to data items ) .in addition , our gpu bucket sort implementation appears to be more memory efficient because gpu bucket sort is able to sort considerably larger data sets within the same memory limits of the gpus .n. govindaraju , j. gray , r. kumar , and d. manocha . : high performance graphics co - processor sorting for large database management . in _ proc .international conference on management of data ( sigmod ) _ , pages 325 336 , 2006 .n. leischner , v. osipov , and p. sanders .sample sort . in _ proc . intl parallel and distributed processing symposium ( ipdps ) , to appear _ , 2010 ( currently available at http://arxiv1.library.cornell.edu/abs/0909.5649 ) .
|
we present and evaluate gpu bucket sort , a parallel _ deterministic _ sample sort algorithm for many - core gpus . our method is considerably faster than thrust merge ( satish et.al . , proc . ipdps 2009 ) , the best comparison - based sorting algorithm for gpus , and it is as fast as the new _ randomized _ sample sort for gpus by leischner et.al . ( to appear in proc . ipdps 2010 ) . our _ deterministic _ sample sort has the advantage that bucket sizes are guaranteed and therefore its running time does not have the input data dependent fluctuations that can occur for randomized sample sort .
|
[ [ historical - remarks ] ] historical remarks + + + + + + + + + + + + + + + + + + the incompressible case will be discussed first .the lagrangian averaged euler ( lae- ) equations for average incompressible ideal fluid motion first appeared in the context of averaged fluid models in .dissipation was added later to produce the lagrangian averaged navier - stokes ( lans- ) equations , also known as the navier - stokes- equations - dimensional version of the ch equations , also known as the epdiff equations , arise via euler - poincar reduction of geodesics on the group of all diffeomorphisms , and _ not _ the volume preserving ones ( see ) . ] . remarkably , the lae- equations are mathematically identical to the inviscid second grade fluid equations introduced in , except for the fact that the parameter is interpreted differently in the two theories . in the case of lae- and lans- ,the parameter is a spatial scale below which rapid fluctuations are smoothed by linear and nonlinear dispersion . as in , for example, the work of on nonlinear waves , the distinctive feature of the lagrangian averaging approach is that averaging is carried out at the level of the variational principle and not at the level of the euler or navier - stokes equations , which is the traditional averaging or filtering approach used for both the reynolds averaged navier - stokes ( rans ) and the large eddy simulation ( les ) models . as such, the variational procedure does not add any _ artificial _ viscosity , a physical reason to consider the lae- or lans- equations as good models for incompressible turbulent flow .moreover , it has been proven that the models are computationally very attractive ( see ) .although sharing the same general technique ( use of averaging and asymptotic methods in the variational formulation ) , several alternative derivations of incompressible lae- equations exist in the literature .one of these derivations ( see ) uses the generalized lagrangian mean ( glm ) theory developed in .an alternative derivation of the incompressible lae- and lans- equations was given in by using an ensemble average over the set of solutions of the euler equations with initial data in a phase - space ball of radius , while treating the dissipative term via stochastic variations .the derivation also uses a turbulence closure that is based on the lagrangian fluctuations , namely a generalization of the frozen turbulence hypothesis of taylor ( see ) . rigorous analysis aimed at proving global well - posedness and regularity of the three - dimensional isotropic and anisotropic lans- equationscan be found in , for example , .however , global existence for the inviscid three - dimensional lagrangian averaged euler ( lae- ) remains an open problem . from a computational viewpoint , numerical simulations of the models ( see ) show that the lans- equations give comparable computational savings as les models for forced and decaying turbulent flows in periodic domains . 
for wall - bounded flows , it is expected that either the anisotropic model or a model with varying needs to be used ; the computational efficacy of these methods on such flows remains to be demonstrated .as far as the compressible case is concerned , the only reference we know of is .we shall discuss the relation between the work in this reference and the present paper below .we refer the interested reader to for a more detailed history of the pde analysis for lae- and lans- equations and to for a survey and further references about the numerical aspects of these models .[ [ motivation ] ] motivation + + + + + + + + + + in compressible flows there are two major problems at higher wave numbers , or small scales , that require special attention . these are ( a ) turbulence for high reynolds number flows ( common with incompressible flows ) and ( b ) strong shocks . in both casesthe challenge lies in the appropriate representation of small scale effects . for turbulence, the energy cascade to smaller scales can be balanced by viscous dissipation , resulting in the viscous regularization of the euler equations .historically , viscous dissipation has been used to regularize shock discontinuities .this includes adding to the euler equation _ non - physical and artificial viscous terms _ and fourier s law for heat transfer in the shock region ( see e.g. , ) .this way , the steepening effect of the nonlinear convective term is balanced by dissipation .we believe that lagrangian averaging is a reasonable alternative way to regularize shock waves .the net effect of lagrangian averaging is to add dispersion instead of dissipation to the euler equations ; that is , one adds terms that redistribute energy in a nonlinear fashion . in other , rather different situations ,the technique of balancing a nonlinear convective term by dispersive mechanisms was used by for the kdv equation and by for plasma flows .the competition between nonlinearity and dispersion has of course resulted in remarkable discoveries , the most famous being solitons , localized waves that collide elastically , suffering only a shift in phase .the robustness of solitons in overcoming strong perturbations is largely due to a balance between nonlinearity and linear dispersion .note that in lagrangian averaging , the energy redistribution mechanism that is introduced is nonlinear and might yield other interesting features that warrant further investigation .another feature of the compressible lagrangian averaged navier - stokes- equations ( or clans- equations ) is that in turbulent flows with shocks , the effect of shocks and turbulence are simultaneously modeled by the same technique , namely the lagrangian averaging method .[ [ issues - addressed - in - this - paper ] ] issues addressed in this paper + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + in this paper we apply the averaged lagrangian methodology to derive the isotropic and anisotropic averaged models for compressible euler equations .one goal of this paper is to present a clear derivation of the averaged equations .we are particularly interested in separating the two issues of averaging and modeling . in the derivation, a new ensemble averaging technique is proposed and investigated .instead of taking clouds of initial conditions , as in , we average over a tube of trajectories centered around a given lagrangian flow . 
the tube is constructed by specifying the lagrangian fluctuations at and providing a _ flow rule _ which evolves them to all later times .the choice of flow rule is a precise modeling assumption which brings about closure of the system .for the incompressible case we assume that fluctuations are lie advected by the mean flow ( or frozen into the mean flow as divergence - free vector fields ) , and we obtain both the isotropic and the anisotropic versions of the lae- equations .the advection hypothesis is the natural extension to vector fields of the classical frozen turbulence hypothesis of g.i .taylor ( see ) stated for scalar fluctuations .the second goal of this work is to extend the derivation to barotropic compressible flows .this problem has already been considered by holm ( see ) in the context of generalized lagrangian mean ( glm ) motion . in this work, an alpha model appears as a glm fluid theory with an appropriate taylor hypothesis closure .however , even though enumerates several frozen - in closure hypotheses , the averaged equations are derived only for the case when the fluctuations are parallel transported by the mean flow . in our workwe will consider a more general advection hypothesis to study the compressible anisotropic case .in addition , a physically based new flow rule is introduced to deal with the isotropic case .the averaging technique consists of expanding the original lagrangian with respect to a perturbation parameter , truncating the expansion to terms , and then taking the average .it turns out that the averaged compressible lagrangian depends on the lagrangian fluctuations only through three tensor quantities which are quadratic in . in the terminology of tensors represent the second - order statistics of the lagrangian fluctuations .evolution equations for these tensors are derived from a core modeling assumption : a prescribed _ flow rule _ for the time - evolution of the fluctuations .the flow rule gives us closure , allowing us to apply hamilton s principle to the averaged lagrangian and thereby derive an equation for the mean velocity .the organization of the rest of the paper is as follows . in [ averaging - general ] we describe a general procedure for lagrangian ensemble averaging .this procedure is then applied to the action for incompressible fluids in [ averaging - incomp ] to demonstrate our derivation technique .the general procedure is applied again in [ averaging - euler ] , this time to the more complex case of barotropic compressible fluids . [ flow - rules ]is devoted to modeling issues ; here the strategy of modeling the evolution of lagrangian fluctuations using flow rules is discussed in detail . 
in [ averaged - pdes ]we derive the averaged equations for incompressible and compressible models in both isotropic and anisotropic versions .the appendix provides technical details about the fluctuation calculus used throughout the paper .[ [ main - results ] ] main results + + + + + + + + + + + + the main result of this paper is the derivation of compressible lagrangian - averaged euler equations with * anisotropic modeling of fluid fluctuations see equations ( [ eqn : avg_advection_pde_short ] ) .* isotropic modeling of fluid fluctuations see equations ( [ eqn : avg_comp_pde_isotropic ] ) .in addition , we provide an improved derivation of the _ incompressible _ isotropic and anisotropic lae- equations .a mathematical setting for a certain class of compressible fluid flow problems will first be given .after describing the general procedure for lagrangian averaging , the specific case of the euler action for fluids will be considered .let be an open subset of , representing the containing space of a fluid .suppose we are given a lagrangian for a compressible fluid , , where the space of diffeomorphisms of , and , the space of -forms on .fix a time interval ] into .then the action is we seek an averaged action , where is a length scale characterizing the coarseness of the average . taking and as given , we shall describe how to compute .[ [ remark ] ] remark + + + + + + it is important to emphasize that for both and , is merely a test curve .it is _ not _ an extremal of the action .we are trying to average the action itself , not any fluid dynamical pde or the solutions of such a pde .our final product should not depend at all on an initial choice of the test curve .[ [ tube - initialization ] ] tube initialization + + + + + + + + + + + + + + + + + + + the first step is to take to be a family of diffeomorphisms about the identity .that is , define the vector fields and via use to construct a tube of material deformation maps that are close to by letting , or , written more compactly , here , is a material point in the reference configuration .define the spatial velocity by , where is a given material deformation map .compactly written , this reads the map is a time - dependent vector field on , i.e. for each , and for all , .[ [ averaging ] ] averaging + + + + + + + + + the existence of an averaging operation will now be postulated .the properties this operation is required to satisfy and an example of such an operation will be given shortly .[ [ relationship - between - uepsilon - and - u ] ] relationship between and + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + it is desirable to have the fluctuations centered , on average , about the identity : .what is actually needed is that for , in other words , the -th order fluid fluctuation vector fields should all have mean zero . restricting the map to be centered aboutthe identity means simply that the average will not be skewed in an arbitrary direction . 
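the displayed formulas in the tube construction above appear incomplete; as a hedged reconstruction (with η the center curve, ξ^ε the fluctuation diffeomorphisms and ⟨·⟩ the averaging operation, all assumed to match the paper's notation), the objects just introduced are usually defined by

```latex
% hedged reconstruction of the tube construction (notation assumed)
\begin{align*}
  \xi^{\epsilon}_t &= \mathrm{id} + \epsilon\,\xi'_t + \tfrac{\epsilon^{2}}{2}\,\xi''_t + \mathcal{O}(\epsilon^{3}),
  & \eta^{\epsilon}_t &= \xi^{\epsilon}_t \circ \eta_t,\\
  u^{\epsilon}(t) &= \partial_t \eta^{\epsilon}_t \circ \bigl(\eta^{\epsilon}_t\bigr)^{-1},
  & \langle \xi'_t \rangle &= \langle \xi''_t \rangle = 0 .
\end{align*}
```

the last pair of identities is exactly the centering requirement discussed above.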
from ( [ eqn : u_mod ] ) and ( [ eqn : xi - centered - meaning ] ) one can derive equation ( [ eqn : u - centered ] ) shows in which sense the average of is in a lagrangian mean theory defined by .this equation is closely connected with the generalized lagrangian - mean description of , where the lagrangian mean velocity and the fluctuating eulerian velocity are related in a similar way .[ [ density ] ] density + + + + + + + for the non - averaged lagrangian , is a parameter in the sense of lagrangian semidirect product theory ; see .the physical interpretation of is as follows .since is an -form on , it can be written as where is a smooth function on .now is the density of the fluid at the material point in the reference configuration .this is in contrast to the spatial density , which gives us the density of the fluid at the spatial point at time . defining has the relationship [ [ fluctuation - calculus ] ] fluctuation calculus + + + + + + + + + + + + + + + + + + + + because and will be expanded , the -derivatives of and need to be calculated .first , define by differentiating ( [ eqn : u_mod ] ) , one finds expressions for and in terms of , , and .the calculations can be performed intrinsically using lie derivative formulae the results , as found in , are [ eqn : u_primes ] , \\\label{eqn : u_prime_prime } u '' & = \partial_t \xi '' + [ u , \xi '' ] - 2 \nabla u ' \cdot \xi ' - \nabla \nabla u ( \xi ' , \xi').\end{aligned}\ ] ] in these formulas , the bracket = \pounds_{x } y ] .assume that there is an averaging operation satisfying the following properties for , , ) ] by one checks formally that this is an example of an averaging operation that satisfies the desired properties .before applying the averaging technique to the case of compressible flow , we shall first derive averaged equations for incompressible flow , equations which have already been derived in the literature .the presentation given here has the advantage of being easily generalized to compressible flows .this advantage stems from the careful use and interpretation of modeling assumptions on the fluctuations only intuitive assumptions are required regarding the mean behavior of the fluctuations as well as a first - order taylor hypothesis . furthermore , great care has been taken to separate the algebraic issues involved with the averaging procedure from the modeling issues . in the incompressible case ,fluid fluctuations are modeled using the _ volume - preserving _ diffeomorphism group on which is denoted by .therefore , the tube construction from the previous section now reads : let be a family of volume - preserving diffeomorphisms about the identity .that is , this forces to be a divergence - free vector field for all . [[ averaged - lagrangian - for - incompressible - fluids ] ] averaged lagrangian for incompressible fluids + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + let us start with the standard lagrangian and expand in a taylor series about : substituting this expansion into ( [ eqn : unavg_l_incomp ] ) gives let be the truncation of to terms of order less than . using formulas ( [ eqn : u_primes ] ) , and can be rewritten in terms of , , and .we do this in order to write as a function only of , , and . making the substitutions and rewriting in coordinates , where the notation means . 
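of the two fluctuation-calculus formulas labelled ( [ eqn : u_primes ] ) above, only the one for u'' survived; for completeness, the pair is usually stated as below. this is a hedged reconstruction consistent with the surviving formula, with the bracket [ x , y ] = £_x y as defined in the text.

```latex
% hedged reconstruction of ([eqn:u_primes]); only the u'' line survived above
\begin{align*}
  u'  &= \partial_t \xi' + [\,u , \xi'\,],\\
  u'' &= \partial_t \xi'' + [\,u , \xi''\,] - 2\,\nabla u'\cdot\xi' - \nabla\nabla u\,(\xi',\xi') .
\end{align*}
```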
throughout this paper, there is an implied sum over repeated indices .the averaged lagrangian for incompressible flow is now simply .[ [ zero - mean - fluctuations ] ] zero - mean fluctuations + + + + + + + + + + + + + + + + + + + + + + before undertaking this computation , recall from [ averaging - general ] that the fluctuation diffeomorphism maps are required to have as their average the identity map .this statistical assumption regarding the behavior of the fluctuations is the first modeling assumption : this point would not be worth belaboring except that , when combined with the properties of our averaging operation ( [ eqn : axiom1]-[eqn : axiom4 ] ) , assumption ( [ eqn : centered_fluc_model ] ) forces _ all _ linear functions of , , and their derivatives to also have zero mean .applying this fact to ( [ eqn : uli_expan_coords ] ) causes the entire group and the second group ( i.e. the last line of ( [ eqn : uli_expan_coords ] ) ) to vanish inside the average .we continue analyzing ( [ eqn : uli_expan_coords ] ) : the only remaining terms are and the first group . within this group ,we integrate certain terms by parts and notice that all terms involving time - derivatives of group together : where is the material derivative : we then simplify the remaining non - time - derivative terms from ( [ eqn : uli_expan_coords ] ) , integrating by parts to remove second - order spatial derivatives . the final expression for the averaged incompressible lagrangian is \right\}dx.\ ] ] [ [ modeling - of - xi ] ] modeling of + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + immediate application of hamilton s principle to ( [ eqn : avg_incomp_aniso_l_expr ] ) does not yield a closed system of equations .namely , we have initial ( ) data for but no way to compute this vector field for .our approach in what follows will be to write down , based on physical considerations , an evolution law , or _flow rule _ , for .a flow rule consists of a prescribed choice of in the following evolution equation for : given a choice of at , this equation will uniquely determine for .let us assume we have a _ linear flow rule _, where is allowed to depend on and but not on or its derivatives .the caveat here is that our choice of must be compatible with incompressibility ; in particular , at , and must be chosen such that remains divergence free as it evolves . at this stage, one might raise the issue of the tube and request a concrete description of the whole object .such a description is unnecessary ; in order to close the system of evolution equations resulting from ( [ eqn : avg_incomp_aniso_l_expr ] ) , we need only describe the evolution of the first - order fluctuation field . now defining the _ lagrangian covariance tensor _ and using the linear flow rule ( [ eqn : flow_rule_linear ] ) , the lagrangian ( [ eqn : avg_incomp_aniso_l_expr ] )can be rewritten as \right\ } \ , dx.\ ] ] here we have used the fact that must be divergence - free . 
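for readability, here is a hedged rendering of the two objects just introduced, the generic flow rule for ξ' and the lagrangian covariance tensor; the original displays may differ slightly in notation.

```latex
% hedged rendering of the flow rule ([eqn:flow_rule_linear]-type) and of F
\begin{align*}
  \frac{D\xi'}{Dt} \;=\; \partial_t \xi' + (u\cdot\nabla)\,\xi' \;=\; \chi(\xi',u),
  \qquad
  F \;=\; \bigl\langle\, \xi'\otimes\xi' \,\bigr\rangle,
  \qquad
  F^{ij} \;=\; \langle\, \xi'^{\,i}\,\xi'^{\,j} \,\rangle .
\end{align*}
```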
[ [ advection - flow - rule ] ] advection flow rule + + + + + + + + + + + + + + + + + + + the first flow rule we shall consider results from setting : using the definition of the material derivative , it is trivial to see that this flow rule is equivalent to lie advection of : this advection hypothesis is the vector field analogue of the classical frozen turbulence hypothesis of g.i .taylor introduced in .this hypothesis is widely used in the turbulence community ( see for instance for usage of this hypothesis even in the sense of lie advection of vector fields ) .more recently , this generalized version of taylor hypothesis has been used to achieve turbulence closure in the derivation of incompressible lae- equations ( see ) or in the work of holm ( see ) on averaged compressible models using the generalized lagrangian mean ( glm ) theory . the advection flow rule ( [ incompfl1 ] ) is perhaps the most obvious choice for that is compatible with incompressibility .note that if at , then differentiating ( [ incompfl1 ] ) with respect to yields therefore , for all . using this flow rule , both anisotropic and isotropic models shall be developed .for incompressible flow , no other flow rules will be considered .[ [ incompressible - anisotropic - inhomogeneous - flow ] ] incompressible , anisotropic , inhomogeneous flow + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + in this case , the flow rule is used to derive an evolution equation for the covariance tensor .time - differentiating and using ( [ incompfl1 ] ) yields the lie advection equation .equipped with an evolution equation for , we can apply hamilton s principle to ( [ eqn : avg_incomp_l_expr_2 ] ) and derive a closed system with unknowns , the average velocity , and , the covariance tensor . carrying this out, one finds that the anisotropic lae- equations are given by the following coupled system of equations for and : [ eqn : lae_in_anis ] where is the fluid pressure , and the operator is defined by .\ ] ] when , the system ( [ eqn : lae_in_anis_1]-[eqn : lae_in_anis_2 ] ) reduces to the incompressible euler equation .[ [ note ] ] note + + + + start with the generic incompressible averaged lagrangian ( [ eqn : avg_incomp_l_expr_2 ] ) and substitute the advection flow rule ( [ incompfl1 ] ) .now integrate the last term by parts and use .the result is \right\}dx,\ ] ] which is exactly the lagrangian used in to derive the anisotropic lae- equations .however , in the second - order taylor hypothesis where the orthogonality is taken in , is necessary to achieve closure .our choice of modeling assumptions rendered unnecessary any such hypothesis on the second - order fluctuations .second - order taylor hypotheses , unlike the first - order hypothesis retained from , do not have much precedent in the turbulence literature , as discussed above .[ [ incompressible - isotropic - homogeneous - fluids ] ] incompressible , isotropic , homogeneous fluids + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + to model the motion of an approximately isotropic fluid , we take the covariance tensor to be the identity matrix , i.e. the choice of is a modeling assumption , and will thus only be valid for flows which almost preserve this property .note that ( [ eqn : isotropy ] ) is strictly inconsistent with the advection flow rule , and thus can only be regarded as an approximation . 
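the displays in the advection-flow-rule paragraph above appear incomplete; a hedged reconstruction consistent with the surrounding discussion is

```latex
% hedged reconstruction: advection flow rule and the induced evolution of F
\begin{align*}
  \chi(\xi',u) \;=\; \nabla u\cdot\xi'
  \quad\Longleftrightarrow\quad
  \partial_t \xi' + [\,u,\xi'\,] \;=\; 0
  \qquad &\text{(lie advection of $\xi'$)},\\
  \partial_t F + (u\cdot\nabla)\,F \;=\; \nabla u\cdot F + F\cdot(\nabla u)^{T}
  \qquad &\text{(lie advection of $F$)} .
\end{align*}
```

signs in the bracket depend on the convention chosen for the lie algebra of the diffeomorphism group, so the reconstruction should be read up to that choice.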
for the case ofincompressible isotropic mean flow , we assume that ( [ eqn : isotropy ] ) holds ; then differentiating this equation with respect to and and using the fact that is divergence - free , we have hence and the lagrangian ( [ eqn : avg_incomp_aniso_l_expr ] ) simplifies to we emphasize that this is only an approximation , so that along fluid trajectories for which the covariance tensor is approximately the identity . now using the flow rule given by ( [ incompfl1 ] ), the averaged lagrangian from ( [ eqn : avg_incomp_iso_l_expr ] ) becomes where we have used the isotropy assumption ( [ eqn : isotropy ] ) .hence , ( [ eqn : avg_incomp_iso_l_expr ] ) becomes this expression for the averaged lagrangian in the isotropic case is identical to the one derived in . now applying either hamilton s principle or euler - poincar theory, we obtain the standard isotropic lae- equations : [ eqn : lae_in_iso ] where is the usual fluid pressure .having understood the incompressible case , we now turn to the compressible case . the procedure is identical in all aspects except we must now keep track of density fluctuations .start with the reduced lagrangian for compressible flow : the fluid is assumed to be barotropic , meaning that , the potential energy , is a function only of , the fluid density .now expand the velocity and density in taylor series and also expand the potential energy : substituting these expansions into the reduced lagrangian gives \\ & \ , + \epsilon^2 \left [ \frac{1}{2 } \left ( ( \|u'\|^2 + u '' \cdot u ) - ( w''(\rho ) { \rho'}^2 + w'(\rho ) \rho '' ) \right ) \rho \right .\\ & \ , \phantom{+\epsilon^2\biggl [ } + \left .( u \cdot u ' - w'(\rho ) \rho ' ) \rho ' + \frac{1}{2 } \left ( \frac{1}{2 } \|u\|^2 - w(\rho ) \right ) \rho '' \right ] + \mathcal{o}(\epsilon^3 ) \ dx . \end{split}\ ] ] this expansion is now truncated , leaving out all terms of order and higher . denote the truncated lagrangian by , and define the averaged lagrangian by we now outline the procedure by which we arrive at a final written expression for the averaged lagrangian .the algebra is straightforward but tedious , so details will be omitted .1 . use equations ( [ eqn : u_primes ] ) and ( [ eqn : rho_primes ] ) to rewrite ( [ eqn : unavg_expan_l_expr ] ) in terms of only , , and the fluctuations , .2 . remove two kinds of terms that vanish inside the average : 1 .linear functions of or , 2 .linear functions of derivatives ( either spatial or temporal ) of or . + note : see `` zero - mean fluctuations '' in [ averaging - incomp ] for justification .3 . carry out the averaging operation .as in the incompressible case , the only quantities left inside the average should be nonlinear functions of .the end result for the averaged lagrangian for compressible flow is \right\ } dx . \end{split}\ ] ] we have introduced , the enthalpy satisfying , where is pressure , is called enthalpy .our definition of implies as required .] , defined by deriving the expressions ( [ eqn : avg_comp_l_expr ] ) and ( [ eqn : avg_incomp_aniso_l_expr ] ) for the averaged lagrangians , no assumptions were made regarding how the lagrangian fluctuations evolve . 
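the isotropic system referred to above as ( [ eqn : lae_in_iso ] ) is missing from the display; as a point of reference, the standard isotropic lae-α equations are usually written in the literature (e.g. in the marsden–shkoller line of work) in the form below, with p a (possibly modified) pressure. this is a hedged reconstruction, not a quotation of the paper.

```latex
% hedged statement of the standard isotropic lae-alpha system
\begin{align*}
  &\partial_t v + (u\cdot\nabla)\,v - \alpha^{2}\,(\nabla u)^{T}\!\cdot\Delta u \;=\; -\nabla p,\\
  &v \;=\; u - \alpha^{2}\Delta u, \qquad \operatorname{div} u = 0 .
\end{align*}
```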
in this sectionwe describe one possible strategy for modeling .note that such a strategy is necessary to achieve closure for the evolution equations associated with the lagrangians ( [ eqn : avg_comp_l_expr ] ) or ( [ eqn : avg_incomp_aniso_l_expr ] ) .[ [ preliminary - observation ] ] preliminary observation + + + + + + + + + + + + + + + + + + + + + + + assuming evolves via a linear flow rule , as in ( [ eqn : flow_rule_linear ] ) , the vector field appears in the averaged lagrangian ( [ eqn : avg_comp_l_expr ] ) only as part of the following three expressions : [ eqn : tensors ] note that is the same _ lagrangian covariance tensor _ from the incompressible derivation . in terms of these quantities ,the averaged compressible lagrangian is given in coordinates by \right\ } dx . \end{split}\ ] ] time - differentiating ( [ eqn : tensors_1]-[eqn : tensors_3 ] ) and using the linear flow rule ( [ eqn : flow_rule_linear ] ) results in evolution equations for , , and : [ eqn : flow_tensors ] [ [ flow - rules ] ] flow rules + + + + + + + + + + for compressible flows , two flow rules will be considered .we define them first , and then go on to consider their relative merits and demerits : 1 .advection : [ fl1 ] 2 .rotation : [ fl2 ] [ [ advection ] ] advection + + + + + + + + + for our anisotropic model , we shall advect and treat the quantities , , and as parameters in the final system , each of which will have its own evolution equation . substituting into the system ( [ eqn : flow_tensors ] ) gives [ eqn : flow_tensors_fl1 ] one advantage of the advection flow rule is that it automatically closes the system ( [ eqn : flow_tensors ] ) . for a general choice of , the system involves and , which can not be expressed solely in terms of , , and .[ [ rotation ] ] rotation + + + + + + + + for our isotropic model , we want to know whether the evolution equation ( [ eqn : df_dt_gen ] ) for preserves the isotropy relationship .suppose at .then substituting into ( [ eqn : df_dt_gen ] ) reveals that if is antisymmetric , we have , and solves ( [ eqn : df_dt_gen ] ) for all .we wish to know whether this solution is unique .this is guaranteed by a straightforward generalization of the results concerning linear hyperbolic systems of first - order equations from , assuming sufficient smoothness of .we conclude that antisymmetry of is sufficient to guarantee that the initial data is in fact preserved for all .then an immediate choice of a tensor that is antisymmetric is given by the rotation flow rule ( [ fl2 ] ) .this form has a very attractive physical interpretation .putting the linear flow rule equation ( [ eqn : flow_rule_linear ] ) together with ( [ fl2 ] ) gives us where is the vorticity vector .the last equation can be interpreted in the sense that fluctuations are rigidly transported by the mean flow , with a local angular velocity given by the vorticity vector .finally , the rotation flow rule ( [ fl2 ] ) does not by itself close the system ( [ eqn : flow_tensors ] ) .when using this flow rule , we shall assume that and .here we shall write down two systems of coupled pdes which describe the evolution of the average velocity and density in a compressible flow . 
each pde is derived from an associated averaged lagrangian .[ [ compressible - fully - anisotropic - inhomogeneous - fluids ] ] compressible , fully anisotropic , inhomogeneous fluids + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + by substituting ( [ fl1 ] ) into the lagrangian ( [ eqn : avg_comp_l_expr ] ) , we obtain closure : the lagrangian no longer depends explicitly on , but instead on the tensors , , and , for which a self - contained system of evolution equations ( [ eqn : flow_tensors_fl1 ] ) has already been derived see [ flow - rules ] for details . applying hamilton s principle directly to ( [ eqn : avg_comp_l_expr_2 ] ) yields an evolution equation for , the average fluid velocity .we write this equation using the operator , which is defined as we also write where means as usual .the anisotropic compressible lae- equations are : [ eqn : avg_advection_pde_short ] \biggr\}\end{gathered}\ ] ] [ [ well - posedness ] ] well - posedness + + + + + + + + + + + + + + we now sketch a rough well - posedness argument for the system ( [ eqn : avg_advection_pde_short ] ) .assume that the tensor is positive - definite . by thisit is meant , since is a tensor , that for any one - form , the contraction is positive everywhere .given the -weighted inner product , we have . since is a positive definite linear operator , has trivial kernel and we expect that ( [ eqn : avg_advection_pde_short ] ) is well - posed .it would be of analytical interest to see to what extent the `` geodesic part '' of these equations define a smooth spray in the sense of , and which holds for the epdiff equations ( that is , the -dimensional ch equations ) , as explained in . [ [ compressible - isotropic - inhomogeneous ] ] compressible , isotropic , inhomogeneous + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + for this case we use flow rule ( [ fl2 ] ) , which can be written in vector notation as recall that this flow rule is compatible with an isotropic choice of the covariance tensor , i.e. .we further assume that and for some constant .using flow rule ( [ fl2 ] ) along with these extra assumptions in the general lagrangian expression ( [ eqn : avg_comp_l_expr_2 ] ) gives us a lagrangian in only two variables : \right ) \ d^n \ !x,\end{gathered}\ ] ] where is the enthalpy introduced in ( [ eqn : enthalpy_def ] ) . regarding this as a lagrangian in and ,one uses the semidirect product euler - poincar equations ( see ) to derive the system [ eqn : avg_comp_pde_isotropic ] with the modified momentum and modified pressure given by here are explicit coordinate expressions for two slightly complicated objects : the following convention for divergences of tensors has been used : given a 2-tensor , we set that is , the contraction implicit in the divergence operation always takes place on the _ first _ index . [ [ observations ] ] observations + + + + + + + + + + + + * in the case of homogeneous incompressible flow , where is constant and , the definition of in ( [ eqn : v_def ] ) reduces to which after rescaling to get rid of the factor of is precisely the one finds in treatments of the incompressible lae- and lans- equations . * the above does not work in one spatial dimension .the problem is that here reduces to , which clearly does not describe transport at all . 
for a 1-d isotropic model one may very well want to forget about antisymmetry of and instead use something such as the advection flow rule .one may , quite reasonably , conclude that the only meaning of isotropy in 1-d should be reflection symmetry .[ [ the - initialization - problem ] ] the initialization problem + + + + + + + + + + + + + + + + + + + + + + + + + + perhaps the largest unsolved problem for the lagrangian averaged equations is the initialization problem .a concise statement of the problem reads : _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ given initial data for the euler equation , how does one obtain initial data for the lae- equation ? _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ let us look at this problem in slightly more detail .let denote the solution of the incompressible euler equations for initial data , i.e. .similarly , let denote the solution of the incompressible , isotropic lae- equations ( [ eqn : lae_in_iso ] ) for initial data .now should be , in some sense , the mean flow of the fluid .this means that should be the mean flow of the fluid at time , implying that should be , in some sense , an `` averaged '' or `` filtered '' version of .the question is : how does one derive from ?another way of phrasing this question is : how do we describe ( approximately ) the initial state of the fluid ( given exactly , for our purposes , by the field ) using only the mean flow variable ?numerous methods have been used to initialize the lae- equations for use in numerical simulations , but none of these methods has any theoretical foundation .there is also no theory regarding how one should filter a full euler flow , or even a family of flows , in order to obtain a mean flow that could be compared with the full lae- trajectory . in this respect ,equation ( [ eqn : u - centered ] ) , which states that is not helpful : we have no way to compute the fluctuation diffeomorphism group .therefore we have no way to compute the left - hand side .the difficulty can be summarized in the following commutative diagram . here is the standard euler action and is the lagrangian - averaged action .\ar[d ] & s^{\alpha } \ar^{\txt{\footnotesize derive pde , \\\footnotesize solve numerically}}[d ] \\u \ar@{-->}_{\txt{\footnotesize the missing link}}[r ] & u } \ ] ] solid arrows represent steps that we know how to carry out . the dashed arrow represents the one step that we do not know how to carry out . 
our strategy for this problem will be to develop methods by which we can test different filters for obtaining from in practice .[ [ treatment - of - densities ] ] treatment of densities + + + + + + + + + + + + + + + + + + + + + + another area for further investigation involves our treatment of the density tube .there are two questions to ground us : 1 .we have tacitly assumed that at , and for all , all , an argument similar to the one made above in our discussion of the initialization problem can be made here .namely , represents the mean density at time .meanwhile , represents the true density of the fluid at time .these two quantities need not be equal .this prompts the question : how would we carry out the procedure from sections [ averaging - general ] and [ averaging - euler ] with tubes in which each trajectory does _ not _ have the same initial density ?2 . as our derivation of the averaged compressible equations stand , we have derived the fact that the `` mean '' density was advected by the mean flow : . substituting and using the definition of divergence yields the standard continuity equation in both rans and les treatments of averaged/ filtered flow , the mean flow satisfies a _ modified _ continuity equation rather than the standard one . therefore : why does the lagrangian averaged mean density satisfy the usual continuity equation ?the two questions regarding densities are in fact related . to see this ,let us suppose that given the initial density associated with the center line of our tube , we have a method for constructing a family of initial densities for each of the other curves in the tube . now defining ] will find that satisfies a modified continuity equation to close this equation , we must either carry out the average directly , or we must expand about a suitable trajectory and make modeling assumptions .[ [ filtered - lagrangians ] ] filtered lagrangians + + + + + + + + + + + + + + + + + + + + we have seen that the current averaging procedure leads to complicated averaged equations .furthermore , there is no clear way to evaluate numerically the flow rules we have proposed on physical grounds .one of our immediate goals is to investigate a filtering approach , still at the level of the lagrangian , which will lead to simpler averaged models that can be tested numerically .the filtering approach we have in mind begins with a decomposition of the velocity field into mean and fluctuating components .this would replace the taylor expansion ( [ eqn : u_and_rho_expan ] ) of and that we carried out in the present work , and would therefore lead to lagrangians and equations with much less algebraic complexity . as opposed to the axiomatic averaging operation , the filter shall be specified concretely .we expect this to help greatly with the initialization and density problems discussed above ; furthermore , the filtering approach leads naturally to questions about the relationship between les and lae- models .[ [ simpler - models ] ] simpler models + + + + + + + + + + + + + + as we previously noted , the flow rule approach developed in this paper does not yield a one - dimensional compressible averaged model .we are currently investigating such a model , derived from the filtered lagrangian where . 
to derive this lagrangian, we filter only the velocity , leaving density and potential energy alone .this is the compressible analogue of the filtered lagrangian used in deriving the camassa - holm equation .the analysis and numerical simulation of the new equations presented in section [ averaged - pdes ] of this work will be difficult .much easier is the analysis of the pde associated with ( [ eqn : one_d_filter ] ) . in particular, we expect that numerical studies of this one - dimensional model will yield insight into the dynamics of the higher - dimensional equations .[ [ entropy ] ] entropy + + + + + + + in the derivation of our compressible averaged models , we have made the barotropic assumption .we expect the resulting barotropic model to be useful in computing mean flow quantities in regimes where we are not concerned with strong physical shocks , for example in climate models. the next major step forward will be to remove the barotropic assumption , and derive a model that is valid in regimes where we are concerned with shocks .to this end , we have derived an averaged model for the general case , where the potential energy has the form , where is the entropy .this model , which consists of a system of equations for , , and , also involves the pressure .therefore , in order to close the system , we require an equation of state relating to and .the open question now is as follows : given an equation of state for the compressible euler system , what is the equation of state relating the averaged variables to one another ?in other words , how does lagrangian averaging interact with the thermodynamics of the system ?we hope that analyzing a finite - dimensional case of this interaction will shed light on this issue . [ [ connections - with - kevrekidis - coarsefine - methods ] ] connections with kevrekidis coarse / fine methods + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + given a description of any mechanical system , not necessarily involving fluids , in the form of a lagrangian , we can carry out the procedure described in [ averaging - general ] to find an averaged lagrangian . from thiswe can derive equations of motion for the average dynamics of the original system .changing our language slightly , we say that we have a general method for extracting the `` coarse '' dynamics of a mechanical system whose full description involves motions on both fine and coarse scales .another method for computing the coarse - scale dynamics of a mechanical system has been put forth by in .kevrekidis method does not involve trying to write down equations of motion which govern the coarse dynamics .instead , he offers an algorithmic approach , the crux of which is as follows .the coarse dynamics of a system are found by _ lifting _ the initial ( ) state to an ensemble of initial states , _ integrating _ each using the full equations until some small final time has been reached , and _ projecting _ the resulting states onto a single state .this state is then _ extrapolated _ to a state at some desired . 
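in code, one such lift–integrate–restrict–extrapolate step can be sketched as below (python). all of the operator names — `lift`, `fine_step`, `restrict` — are placeholders for problem-specific routines rather than part of any particular library, and the two-point extrapolation is simply the most elementary choice.

```python
import numpy as np

def coarse_projective_step(U, lift, fine_step, restrict,
                           dt_fine, n_burst, dt_coarse, n_copies=8):
    """one schematic coarse step of an equation-free scheme.

    U         : current coarse-grained state (e.g. mean-flow variables)
    lift      : (U, n_copies) -> list of consistent fine-scale states
    fine_step : (state, dt_fine) -> state advanced by the full fine solver
    restrict  : fine-scale state -> coarse-grained state (projection)
    """
    # 1) lift: build an ensemble of fine-scale states consistent with U
    ensemble = lift(U, n_copies)

    # 2) integrate: run the fine-scale simulator for a short burst,
    #    recording the restricted (coarse) trajectory along the way
    coarse_traj = [np.mean([restrict(s) for s in ensemble], axis=0)]
    for _ in range(n_burst):
        ensemble = [fine_step(s, dt_fine) for s in ensemble]
        coarse_traj.append(np.mean([restrict(s) for s in ensemble], axis=0))

    # 3) estimate the coarse time derivative from the end of the burst
    dU_dt = (coarse_traj[-1] - coarse_traj[-2]) / dt_fine

    # 4) extrapolate: take one large projective step with that slope
    return coarse_traj[-1] + dt_coarse * dU_dt
```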
by iterating this process and tuning the lifting , projection , and extrapolation operations, this method can be used to recover the coarse dynamics of the system .now the question that begs to be asked is as follows : for the case of fluid dynamics , how different are the coarse dynamics provided by the lans- equation from the coarse dynamics one would obtain by following kevrekidis ?the difficulty in answering this question lies in implementing a full fine - scale integrator for fluids that one could successfully embed inside kevrekidis coarse - scale algorithm .we look forward to tackling this task soon .we extend our sincerest thanks to steve shkoller , darryl holm and marcel oliver for helpful discussions and criticism on a wide array of issues central to this paper .the research in this paper was partially supported by afosr contract f49620 - 02 - 1 - 0176 .harish s. bhat thanks the national science foundation for supporting him with a graduate research fellowship .before proceeding with any derivations , we state the lie derivative theorem for time - dependent vector fields : if the vector field has flow , then our task now is to derive equations ( [ eqn : rho_primes ] ) . starting with ( [ eqn : rho_mod ] ) ,let us move to the right - hand side of the equation : the strategy is to differentiate with respect to and use the lie derivative theorem ( [ eqn : lie_deriv_thm ] ) . the intrinsic definition of divergence and the canonical volume form will both be used in what follows . notethat with no subscript on the means . before applying the lie derivative theorem , note that the vector field has flow simple computation yields then we start with : next we compute : foias , c. , d. d. holm , and e. s. titi [ 2002 ] , the three dimensional viscous camassa - holm equations and their relation to the navier - stokes equations and turbulence theory , _ dyn . anddiff . eqns . _* 14 * , 136 .kevrekidis , i. g. , c. w. gear , j. m. hyman , p. g. kevrekidis , o. runborg , c. theodoropoulos [ 2002 ] , equation - free , coarse - grained multiscale computation : enabling microscopic simulators to perform system - level analysis , _ communications in the mathematical sciences _ * 1 * , 715762 .mohseni , k. , b. kosovi , s. shkoller , and j. e. marsden [ 2003 ] , numerical simulations of the lagrangian averaged navier - stokes equations for homogeneous isotropic turbulence , _ physics of fluids _ * 15 * , 524544 .
|
this paper extends the derivation of the lagrangian averaged euler (lae-α) equations to the case of barotropic compressible flows. the aim of lagrangian averaging is to regularize the compressible euler equations by adding dispersion instead of artificial viscosity. along the way, the derivation of the isotropic and anisotropic lae-α equations is simplified and clarified. the derivation in this paper involves averaging over a tube of trajectories centered around a given lagrangian flow. with this tube framework, the lae-α equations are derived by following a simple procedure: start with a given action, taylor expand in terms of the small-scale fluid fluctuations, truncate, average, and then model those terms that are nonlinear functions of the fluctuations. closure of the equations is provided through the use of _ flow rules _ , which prescribe the evolution of the fluctuations along the mean flow. keywords : averaged lagrangians, inviscid compressible fluids. msc : 37k99, 37n10, 76m30, 76nxx
|
sparse - group lasso ( sgl ) is a powerful regression technique in identifying and features simultaneously . to yield sparsity at both group and individual feature, sgl combines the lasso and group lasso penalties . in recent years, sgl has found great success in a wide range of applications , including but not limited to machine learning , signal processing , bioinformatics etc .many research efforts have been devoted to developing efficient solvers for sgl .however , when the feature dimension is extremely high , the complexity of the sgl regularizers imposes great computational challenges .therefore , there is an increasingly urgent need for nontraditional techniques to address the challenges posed by the massive volume of the data sources .recently , el ghaoui et al . proposed a promising feature reduction method , called _ safe screening _ , to screen out the so - called _ inactive _ features , which have zero coefficients in the solution , from the optimization .thus , the size of the data matrix needed for the training phase can be reduced , which may lead to substantial improvement in the efficiency of solving sparse models . by safe ,various exact and heuristic feature screening methods have been proposed for many sparse models such as lasso , group lasso , etc .it is worthwhile to mention that the discarded features by exact feature screening methods such as safe , dome and edpp are guaranteed to have zero coefficients in the solution .however , heuristic feature screening methods like strong rule may mistakenly discard features which have coefficients in the solution .more recently , the idea of exact feature screening has been extended to exact sample screening , which screens out the nonsupport vectors in svm and lad . as a promising data reduction tool, exact feature / sample screening would be of great practical importance because they can effectively reduce the data size without sacrificing the optimality .however , all of the existing feature / sample screening methods are only applicable for the sparse models with one sparsity - inducing regularizer . in this paper , we propose an exact two - layer feature screening method , called tlfre , for the sgl problem .the two - layer reduction is able to quickly identify the inactive groups and the inactive features , respectively , which are guaranteed to have zero coefficients in the solution . to the best of our knowledge, tlfre is the first screening method which is capable of dealing with multiple sparsity - inducing regularizers .we note that most of the existing exact feature screening methods involve an estimation of the dual optimal solution .the difficulty in developing screening methods for sparse models with multiple sparsity - inducing regularizers like sgl is that the dual feasible set is the sum of simple convex sets .thus , to determine the feasibility of a given point , we need to know if it is decomposable with respect to the summands , which is itself a nontrivial problem ( see section [ section : basics ] ) .one of our major contributions is that we derive an elegant decomposition method of any dual feasible solutions of sgl via the framework of fenchel s duality ( see section [ section : fenchel_dual_sgl ] ) .based on the fenchel s dual problem of sgl , we motivate tlfre by an in - depth exploration of its geometric properties and the optimality conditions in section [ section : screening_rules_sgl ] .we derive the set of the regularization parameter values corresponding to zero solutions . 
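to fix ideas, the optimization problem that all of the quantities below refer to has, up to the paper's exact weighting convention, the familiar sgl form shown here; the placement of λ and the per-group weights α√n_g are assumptions inferred from the kkt conditions quoted later, not a verbatim quotation of ( [ prob : sgl ] ).

```latex
% hedged statement of the sgl problem; weighting convention assumed
\[
\min_{\beta\in\mathbb{R}^{p}}\;
  \tfrac{1}{2}\,\|y - X\beta\|_{2}^{2}
  \;+\;\lambda\sum_{g=1}^{G}\Bigl(\alpha\sqrt{n_g}\,\|\beta_{g}\|_{2}+\|\beta_{g}\|_{1}\Bigr),
  \qquad \beta_{g}\in\mathbb{R}^{n_g}\ \text{the sub-vector of }\beta\text{ for group }g .
\]
```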
to develop tlfre, we need to estimate the upper bounds involving the dual optimal solution . to this end, we first give an accurate estimation of the dual optimal solution via the normal cones .then , we formulate the estimation of the upper bounds via nonconvex optimization problems .we show that these nonconvex problems admit closed form solutions .the rest of this paper is organized as follows . in section [ section : basics ] , we briefly review some basics of the sgl problem .we then derive the fenchel s dual of sgl with nice geometric properties under the elegant framework of fenchel s duality in section [ section : fenchel_dual_sgl ] . in section [ section :screening_rules_sgl ] , we develop the tlfre screening rule for sgl . to demonstrate the flexibility of the proposed framework , we extend tlfre to the nonnegative lasso problem in section [ section : nnlasso ] .experiments in section [ section : experiments ] on both synthetic and real data sets demonstrate that the speedup gained by the proposed screening rules in solving sgl and nonnegative lasso can be orders of magnitude .* notation * : let , and be the , and norms , respectively. denote by , , and the unit , , and norm balls in ( we omit the superscript if it is clear from the context ) .for a set , let be its interior .if is closed and convex , we define the projection operator as .we denote by the indicator function of , which is on and elsewhere .let be the class of proper closed convex functions on .for , let be its subdifferential .the domain of is the set . for ,let ] be primal and dual values defined , respectively , by the fenchel problems : one has . if , furthermore , and satisfy the condition , then the equality holds , i.e. , , and the supreme is attained in the dual problem if finite .we omit the proof of theorem [ thm : fenchel_duality ] since it is a slight modification of theorem 3.3.5 in .let , and be the second term in ( [ prob : sgl ] ) .then , sgl can be written as to derive the fenchel s dual problem of sgl , theorem [ thm : fenchel_duality ] implies that we need to find and .it is well - known that . therefore , we only need to find , where the concept _ infimal convolution _ is needed : [ def : inf_conv ] let .the infimal convolution of and is defined by and it is exact at a point if there exists a such that is exact if it is exact at every point of its domain , in which case it is denoted by . with the infimal convolution , we derive the conjugate function of in lemma [ lemma : conjugate_omega_sgl ] .[ lemma : conjugate_omega_sgl ] let , and .moreover , let , .then , the following hold : 1 . , 2 . where is the sub - vector of corresponding to the group .to prove lemma [ lemma : conjugate_omega_sgl ] , we first cite the following technical result .[ thm : infconv_sum ] let .suppose there is a point in at which is continuous .then , for all : . \end{aligned}\ ] ] we now give the proof of lemma [ lemma : conjugate_omega_sgl ] . the first part can be derived directly by the definition as follows : to show the second part , theorem [ thm : infconv_sum ] indicates that we only need to show is exact ( note that and are continuous everywhere ) .let us now compute . to solve the optimization problem in ( [ eqn : infconv_sgl ] ) , i.e. 
, we can consider the following problem we can see that the optimal solution of problem ( [ prob : projection_sgl ] ) must also be an optimal solution of problem ( [ prob : infconv_sgl ] ) .let be the optimal solution of ( [ prob : projection_sgl ] ) .we can see that is indeed the projection of on , which admits a closed form solution : =[\mathbf{p}_{\mathcal{b}_{\infty}}(\xi_g)]_i= \begin{cases } 1,\hspace{7.5mm}{\rm if}\hspace{1mm}[\xi_g]_i>1,\\ [ \xi_g]_i,\hspace{3mm}{\rm if}\hspace{1mm}|[\xi_g]_i|\leq1,\\ -1,\hspace{5mm}{\rm if}\hspace{1mm}[\xi_g]_i<-1 .\end{cases } \end{aligned}\ ] ] thus , problem ( [ prob : infconv_sgl ] ) can be solved as hence , the infimal convolution in eq .( [ eqn : infconv_sgl ] ) is exact and theorem [ thm : infconv_sum ] leads to which completes the proof .note that admits a closed form solution , i.e. , =\textup{sgn}\left([\xi_g]_i\right)\min\left(\left|[\xi_g]_i\right|,1\right) ] satisfy the inequality equality holds if and only if .we now give the proof of theorem [ thm : dual_sgl ] .we first show the first part . combining theorem [ thm : fenchel_duality ] and lemma [ lemma : conjugate_omega_sgl ], the fenchel s dual of sgl can be written as : which is equivalent to problem ( [ prob : dual_sgl_fenchel ] ) . to show the second half , we have the following inequalities by fenchel - young inequality : we sum the inequalities in ( [ ineqn : fy_f_sgl ] ) and ( [ ineqn : fy_omega_sgl ] ) together and get clearly , the left and right hand sides of inequality ( [ ineqn : weak_duality_sgl ] ) are the objective functions of the pair of fenchel s problems . because and , we have thus ,the equality in ( [ ineqn : weak_duality_sgl ] ) holds at and , i.e. , therefore , the equality holds in both ( [ ineqn : fy_f_sgl ] ) and ( [ ineqn : fy_omega_sgl ] ) at and . by applying lemma [ lemma : fy_inequality ] again , we have which completes the proof .( [ eqn : kkt1_sgl ] ) and eq .( [ eqn : kkt2_sgl ] ) are the so - called kkt conditions and can also be obtained by the lagrangian multiplier method see [ subsection : lagrangian_supplement ] in the supplement .[ remark : shrinkage_feasible_sgl ] we note that the shrinkage operator can also be expressed by therefore , problem ( [ prob : dual_sgl_fenchel ] ) can be written more compactly as * the equivalence between the dual formulations * for the sgl problem , its lagrangian dual in ( [ prob : dual_sgl_lagrangian ] ) and fenchel s dual in ( [ prob : dual_sgl_fenchel ] ) are indeed equivalent to each other .we bridge them together by the following lemma .[ lemma : infconv_sets ] let and be nonempty subsets of .then . in view of lemmas [ lemma : conjugate_omega_sgl ] and[ lemma : infconv_sets ] , and recall that , we have combining eq .( [ eqn : conjugate_omega2_sgl ] ) and theorem [ thm : fenchel_duality ] , we obtain the dual formulation of sgl in ( [ prob : dual_sgl_lagrangian ] ) .therefore , the dual formulations of sgl in ( [ prob : dual_sgl_lagrangian ] ) and ( [ prob : dual_sgl_fenchel ] ) are the same .an appealing advantage of the fenchel s dual in ( [ prob : dual_sgl_fenchel ] ) is that we have a natural decomposition of all points : with and . 
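the projection and shrinkage operators above, and the resulting feasibility test for the fenchel dual, are simple enough to state as code; the sketch below (python / numpy, unit radius) mirrors the closed-form expressions quoted in the text, and the group-wise feasibility check anticipates how the decomposition is used next. the α√n_g scaling in the check is an assumption inferred from the kkt conditions.

```python
import numpy as np

def proj_linf_ball(xi):
    """componentwise projection onto the unit l_infinity ball:
    [P(xi)]_i = sign(xi_i) * min(|xi_i|, 1)."""
    return np.clip(xi, -1.0, 1.0)

def shrink(xi):
    """shrinkage operator S(xi) = xi - P(xi); componentwise this is
    soft-thresholding at level 1: sign(xi_i) * max(|xi_i| - 1, 0)."""
    return xi - proj_linf_ball(xi)

def is_dual_feasible(X_groups, theta, alpha):
    """schematic feasibility test for a dual point theta: each group's
    shrunken correlation vector must fit in the l2 ball of radius
    alpha * sqrt(n_g) (scaling convention assumed)."""
    return all(
        np.linalg.norm(shrink(Xg.T @ theta)) <= alpha * np.sqrt(Xg.shape[1])
        for Xg in X_groups
    )

# sanity check of the decomposition xi = P(xi) + S(xi), with P(xi) in B_inf
xi = np.array([2.5, -0.3, -1.7, 0.9])
assert np.allclose(proj_linf_ball(xi) + shrink(xi), xi)
assert np.all(np.abs(proj_linf_ball(xi)) <= 1.0)
```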
as a result, this leads to a convenient way to determine the feasibility of any dual variable by checking if , .we motive the two - layer screening rules via the kkt condition in eq .( [ eqn : kkt2_sgl ] ) .as implied by the name , there are two layers in our method .the first layer aims to identify the inactive groups , and the second layer is designed to detect the inactive features for the remaining groups . by eq .( [ eqn : kkt2_sgl ] ) , we have the following cases by noting and * case 1 . *if , we have \in \begin{cases } \alpha\sqrt{n_g}\frac{[\beta_g^*(\lambda,\alpha)]_i}{\|\beta^*_g(\lambda,\alpha)\|}+\textup{sign}([\beta_g^*(\lambda,\alpha)]_i ) , \hspace{2mm}\textup{if}\hspace{1mm}[\beta_g^*(\lambda,\alpha)]_i\neq0,\\ [ -1,1],\hspace{46.5mm}\textup{if}\hspace{1mm}[\beta_g^*(\lambda,\alpha)]_i=0 . \end{cases}\end{aligned}\ ] ] in view of eq .( [ eqn : kkt1_nonzero_sgl ] ) , we can see that \right|\leq 1\hspace{1mm}\textup{then}\hspace{1 mm } [ \beta^*_g(\lambda,\alpha)]_i=0.\end{aligned}\ ] ] * case 2 . *if , we have \in\alpha\sqrt{n_g}[\mathbf{u}_g]_i+[-1,1],\,\|\mathbf{u}_g\|\leq1.\end{aligned}\ ] ] * the first layer ( group - level ) of tlfre * from ( [ eqn : c1_a_sgl ] ) in * case 1 * , we have clearly , ( [ rule1_sgl ] ) can be used to identify the inactive groups and thus a group - level screening rule . * the second layer ( feature - level ) of tlfre *let be the column of .we have =\mathbf{x}_{g_i}^t\theta^*(\lambda,\alpha) ] be the vector consisting of the first components of .[ lemma : rho ] we sort in descending order and denote it by . 1 .if there exists ] , then ] , , and .there exists a such that , and {k+1},[\mathbf{z}]_k) ] and <-1\} ] .then , {i^*})\mathbf{e}_{i^*}:\,i^*\in\mathcal{i}^*\right\},\hspace{1.5mm}\textup{if}\hspace{1mm}\xi_g\not\subset\mathcal{b}_{\infty}\,\textup{and}\,\mathbf{c}\neq0,\\ \left\{r\cdot\mathbf{e}_{i^ * } , -r\cdot\mathbf{e}_{i^*}:\,i^*\in\mathcal{i}^*\right\},\hspace{8.5mm}\textup{if}\hspace{1mm}\xi_g\not\subset\mathcal{b}_{\infty}\,\textup{and}\,\mathbf{c}=0,\\ \end{cases } \end{aligned}\ ] ] where is the standard basis vector . 1 .suppose that . by the third part of lemma [ lemma : optimality_condition_supreme1_sgl ], we have by eq .( [ eqn : opt_cond_case1_sgl ] ) , we can see that because otherwise we would have .moreover , we can only consider the cases with because and we aim to maximize .therefore , if we can find a solution with , there is no need to consider the cases with .+ suppose that . then , eq . ( [ eqn : opt_cond_case1_sgl ] ) leads to in view of part ( iv ) of proposition [ prop : normal_cone ] and eq .( [ eqn : c_supreme1_sgl ] ) , we have therefore , eq . ( [ eqn : xi_supreme1_sgl ] ) can be rewritten as combining eq .( [ eqn : opt_cond_case1_sgl ] ) and eq .( [ eqn : shrink_xi_c_supreme1_sgl ] ) , we have the statement holds by plugging eq .( [ eqn : mu_supreme1_sgl ] ) and eq .( [ eqn : projc_projxi_supreme1_sgl ] ) into eq .( [ eqn : xi_supreme1_sgl ] ) and eq .( [ eqn : shrink_xi_c_supreme1_sgl ] ) .moreover , the above discussion implies that only contains one element as shown in eq .( [ eqn : sol1_supreme1_sgl ] ) .2 . suppose that is a boundary point of .then , we can find a point and . by the third part of lemma [ lemma : optimality_condition_supreme1_sgl ], we also have eq .( [ eqn : opt_sol_bdy_sgl ] ) and eq .( [ eqn : opt_cond_case1_sgl ] ) hold .we claim that ] .+ let us consider the cases with . 
because [ see eq .( [ eqn : opt_cond_case1_sgl ] ) ] and we want to maximize , there is no need to consider the cases with if we can find solutions of problem ( [ prob : supreme1_sgl ] ) with .therefore , eq . ( [ eqn : opt_cond_case1_sgl ] ) leads to by part ( iii ) of proposition [ prop : normal_cone ] , we can see that combining eq .( [ eqn : pxi_c_supreme1_sgl ] ) and eq .( [ eqn : opt_sol_bdy_sgl ] ) , the statement holds immediately , which confirms that .3 . suppose that is an interior point of first consider the cases with .then , we can see that in other words , an arbitrary point of is an optimal solution of problem ( [ prob : supreme1_sgl ] ) .thus , we have on the other hand , we can see that therefore , we have and thus 2 .suppose that , i.e. , there exists such that . by the third part of lemma [ lemma : optimality_condition_supreme1_sgl ] , we have eq . ( [ eqn : opt_sol_bdy_sgl ] ) and eq .( [ eqn : opt_cond_case1_sgl ] ) hold .moreover , in view of the proof of the first and second part , we can see that .therefore , eq . ( [ eqn : opt_cond_case1_sgl ] ) leads to by rearranging the terms of eq .( [ eqn : conv_comb_supreme1_sgl ] ) , we have because , eq .( [ eqn : conv_comb_supreme1_sgl ] ) implies that lies on the line segment connecting and .thus , we have therefore , to maximize , we need to minimize . because , we can see that is a boundary point of .therefore , we need to solve the following minimization problem : suppose that .we can see that the set of optimal solutions of problem ( [ prob : proj_faces_supreme1_sgl ] ) is for each , we set it as . in view of eq .( [ eqn : conv_comb2_supreme1_sgl ] ) and eq .( [ eqn : opt_sol_bdy_sgl ] ) , the statement follows immediately .+ suppose that .recall that {i^*}|=\|\mathbf{c}\|_{\infty}\} ] if ( [ rule : l1 ] ) and ( [ rule : l2 ] ) are the first layer and second layer screening rules of tlfre , respectively .the framework of tlfre is applicable to a large class of sparse models with multiple regularizers . as an example, we extend tlfre to nonnegative lasso : where is the regularization parameter and is the nonnegative orthant of . in section [ ssec : fenchel_dual_nnlasso ] , we transform the constraint to a regularizer and derive the fenchel s dual of the nonnegative lasso problem .we then motivate the screening method called dpc since the key step is to * * d**ecom**p**ose a * * c**onvex set via fenchel s duality theorem via the kkt conditions in section [ ssec : general_rule_nnlasso ] . in section [ ssec : lambdamx_nnlasso ] , we analyze the geometric properties of the dual problem and derive the set of parameter values leading to zero solutions .we then develop the screening method for nonnegative lasso in section [ ssec : dpc_nnlasso ] .let be the indicator function of . by noting that for any , we can rewrite the nonnegative lasso problem in ( [ prob : nnlassoo ] ) as in other words , we incorporate the constraint to the objective function as an additional regularizer . as a result ,the nonnegative lasso problem in ( [ prob : nnlasso ] ) has two regularizers .thus , similar to sgl , we can derive the fenchel s dual of nonnegative lasso via theorem [ thm : fenchel_duality ] .we now proceed by following a similar procedure as the one in section [ subsection : fenchel_dual_sgl ] .we note that the nonnegative lasso problem in ( [ prob : nnlasso ] ) can also be formulated as the one in ( [ prob : general ] ) with and . 
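since the display introducing the nonnegative lasso is missing above, here is a hedged rendering of the problem and of the reformulation just described, in which the nonnegativity constraint is folded into a second regularizer (the indicator I_C of the nonnegative orthant C):

```latex
% hedged rendering of ([prob:nnlassoo]) and of its regularized form ([prob:nnlasso])
\[
  \min_{\beta\ge 0}\;\tfrac{1}{2}\,\|y - X\beta\|_{2}^{2} + \lambda\,\|\beta\|_{1}
  \;\;=\;\;
  \min_{\beta\in\mathbb{R}^{p}}\;\tfrac{1}{2}\,\|y - X\beta\|_{2}^{2}
    + \lambda\sum_{i}\beta_{i} + I_{C}(\beta) .
\]
```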
to derive the fenchel s dual of nonnegative lasso, we need to find and by theorem [ thm : fenchel_duality ] .since we have already seen that in section [ subsection : fenchel_dual_sgl ] , we only need to find .the following result is indeed a counterpart of lemma [ lemma : conjugate_omega_sgl ] .[ lemma : conjugate_omega_nnlasso ] let , , and .then , 1. and , where is the nonpositive orthant of . , where .we omit the proof of lemma [ lemma : conjugate_omega_nnlasso ] since it is very similar to that of lemma [ lemma : conjugate_omega_sgl ] .consider the second part of lemma [ lemma : conjugate_omega_nnlasso ] .let , where " is defined component - wisely .we can see that on the other hand , lemma [ lemma : infconv_sets ] implies that thus , we have . the second part of lemma [ lemma : conjugate_omega_nnlasso ] decomposes each into two components : and that belong to and , respectively . by theorem [ thm : fenchel_duality ] and lemma [ lemma : conjugate_omega_nnlasso ], we can derive the fenchel s dual of nonnegative lasso in the following theorem ( which is indeed the counterpart of theorem [ thm : dual_sgl ] ) .[ thm : dual_nnlasso ] for the nonnegative lasso problem , the following hold : 1 .the fenchel s dual of nonnegative lasso is given by : 2 . let and be the optimal solutions of problems ( [ prob : nnlasso ] ) and ( [ prob : nnlasso_dual ] ) , respectively.then , we omit the proof of theorem [ thm : dual_nnlasso ] since it is very similar to that of theorem [ thm : dual_sgl ] .the key to develop the dpc rule for nonnegative lasso is the kkt condition in ( [ eqn : kkt2_nnlasso ] ) .we can see that and = \begin{cases } 0,\hspace{13mm}\mbox{if } [ \mathbf{w}]_i>0,\\ \rho,\,\rho\leq0,\hspace{2mm}\mbox{if } [ \mathbf{w}]_i=0,\\ \end{cases } \right\}.\end{aligned}\ ] ] therefore , the kkt condition in ( [ eqn : kkt2_nnlasso ] ) implies that >0,\\ \varrho,\,\varrho\leq1,\hspace{2mm}\mbox{if } [ \beta^*(\lambda)]_i=0 . \end{cases}\end{aligned}\ ] ] by eq .( [ eqn : kkt3_nnlasso ] ) , we have the following rule : =0.\end{aligned}\ ] ] because is unknown , we can apply ( [ r3 ] ) to identify the inactive features which have coefficients in .similar to tlfre , we can first find a region that contains .then , we can relax ( [ r3 ] ) as follows : =0.\end{aligned}\ ] ] inspired by ( [ rrule3_nnlasso ] ) , we develop dpc via the following three steps : 1 .given , we estimate a region that contains .2 . we solve the optimization problem .3 . by plugging in computed from* step 2 * , ( [ rrule3_nnlasso ] ) leads to the desired screening method dpc for nonnegative lasso . in view of the fenchel s dual of nonnegative lasso in ( [ prob : nnlasso_dual ] ) , we can see that the optimal solution is indeed the projection of onto the feasible set , i.e. 
, therefore , if , eq .( [ eqn : dual_proj_nnlasso ] ) implies that .if further is an interior point of , [ rrule3_nnlasso ] implies that .the next theorem gives the set of parameter values leading to solutions of nonnegative lasso .[ thm : lambdamx_nnlasso ] for the nonnegative lasso problem ( [ prob : nnlasso ] ) , let .then , the following statements are equivalent : , , , .we omit the proof of theorem [ thm : lambdamx_nnlasso ] since it is very similar to that of theorem [ thm : lambda_alpha_sgl ] .we follow the three steps in section [ ssec : general_rule_nnlasso ] to develop the screening rule for nonnegative lasso .we first estimate a region that contains .because admits a closed form solution with by theorem [ thm : lambdamx_nnlasso ] , we focus on the cases with .[ thm : estimation_nnlasso ] for the nonnegative lasso problem , suppose that is known with .for any , we define then , the following hold : 1 . , 2 . .we only show that since the proof of the other statement is very similar to that of theorem [ thm : estimation_sgl ] . by proposition[ prop : normal_cone ] and theorem [ thm : lambdamx_nnlasso ] , it suffices to show that because , we have .the definition of implies that .thus , the inequality in ( [ ineqn : xmx_nnlasso ] ) holds , which completes the proof .theorem [ thm : estimation_nnlasso ] implies that is in a ball denoted by radius centered at .simple calculations lead to by plugging into ( [ rrule3_nnlasso ] ) , we have the dpc screening rule for nonnegative lasso as follows .for the nonnegative lasso problem , suppose that we are given a sequence of parameter values .then , =0 $ ] if is known and the following holds : evaluate tlfre for sgl and dpc for nonnegative lasso in sections [ ssec : exp_sgl ] and [ ssec : exp_nnlasso ] , respectively , on both synthetic and real data sets . to the best of knowledge ,the tlfre and dpc are the first screening methods for sgl and nonnegative lasso , respectively .we perform experiments to evaluate tlfre on synthetic and real data sets in sections [ sssec : exp_syn_sgl ] and [ sssec : exp_adni_sgl ] , respectively . to measure the performance of tlfre ,we compute the _ rejection ratios _ of ( [ rule : l1 ] ) and ( [ rule : l2 ] ) , respectively .specifically , let be the number of features that have coefficients in the solution , be the index set of groups that are discarded by ( [ rule : l1 ] ) and be the number of inactive features that are detected by ( [ rule : l2 ] ) .the rejection ratios of ( [ rule : l1 ] ) and ( [ rule : l2 ] ) are defined by and , respectively .moreover , we report the _ speedup _ gained by tlfre , i.e. , the ratio of the running time of solver without screening to the running time of solver with tlfre .the solver used in this paper is from slep . to determine appropriate values of and by cross validation or stability selection, we can run tlfre with as many parameter values as we need .given a data set , for illustrative purposes only , we select seven values of from .then , for each value of , we run tlfre along a sequence of values of equally spaced on the logarithmic scale of from to .thus , pairs of parameter values of are sampled in total .we perform experiments on two synthetic data sets that are commonly used in the literature .the true model is , .we generate two data sets with entries : synthetic 1 and synthetic 2 .we randomly break the features into groups . for synthetic 1 , the entries of the data matrix are i.i.d .standard gaussian with pairwise correlation zero , i.e. , . 
for synthetic 2 ,the entries of the data matrix are drawn from i.i.d .standard gaussian with pairwise correlation , i.e. , .to construct , we first randomly select percent of groups . then , for each selected group , we randomly select percent of features .the selected components of are populated from a standard gaussian and the remaining ones are set to .we set for synthetic 1 and for synthetic 2 .+ [ fig : synthetic1 ] the figures in the upper left corner of fig .[ fig : synthetic1 ] and fig .[ fig : synthetic2 ] show the plots of ( see corollary [ cor : lambdamx_sgl ] ) and the sampled parameter values of and ( recall that and ) . for the other figures , the blue and red regions represent the rejection ratios of ( [ rule : l1 ] ) and ( [ rule : l2 ] ) , respectively .we can see that tlfre is very effective in discarding inactive groups / features ; that is , more than of inactive features can be detected . moreover, we can observe that the first layer screening ( [ rule : l1 ] ) becomes more effective with a larger . intuitively , this is because the group lasso penalty plays a more important role in enforcing the sparsity with a larger value of ( recall that ) .the top and middle parts of table [ table : tlfre_runtime_sync ] indicate that the speedup gained by tlfre is very significant ( up to times ) and tlfre is very efficient .compared to the running time of the solver without screening , the running time of tlfre is negligible .the running time of tlfre includes that of computing , , which can be efficiently computed by the power method . indeed , this can be shared for tlfre with different parameter values . + [ fig : synthetic2 ] l c|c|c|c|c|c|c|c| & & & & & & & & + + [ -2.5ex ] & & 298.36 & 301.74 & 308.69 & 307.71 & 311.33 & 307.53 & 291.24 + & & 0.77 & 0.78 & 0.79 & 0.79 & 0.81 & 0.79 & 0.77 + & & 10.26 & 12.47 & 15.73 & 17.69 & 19.71 & 21.95 & 22.53 + & & * 29.09 * & * 24.19 * & * 19.63 * & * 17.40 * & & * 14.01 * & * 12.93 * + & & 294.64 & 294.92 & 297.29 & 297.50 & 297.59 & 295.51 & 292.24 + & & 0.79 & 0.80 & 0.80 & 0.81 & 0.81 & 0.81 & 0.82 + & & 11.05 & 12.89 & 16.08 & 18.90 & 20.45 & 21.58 & 22.80 + & & * 26.66 * & * 22.88 * & * 18.49 * & * 15.74 * & * 14.55 * & * 13.69 * & * 12.82 * + + [ fig : adni_gmv ] + l c|c|c|c|c|c|c|c| & & & & & & & & + + [ -2.5ex ] & & 30652.56 & 30755.63 & 30838.29 & 31096.10 & 30850.78 & 30728.27 & 30572.35 + & & 64.08 & 64.56 & 64.96 & 65.00 & 64.89 & 65.17 & 65.05 + & & 372.04 & 383.17 & 386.80 & 402.72 & 391.63 & 385.98 & 382.62 + & & * 82.39 * & * 80.27 * & * 79.73 * & * 77.22 * & * 78.78 * & * 79.61 * & * 79.90 * + & & 29751.27 & 29823.15 & 29927.52 & 30078.62 & 30115.89 & 29927.58 & 29896.77 + & & 62.91 & 63.33 & 63.39 & 63.99 & 64.13 & 64.31 & 64.36 + & & 363.43 & 364.78 & 386.15 & 393.03 & 395.87 & 400.11 & 399.48 + & & * 81.86 * & * 81.76 * & * 77.50 * & * 76.53 * & * 76.08 * & * 74.80 * & * 74.84* + we perform experiments on the alzheimer s disease neuroimaging initiative ( adni ) data set ( http://adni.loni.usc.edu/ ) .the data matrix consists of samples with nucleotide polymorphisms ( snps ) , which are divided into groups .the response vectors are the grey matter volume ( gmv ) and white matter volume ( wmv ) , respectively . 
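the rejection ratios and speedups reported in this section are easy to instrument once a screening step is wired in front of the solver. the sketch below is schematic only: the concrete group-level test ( [ rule : l1 ] ) and feature-level test ( [ rule : l2 ] ) involve inequalities that did not survive extraction in this copy, so they are injected as callables, and `solve_sgl` stands for whatever sgl solver is used (slep in the experiments above).

```python
# schematic two-layer screening harness; group_test / feature_test stand in for
# the tlfre tests (rule:l1) and (rule:l2), and solve_sgl for the sgl solver.
import numpy as np

def screen_and_solve(X, y, groups, lam, alpha, group_test, feature_test, solve_sgl):
    """groups: list of column-index lists, one per group."""
    drop_groups, drop_feats, keep_feats = [], [], []
    for g, idx in enumerate(groups):
        if group_test(g):                      # first layer: whole group inactive
            drop_groups.append(g)
            drop_feats.extend(idx)
            continue
        for j in idx:                          # second layer: single inactive features
            (drop_feats if feature_test(j) else keep_feats).append(j)
    beta = np.zeros(X.shape[1])
    if keep_feats:                             # solve only the reduced problem
        beta[keep_feats] = solve_sgl(X[:, keep_feats], y, lam, alpha)
    return beta, drop_groups, drop_feats

def rejection_ratios(beta, groups, drop_groups, drop_feats):
    """fractions of the inactive features caught by each screening layer."""
    inactive = int(np.sum(beta == 0)) or 1
    layer1 = sum(len(groups[g]) for g in drop_groups)
    layer2 = len(drop_feats) - layer1          # features removed one by one
    return layer1 / inactive, layer2 / inactive
```

because safe screening discards only coefficients that are provably zero at the given parameter pair, the reduced problem returns the same solution as the full one, and the reported speedup is simply the ratio of the two solver running times.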
the figures in the upper left corner of fig .[ fig : adni_gmv ] and fig .[ fig : adni_wmv ] show the plots of ( see corollary [ cor : lambdamx_sgl ] ) and the sampled parameter values of and .the other figures present the rejection ratios of ( [ rule : l1 ] ) and ( [ rule : l2 ] ) by blue and red regions , respectively .we can see that almost all of the inactive groups / features are discarded by tlfre .the rejection ratios of are very close to in all cases .table [ table : tlfre_runtime_real ] shows that tlfre leads to a very significant speedup ( about times ) . in other words , the solver without screening needs about eight and a half hours to solve the sgl problems for each value of .however , combined with tlfre , the solver needs only six to eight minutes .moreover , we can observe that the computational cost of tlfre is negligible compared to that of the solver without screening .this demonstrates the efficiency of tlfre . in this experiment, we evaluate the performance of dpc on two synthetic data sets and six real data sets .we integrate dpc with the solver to solve the nonnegative lasso problem along a sequence of parameter values of equally spaced on the logarithmic scale of from to .the two synthetic data sets are the same as the ones we used in section [ sssec : exp_syn_sgl ] .to construct , we first randomly select percent of features .the corresponding components of are populated from a standard gaussian and the remaining ones are set to 0 .we list the six real data sets and the corresponding experimental settings as follows . 1 .* breast cancer data set * : this data set contains gene expression values of tumor samples ( thus the data matrix is of ) .the response vector contains the binary label of each sample .2 . * leukemia data set * : this data set contains gene expression values of samples ( ) .the response vector contains the binary label of each sample .3 . * prostate cancer data set * : this data set contains measurements of 132 patients ( ) . by proteinmass spectrometry , the features are indexed by time - of - flight values , which are related to the mass over charge ratios of the constituent proteins in the blood .the response vector contains the binary label of each sample .4 . * pie face image data set* : this data set contains gray face images ( each has pixels ) of people , taken under different poses , illumination conditions and expressions . in each trial, we first randomly pick an image as the response , and then use the remaining images to form the data matrix .we run trials and report the average performance of dpc .* mnist handwritten digit data set * : this data set contains grey images of scanned handwritten digits ( each has pixels ) .the training and test sets contain and images , respectively .we first randomly select images for each digit from the training set and get a data matrix .then , in each trial , we randomly select an image from the testing set as the response .we run trials and report the average performance of the screening rules .* street view house number ( svhn ) data set * : this data set contains color images of street view house numbers ( each has pixels ) , including images for training and for testing . 
in each trial , we first randomly select an image as the response , and then use the remaining ones to form the data matrix .we run trials and report the average performance .we present the _rejection ratios_the ratio of the number of inactive features identified by dpc to the actual number of inactive features in fig .[ fig : dpc_rej_ratio ] .we also report the running time of the solver with and without dpc , the time for running dpc , and the corresponding _ speedup _ in table [ table : dpc_runtime ] .[ fig : dpc_rej_ratio ] shows that dpc is very effective in identifying the inactive features even for small parameter values : the rejection ratios are very close to for the entire sequence of parameter values on the eight data sets .table [ table : dpc_runtime ] shows that dpc leads to a very significant speedup on all the data sets .take mnist as an example .the solver without dpc takes minutes to solve the nonnegative lasso problems .however , combined with dpc , the solver only needs seconds .the speedup gained by dpc on the mnist data set is thus more than times .similarly , on the svhn data set , the running time for solving the nonnegative lasso problems by the solver without dpc is close to seven hours .however , combined with dpc , the solver takes less than two minutes to solve all the nonnegative lasso problems , leading to a speedup about times .moreover , we can also observe that the computational cost of dpc is very low which is negligible compared to that of the solver without dpc .+ [ fig : dpc_rej_ratio ] l c|c|c|c|c|c|c|c| & & synthetic 2 & breast cancer & leukemia & prostate cancer & pie & mnist & svhn + + [ -2.5ex ] & 218.37 & 204.06 & 23.40 & 34.04 & 187.82 & 674.04 & 3000.69 & 24761.07 + & 0.31 & 0.29 & 0.03 & 0.06 & 0.23 & 1.16 & 3.53 & 30.59 + & 5.52 & 6.10 & 2.18 & 3.37 & 6.37 & 5.01 & 9.31 & 104.93 + & * 39.56 * & * 33.45 * & * 10.73 * & * 10.10 * & * 29.49 * & * 134.54 * & * 322.31 * & * 235.98 * +in this paper , we propose a novel feature reduction method for sgl via decomposition of convex sets .we also derive the set of parameter values that lead to zero solutions of sgl . to the best of our knowledge , tlfre is the first method which is applicable to sparse models with multiple sparsity - inducing regularizers .more importantly , the proposed approach provides novel framework for developing screening methods for complex sparse models with multiple sparsity - inducing regularizers , e.g. , svm that performs both sample and feature selection , fused lasso and tree lasso with more than two regularizers . to demonstrate the flexibility of the proposed framework, we develop the dpc screening rule for the nonnegative lasso problem .experiments on both synthetic and real data sets demonstrate the effectiveness and efficiency of tlfre and dpc .we plan to generalize the idea of tlfre to svm , fused lasso and tree lasso , which are expected to consist of multiple layers of screening .by introducing an auxiliary variable the sgl problem in ( [ prob : sgl ] ) becomes : let be the lagrangian multiplier , the lagrangian function is let to derive the dual problem , we need to minimize the lagrangian function with respect to and . 
in other words , we need to minimize and , respectively .we first consider by the fermat s rule , we have which leads to by noting that we have thus , we can see that moreover , because , eq .( [ eqn : opt_cond_minf1_sgl ] ) implies that in view of eq .( [ eqn : lagrangian_sgl ] ) , eq .( [ eqn : minf1_sgl ] ) , eq .( [ eqn : minf2_sgl ] ) and eq .( [ eqn : dual_feasible_lagrangian_sgl ] ) , the dual problem of sgl can be written as which is equivalent to ( [ prob : dual_sgl_lagrangian ] ) .recall that and are the primal and dual optimal solutions of sgl , respectively . by eq .( [ eqn : auxiliary_sgl ] ) , eq .( [ eqn : opt_cond0_minf1_sgl ] ) and eq .( [ eqn : opt_cond_minf2_sgl ] ) , we can see that the kkt conditions are
|
sparse-group lasso (sgl) has been shown to be a powerful regression technique for simultaneously discovering group and within-group sparse patterns by using a combination of the l1 and l2 norms. however, in large-scale applications, the complexity of the regularizers entails great computational challenges. in this paper, we propose a novel **t**wo-**l**ayer **f**eature **re**duction method (tlfre) for sgl via a decomposition of its dual feasible set. the two-layer reduction is able to identify the inactive groups and the inactive features, respectively, which are guaranteed to be absent from the sparse representation and can be removed from the optimization. existing feature reduction methods are only applicable to sparse models with one sparsity-inducing regularizer. to the best of our knowledge, tlfre is _the first one_ that is capable of dealing with _multiple_ sparsity-inducing regularizers. moreover, tlfre has a very low computational cost and can be integrated with any existing solver. we also develop a screening method called dpc (**d**ecom**p**osition of **c**onvex set) for the nonnegative lasso problem. experiments on both synthetic and real data sets show that tlfre and dpc improve the efficiency of sgl and nonnegative lasso by several orders of magnitude.
|
cloud computing has emerged as a trending platform for hosting various services for the public . among different cloud computing platforms ,the infrastructure - as - a - service ( iaas ) clouds offer virtualized computing infrastructures to public tenants in order to host tenant services . enabled by advanced resource virtualization technologies ,the clouds support intelligent resource sharing among multiple tenants , and can provision resources per tenant demand .however , due to the massive migration of services to the cloud , there is increasing concern about the unpredictable performance of cloud - based services . one major cause is the lack of network performance guarantee .all the tenants have to compete in the congested cloud network in an unorganized manner .this has motivated recent efforts on cloud resource sharing with network bandwidth guarantee , for which a novel cloud service abstraction has been proposed , named virtual cluster ( vc ) .the vc abstraction allows each tenant to specify both the virtual machines ( vms ) and per - vm bandwidth demand of its service .the cloud then realizes the request by allocating vms on physical machines ( pms ) , as well as reserving sufficient bandwidth in the network to guarantee the bandwidth demand in the _ hose model _ .the process of resource allocation for virtual clusters is called _ virtual cluster embedding _ ( vce ) .algorithms have been developed for vce with various objectives and constraints .one missing perspective in existing vce solutions is the _ availability _ of tenant services . due to the large - scale nature of cloud data centers, pm failures can happen frequently in the cloud .when such failure happens , all services who have their vcs fully or partly embedded on the failed pms will be affected , possibly receiving degraded service performance or even interruption of operation .this not only impairs the tenants interests , but also incurs additional cost to the cloud due to violation of service - level agreements . to achieve the high availability goal of tenant services ,one common practice is to enable service _ survivability _ , by utilizing extra resources to help services recover quickly when actual failures happen . a survivability mechanism can be either _ pro - active _ or _ reactive_. a pro - active mechanism provisions backup resources at the time of service provisioning , prior to the actual happening of failures . due to this , it can offer guaranteed recovery against a certain level of failures in the substrate , at the cost of underutilized resources when no failure is present . on the contrary ,a reactive mechanism only looks for backups as a reaction to actual failures .while this means less reserved resources in the normal operation , a reactive mechanism may not always find a feasible recovery during the failure , and thus can not guarantee the survivability of the service . 
in this paper, we study how to efficiently provide pro - active protection for tenant services under the vc model .in particular , we aim at embedding tenant vc requests survivably such that they can recover from any single - pm failure in the data center , meanwhile minimizing the total amount of resources reserved for each tenant .we formally define survivable vc embedding as a joint resource optimization problem of both primary and backup embeddings of the vc .following existing work , we assume the data center has a tree structure , which abstracts many widely - adopted data center architectures .we then propose an algorithm to optimally solve the embedding problem , within time bounded by a polynomial of the network size and the number of requested vms ( pseudo - polynomial time to input size ) .the algorithm is based on the observation that the embedding decisions are independent for each subtree in the same level .since the optimal approach is time - consuming , we further propose a faster heuristic algorithm , whose performance is comparable to the optimal in practical settings .we conduct both theoretical analysis and simulation - based performance evaluation , which have validated the effectiveness of our proposed algorithms .our main contributions are summarized as follows : to the best of our knowledge , we are the first to study the survivable and bandwidth - guaranteed vc embedding problem with joint primary and backup optimization .we propose a pseudo - polynomial time algorithm that finds the most resource - efficient survivable vc embedding for each tenant request .we further propose a heuristic algorithm that reduces the time complexity of the optimal algorithm by several orders , yet has similar performance in the online scenario .we use extensive experiments to evaluate the performance of our proposed algorithms .the rest of this paper is organized as follows .section [ sec : rw ] presents the background and related work on vc embedding , as well as survivable cloud service provisioning .section [ sec : model ] describes the network service model , introduces our pro - active survivability mechanism , and formally defines the survivable vc embedding problem .section [ sec : a ] presents our optimal algorithm , and theoretical analysis for the proposed algorithm .section [ sec : heu ] presents our efficient heuristic algorithm and proves its feasibility .section [ sec : eval ] shows the evaluation results of our proposed algorithms , compared to a baseline algorithm .section [ sec : conclusions ] concludes this paper .virtual cluster ( vc ) is a newly proposed cloud service abstraction , which offers bandwidth guarantee over existing vm - based abstractions . in the vc model, the tenant submits its service request in terms of both the number of vms and the per - vm bandwidth demand .a tenant request , defined as a tuple , specifies a virtual topology where uniform vms are connected to a central virtual switch , each via a virtual link with bandwidth of , as shown in fig .[ fig : vc ] . to fulfill the request ,the cloud should provision vms in the substrate data center , with bandwidth guaranteed in the _ hose model _ ( to be detailed in section [ sec : model ] ) . 
in short words ,hose model brings two major benefits : reduced model complexity ( user specifies per - vm bandwidth instead of per - vm pair bandwidth as in the traditional _ pipe model _ ) , and simple characterization of the minimum bandwidth requirement on each link ; interested readers are referred to and for details .ballani _ et al . _ first proposed the vc abstraction for cloud services with hose - model bandwidth guarantee .they characterized the minimum bandwidth required on each link to satisfy the hose - model bandwidth guarantee , and developed a recursive heuristic for computing the vc embedding with minimum bandwidth consumption .based on it , zhu _ et al . _ proposed an optimal dynamic programming algorithm to embed vc in the lowest subtree in tree - like data center topologies .they also proposed a heuristic algorithm for vcs with heterogeneous bandwidth demands of their vms .tivc extends oktopus with a time - related vc model that takes into consideration the dynamic traffic patterns of cloud services .svc also extends oktopus , and considers the statistical distribution of bandwidth demands between vms .it proposes another dynamic programming algorithm to tackle the uncertain bandwidth demands .dcloud incorporates deadline constraints into the vc abstraction . instead of guaranteeing per - vm bandwidth, it guarantees that each accepted job will finish execution within its specified deadline . in a recent work ,rost _ et al . _ proposed that the vc embedding problem could be solved in polynomial time . yet, their model does not capture the minimum bandwidth required on each link to satisfy vm bandwidth requirements , which was characterized originally in ; as a result , their solution may over - provision bandwidth for vcs .recently , elastic bandwidth guarantee has drawn attention .yu _ et al . _ proposed dynamic programming algorithms for dynamically scaling vcs , optimizing virtual cluster locality and vm migration cost .fuerst _ et al . _ also studied vc scaling minimizing migrations and bandwidth ; their approach relies on the concept of center - of - gravity , which is determined by the location of the central switch .none of the above has considered survivability in vc embedding .existing survivability mechanisms , such as those shown in the next subsection , do not lead to satisfactory solutions when directly applied to vc embedding , due to their lack of consideration for bandwidth requirement and/or lack of performance guarantee .this paper focuses on deriving theoretically guaranteed solutions for the survivable vc embedding problem , as well as promising ( low - complexity ) heuristics .other problems similar to vc embedding include bandwidth - guaranteed vm embedding and virtual network / infrastructure embedding .the former problem is topology - agnostic , and only considers bandwidth on edge links ; the latter considers a more general model where the virtual topology can be arbitrary graphs , hence it commonly suffers from high model complexity ( to be detailed below ) . providingsurvivability guarantee for vcs has been studied by alameddine _ et al . _given a fixed primary embedding of a vc , they proposed a heuristic solution to ensure 100% survivability for the vc with minimum backup vms and bandwidth .they further considered inter - vc bandwidth sharing to reduce backup bandwidth in .however , their solutions did not consider the impact of the primary embedding to backup resource consumption . 
in this paper, we propose to jointly optimize both primary and backup resources of a vc , which can result in reduced backup resource consumption .also , we propose an optimal solution , rather than heuristic solutions in . beyond the vc abstraction , many have studied offering survivable virtual cloud services under various computing and network models .a first line of research focuses on providing survivable vm hosting in the cloud .nagarajan _ et al . _ proposed the first pro - active vm protection method , which leverages the live migration capability of the xen hypervisor to protect vms from detected failures .based on this , machida _ et al . _ studied redundant vm placement to protect a service from pm failures .bin _ et al . _ also studied vm placement for -survivability , and proposed a shadow - based solution for vms with heterogeneous resource demands .the above papers do not offer bandwidth guarantee for vms .our work utilizes a similar survivability mechanism as in , where a predicted physical failure will trigger migration of the affected vms to its backup location .yet we consider bandwidth guarantee in addition to vm placement , which complicates the problem and differentiates our work from the above . along another line ,many solutions focus on survivable service hosting using the virtual infrastructure ( vi ) abstraction .a vi is a general graph , where each node or link may have a different resource demand , and an embedding is defined as two mappings : virtual node ( vm ) mapping and virtual link mapping ; the _ pipe model _is used in the vi abstraction instead of the hose model as in the vc abstraction .for example , yeow _ et al . _ , yu _ et al . _ and xu _ et al . _ investigated survivable vi embedding through redundancy .they formulated the problem with various objectives and constraints , and designed heuristic algorithms . from a different angle , zhang _ et al . _ proposed heuristic algorithms to embed vis based on the availability statistics of the physical components .the vi abstraction is more general than the vc abstraction , however , it is both hard to analyze theoretically and difficult to implement in large - scale networks due to its intrinsic model complexity .an even more general model was proposed by guo _ et al . _ , where a bandwidth requirement matrix is used to describe the bandwidth demand between each and every pair of virtual nodes ; it suffers from even higher model complexity than the vi abstraction .cloudmirror proposes a tenant application graph model for bandwidth guarantee , and discusses a heuristic opportunistic solution for high - availability that balances between bandwidth saving and availability .bodik _ et al . _ studied general service survivability in bandwidth - constrained data centers .based on the service characteristics of bing , they proposed an optimization framework and several heuristics to maximize fault - tolerance meanwhile minimizing bandwidth under the pipe model ( per - vm pair bandwidth demand ) .survivability has been studied extensively in conventional communication networks and optical networks . 
existing work focuses on providing connectivity guarantee against network link and switch failures .the problem studied in this paper focuses on protecting tenant services from pm failures , which are different from link and switch failures .we study service provisioning in an iaas cloud environment , where the cloud offers services in the form of inter - connected vms .to request a service , the tenant submits its request in terms of both vms and network bandwidth .a cloud hypervisor processes requests in an online manner . for each request , the hypervisor first attempts to allocate enough resources in the data center .if the allocation succeeds , it then reserves the allocated resources and provisions the vc for the tenant .the vc will exclusively use all reserved resources until the end of its usage period , when the hypervisor will then revoke all allocated resources of the vc .if the allocation fails due to lack of resources , the hypervisor rejects the request .formally , each tenant request is defined as , where is the number of requested vms , and is the per - vm bandwidth demand . following existing work , we assume that the data center has a tree - structure topology .in fact , many commonly used data center architectures have tree - like structures ( fattree , vl2 , etc . ;see section [ sec : disc ] ) , where our proposed algorithms can be adopted with simple abstraction of the substrate .the substrate data center is defined as an undirected tree , where is the set of nodes , and is the set of physical links .the node set is further partitioned into two subsets , where is the set of pms that host vms , and is the set of abstract switches which perform networking functions . note that each abstract switch can represent a group of physical switches in the data center .each pm is a leaf node , while each switch is an intermediate node in the topology . without loss of generality, the substrate can be viewed as a rooted tree , and we pick a specific node as its root , which generally represents all core switches . for each node , we use to denote the subtree rooted at .we use to denote the out - bound link of , _i.e. _ , the link adjacent to node and on the shortest path from to global root .we use to denote the number of children of in the tree . for each pm , we define as the number of available vm slots on . for each node , we define as the available bandwidth on its out - bound link .to fulfill a request , the cloud needs to allocate vms with bandwidth guarantee in _ the hose model _given subtree , let be the number of vms allocated in , then the minimum bandwidth required on link is given by _i.e. _ , the bandwidth demand of vms either inside or outside , whichever is smaller . in other words , given link bandwidth , the number of vms allocated within , , must satisfy \cup [ n - { b_v \over b } , n]\ ] ] for simplicity of illustration , we define \cup [ n - { b_v \over b } , n ] \right ) \cap [ 0 , n] ] .we also define as the lower bound of the upper feasible range , if . given substrate and request , a * virtual cluster embedding ( vce ) * is defined by a vm allocation function , denoting the number of vms allocated on each host , which satisfies the following properties : + 1 ) for any , + 2 ) for any , where , and + 3 ) . note that bandwidth allocation is implicitly defined , as it can be computed based on vm allocation as in eq . 
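the feasibility conditions in the definition of a vce are easy to check mechanically once the hose-model bandwidth term is written out: a subtree hosting m of the n requested vms needs min(m, n - m) * b on its out-bound link, which is exactly the "whichever is smaller" rule stated in words above (the equation itself is garbled in this copy). the sketch below walks the tree once and verifies properties 1-3; the node encoding (`Node`, `slots`, `uplink_bw`) is illustrative and not taken from the paper.

```python
# minimal hose-model feasibility check for a given vm allocation on a tree.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Node:
    name: str
    slots: int = 0                     # vm slots, meaningful only for pm leaves
    uplink_bw: float = 0.0             # residual bandwidth on the out-bound link
    children: List["Node"] = field(default_factory=list)

def hose_bw(m: int, n: int, b: float) -> float:
    """minimum out-bound bandwidth for a subtree hosting m of n vms."""
    return min(m, n - m) * b

def is_vce(root: Node, alloc: Dict[str, int], n: int, b: float) -> bool:
    """alloc maps pm name -> number of vms placed there."""
    ok = True
    def count(v: Node, at_root: bool) -> int:
        nonlocal ok
        if not v.children:                         # pm leaf: property 1
            m = alloc.get(v.name, 0)
            if m > v.slots:
                ok = False
        else:
            m = sum(count(c, False) for c in v.children)
        if not at_root and hose_bw(m, n, b) > v.uplink_bw:
            ok = False                             # property 2: out-bound link
        return m
    total = count(root, True)
    return ok and total == n                       # property 3
```

the root is treated specially only because it has no out-bound link; every other node, including each pm, must pass the hose-model test on the link toward the root.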
.[ fig : vcea ] shows an embedding of the tenant vc request on a -level -ary tree topology rooted at node , where pm has vm slots and pms each has vm slots , and each link has bandwidth of . vms are allocated on pms , and to fulfill the request .note that link bandwidth constraints are satisfied based on eq ., as shown . for example, although the subtree rooted at switch contains working vms , only bandwidth is required on its out - bound link as shown .if a tenant request is accepted , it then exclusively uses all its allocated resources , including both vm slots for and bandwidth for .this ensures guaranteed resources to the tenant service , leading to predictable service performance . finding vce has been addressed in . existing work for vce does not consider service availability against physical failures .for example , when a pm fails , all services with vms hosted on it will be interrupted . moreover , due to lack of pre - provisioned backup resources , the cloud may not be able to recover the affected services in a short period of time . this will lead to violated service - level agreements , and further economic losses to both the tenants and the cloud .we use a pro - active survivability mechanism to improve service availability .the idea is to pre - reserve dedicated backup resources for each service , and pre - compute the recovery plan against any possible failure scenario , during the initial embedding process . during the life cycle of the service , a predicted physical failure will trigger the pre - determined automatic failover process , which will migrate the affected vms to their backups.this way , the interruption period of the service is minimized . note that while failures are frequent in the cloud , simultaneous pm failures are relatively rare .hence we only focus on the single - pm failure scenario , where each failure is defined by the failed pm alone : .link and switch failures are not considered , as modern data centers typically have rich path diversity between any pair of pms , which can effectively protect over these failures .to realize this mechanism , the key point is to reserve sufficient backup resources during the initial embedding process . specifically , the cloud needs to reserve both backup vm slots and backup bandwidth for the service . to characterize the total vm and bandwidth consumption , and the survivability guarantee of a vc, we define the following concept : given substrate and request , a * survivable virtual cluster embedding ( svce ) * is defined by a tuple of allocation functions , with denoting the total number of vms allocated on each pm , and denoting the total bandwidth allocated on each link , such that during any single - pm failure , there still exists a _ vce _ of , in the auxiliary topology with resources on nodes and links defined as the remaining allocated resources , _i.e. _ , for and for . the above definition does not explicitly require a vce in the normal operation ( when no failure happens ) .such requirement is implicit , because after allocation , the vce for any failure scenario can be used in the normal operation .we call the vce used in the normal operation as the _ primary working set _ ( pws ) , and the vce used in failure as the _ recovery working set _ ( rws ) regarding . 
we also use _ working vms _ to denote the set of vms that are used ( active ) in a specific scenario , compared to the set of _ backup vms _ that remain inactive .note that given the svce , both working sets can be easily computed using existing vce algorithms .the cloud pre - computes these vces in all possible scenarios , hence when failure happens , the recovery process can quickly find the backup resources needed for each affected vm . fig .[ fig : vcea ] shows an svce of the tenant request .compared to the vce which allocates exactly vms , in total vms are provisioned in the svce . during the failure of any pm , 1 ) the number of remaining vms is always no less than , and 2 ) a vce exists under the hose model ( with no more than , and vms on one side of the link pm , the link pm and any other link respectively ) .hence the given svce can always recover the requested vc during any failure .note that we can assign the rws during arbitrary failure as the pws . in this example , the dark blue vm slots on pm to are assigned as the pws , while the green vm slots are backups .the problem we study is to find the svce that uses minimum resources in the cloud .moreover , we are interested in finding the svce that occupies the minimum number of vm slots , in order to accommodate as many future requests as possible .formally , we study the following optimization problem : given substrate and request , the * survivable virtual cluster embedding problem ( svcep ) * is to find an svce of request that consumes the minimum number of vm slots in the substrate . the necessity of resource optimization is illustrated in figs .[ fig : vcea ] and [ fig : vcea ] . while fig .[ fig : vcea ] indeed shows an svce of , it consumes backup vms , due to that a single failure at pm will affect vms . on the contrary , fig .[ fig : vcea ] shows a different svce that consumes only backup vm slots , and is optimal regarding total vm slots consumption . withless consumed resources , the data center can accept more tenant requests in the future .note that although we focus on minimizing vm consumption , our proposed algorithms can be extended to minimize bandwidth as well ; see section [ sec : disc ] . in the next two sections, we will present our proposed algorithms for solving svce .we start from designing an algorithm that solves svcep optimally .the algorithm works in a bottom - up manner : starting from the leaf nodes up to the root , the algorithm progressively determines the minimum number of total vms needed in the subtree rooted at each node , each time solving a generalized problem of svcep .formally , define the following generalization of svcep : given substrate , request , an arbitrary node , and two nonnegative integers ] , the generalized problem * svcep - gp * seeks to find the minimum number of vms needed in , to ensure that can provide _ at least _ working vms when no failure happens in , and _ at least _ working vms during arbitrary ( single - pm ) failure in . both and concern not only the vm slots that can be offered by each child subtree of node , but also the node s out - bound bandwidth . in other words, there exists a feasible solution to svcep - gp with , and if and only if there exist two integers and , such that and all child subtrees of can jointly offer exactly and vms in the normal and worst - case failure scenarios ( failure resulting in minimum number of available vms ) respectively . 
note that given , and , a feasible solution of svcep - gp does not guarantee that the subtree can provide _ exactly _ vms if no failure happens in , or vms if arbitrary failure happens in .it only requires that can offer _ at least _ or vms in either scenario respectively .for example , a subtree with out - bound bandwidth of can offer vms for request if its child subtrees can jointly offer vms , but can not offer exactly vms due to lack of bandwidth in the hose model .however , as we will prove in the next subsection , the optimal solution to svcep - gp with and yields an optimal solution to the original svcep , and vice versa . utilizing the above subproblem structure , we propose the following dynamic programming ( dp ) algorithm to compute the optimal solution by solving a sequence of svcep - gp instances . define ] as the minimum number of total vms in , to ensure that can provide at least working vms in the normal operation , and at least working vms during arbitrary failure in , _ using the first subtrees of _ , where .note that ] and ] from : if can not support ( ) working vms in , then we take the minimum value that both can be supported by and is at least ( ) as desired .note that both ] are non - decreasing in either or based on definition .hence the above defined ] values , for and . *inner dp : * the value of ] based on the values ] computed for the -th child node of node , as shown in eq . . = { \min\limits_{\substack{n_0 ' , n_0 '' , \\ n_1 ' , n_1 '' } } \left\ { n'_v[n_0 ' , n_1 ' , k-1 ] + n_{u_k}[n_0 '' , n_1 '' ] \left|\ , \begin{array}{*{20}{l } } { \ } \\ { \ } \end{array } \right .\right . }\\ { \left .\begin{array}{*{20}{l } } { n_0 ' + n_0 '' \ { \ge } \{ \min \ { n_0 ' + n_1'',n_0 '' + n_1 ' \ } \ { \ge } \ { n_1 } }\end{array } \right\ } } \ ] ] _ explanation _ : the computation of ] in a bottom - up manner .the order of computation guarantees that during the computation of an entry in either or , all its depending entries have already been computed in previous iterations .after computation , if the value ] working vms in a scenario , then it can offer any number of working vms less than or equal to in the same scenario without increasing bandwidth on any link. [ l:2 ] given an allocation of vms in any subtree for , if the subtree can offer _ more than _ working vms in a scenario , then it can offer _ exactly _ vms in the same scenario , without increasing bandwidth on any link . [ th:1 ]given an instance of svcep , algorithm [ a:1 ] returns the optimal solution if the instance is feasible , and returns `` infeasible '' otherwise . [ th:2 ] the worst - case time complexity of algorithm [ a:1 ] is bounded by , where is the network size and is the request size ( the number of vms requested ) . based on theorem [ th:1 ] , the solution is guaranteed to offer a feasible vce of using the allocated resources when facing any single pm failure . to find the rws for each failure, one can apply existing vce algorithms in the auxiliary topology where vm slots and bandwidth are the same as allocated except for the failed pm . as mentioned in section [ sec : surv ] , the rws of any failure can be used as the pws .the algorithm proposed in section [ sec : a ] optimally solves svcep. however , its worst - case time complexity can be as high as , which may be too expensive when a tenant asks for many vms . in this section, we propose an efficient heuristic algorithm that runs in time . 
before the algorithm, we first state the following lemma , whose proof is also detailed in the appendix : [ l : heu ] given substrate , request , and an integer ] means that the subtree can offer , or vms .the algorithm progressively computes the ar of every node , from leaves to root .it then finds the lowest subtree that can offer vms , and makes allocation through backtracking .the algorithm in finds a vce within time .algorithm [ a:2 ] calls for at most times , hence it has time complexity .algorithm [ a:2 ] does not guarantee optimality .in fact , we can construct simple examples for which it fails to find an svce , yet one with the optimal objective can be found by our optimal algorithm . due to page limit , we omit the examples here .however , as shown in section [ sec : eval ] , this heuristic algorithm has similar performance to the per - request optimal solution proposed in section [ sec : a ] when working in the online manner , but is several orders more time - efficient .therefore it is practically important for providing fast response to tenants in the cloud ._ shadow - based solution _ ( sbs ) is a well - known failover provisioning solution for vm management . in sbs , each primary vm is protected by a dedicated backup vm ( called _ shadow _ ) .different primary vms do not share any common backup vm . to employ sbs for vcs ,both vms and bandwidth need to be shadowed .we designed a heuristic bandwidth - aware algorithm for sbs as our baseline algorithm .it works as follows : given a request , the algorithm seeks to find one primary vce , as well as one shadow vce , on two disjoint sets of pms respectively .when making the primary vce , the algorithm seeks to minimize the pms used , using a modified algorithm as in , therefore leaving more room for the shadow .a request is accepted only when the network accommodates both the primary vce and the shadow .we compared our proposed algorithms ( opt for the optimal algorithm and heu for the heuristic algorithm ) to this baseline algorithm ( sbs ) to show how resources are conserved to serve more requests by our optimization algorithms . *_ acceptance ratio _ is the number of fulfilled requests over total requests , which directly reflects an algorithm s capability in serving as many requests as possible . * _ average vm consumption ratio _ is defined as the average ratio of actual vm slot consumption over the requested vms , namely , for each request .note that this only counts those requests accepted by all three algorithms , in order to make fair comparison . *_ average running time _ reflects how much time an algorithm spends in average to determine a solution ( or rejection ) of each incoming request .we developed a c++-based simulator to evaluate our proposed algorithms .the substrate was simulated as a -layer -ary tree , including the pms .each pm has vm slots , and pms are connected to a top - of - rack ( tor ) switch each via a link .tor switches are connected to one aggregation switch , and aggregation switches to the core , both via links .we conducted experiments in two scenarios : the static scenario and the dynamic scenario . in the static scenario, we used the same network information and the same tenant request in each experiment ; hence no resource was reserved after the acceptance of a request . to simulate realistic network states ,we randomly generated load on pms and links . 
specifically , given a load factor , we randomly occupied a fraction of the vm slots on each pm and bandwidth on each link , according to a normal distribution with mean of and standard deviation of .we then randomly generated tenant requests each requesting vms and per - vm bandwidth on average with a normal distribution , and tested each of them on the network with random load .in the dynamic scenario , we generated randomly arriving tenant requests , and embedded them in the initially unoccupied network in an online manner . in each experiment requests were generated , which arrive in a poisson process with mean arrival interval of and mean lifetime of .each request asks for vms and per - vm bandwidth on average , generated with a normal distribution .resources were reserved after the acceptance of a request , hence existing vcs in the system would have impact on the embedding of future incoming vcs .each experiment was repeated for 20 times in the same setting , and the results were averaged over all runs . in both scenarios , we varied one system parameter in each series of experiments , while keeping other parameters as default .experiments were run on a ubuntu linux pc with quad - core 3.4ghz cpu and 16 gb memory .[ fig : load ] shows the acceptance ratio and the vm consumption ratio with increasing network load ( overall bandwidth consumption is similar to fig .[ fig : load ] and is not shown due to page limit ) .we observed that the opt algorithm outperforms both heu and sbs in terms of both number of requests accepted and the per - request vm consumption ratio , due to its optimality . on the other hand ,[ fig : load ] shows that the heu algorithm performs less preferably than the sbs baseline in terms of acceptance ratio , when the network is loaded .further analysis reveals that heu commonly requires more bandwidth from upper layer links , which are heavily congested by the random load , hence heu s acceptance ratio is affected .however , as can be observed in fig .[ fig : load ] , heu consumes much less vms than sbs per accepted request . due to this , it is more likely for heu to receive better performance when employed as an online scheduler , due to its capability in conserving cloud resources .as will be shown next , heu indeed outperforms sbs greatly in the dynamic experiments .sbs always consumes the vms requested , as it provisions an entire duplicate of the primary vms . per - requestvm consumption increases slightly with the increasing load , due to that it is harder to find a survivable embedding with few backup vms when the network is short of bandwidth .[ fig : vms ] shows the experiment results with varying average number of requested vms per tenant request .opt obviously achieves the best acceptance ratio in all scenarios , while heu s acceptance ratio is only slightly lower than opt .both algorithms have much higher acceptance ratio compared to the sbs baseline , due to their capability to conserve vm ( and bandwidth ) resources per tenant request .meanwhile , heu has much shorter running time than opt in all cases due to its low time complexity , and is only a little worse than sbs in most scenarios . 
as each tenant asking for more vms , acceptance ratio drops while running time increasesthis is due to that the running time of all algorithms are related to the per - tenant request size .[ fig : dmd ] shows the experiment results with varying average per - vm bandwidth .the acceptance ratio results show similar pattern as in fig .[ fig : vms ] , where opt performs the best , heu performs slightly worse , and sbs performs much worse compared to the former two .acceptance ratio drops as per - vm bandwidth increases . as for running time, clearly heu and sbs are both much better than opt due to their low complexity . unlike fig .[ fig : vms ] , running time drops as per - vm bandwidth increases .it has two reasons .first , the worst - case time complexity of each algorithm is not related to the per - vm bandwidth .second , as per - vm bandwidth increases , the search spaces decrease due to more consumed network resources . in the last set of experiments, we varied the network size .each topology is a -level -ary tree , which has pms , and switches .we varied from to . with a larger network size ( and thus more vms and bandwidth ) ,the acceptance ratio of all algorithms increases in fig . [ fig : ary ] . opt and heu both outperform sbs due to resource conservation .the running time of all algorithms increase with the tree ary number in fig .[ fig : ary ] .as the network itself grows linearly in the logarithmic scale , all algorithms show linear or nearly linear running time growth .we summarize our findings as follows : 1 .opt guarantees per - request optimality , and has the best performance in both static and dynamic scenarios ; heu shows low acceptance ratio in the static case , but has much higher acceptance ratio in the dynamic case due to resource conservation ; sbs consumes too much resources and hence performs the worst in the dynamic case .2 . compared to opt , heu has much better time efficiency , which is a great advantage in practice ; however , opt is still important when 1 ) tenant requests are small in general , 2 ) cloud resources are very scarce , or 3 ) future researches along the same line need to compare with a theoretically ( per - request ) optimal solution .* resource optimization : * our current solutions focus on minimizing the number of backup vms .however , they can be extended to other objectives , such as minimization of total bandwidth .specifically , instead of the minimum number of vms , the minimum bandwidth to achieve a specific pair is computed for each node .the aggregation process incorporates the bandwidth consumed both in lower levels and in the current level .we omit more details due to page limit .* simultaneous pm failures : * our proposed algorithms protect from any single pm failure in the substrate .they can be extended to cover multiple simultaneous pm failures , at the cost of exponentially increased time complexity regarding the number of failures to be covered .specifically , the extension involves adding dimensions into the dynamic programming , where is the number of covered simultaneous failures . as our future work , more efficient algorithms for covering multiple simultaneous pm failures are to be developed .-ary fattree ( top).,scaledwidth=48.0% ] * data center topologies : * as aforementioned , our solutions can be applied to generic tree - like topologies with simple abstractions . 
to support our argument ,an example is shown for the widely adopted fattree topology in fig .[ fig : fattree ] .a -ary fattree topology is shown on the top , which is then abstracted as the virtual tree topology on the bottom .switches or links connected to the same set of lower layer nodes are aggregated into a single abstract switch or link ; link capacities are also aggregated .as the majority of data center traffic consists of small flows , we can assume arbitrary splitting of traffic between different vm pairs ; hence any bandwidth allocation feasible on the aggregated topology can be successfully configured on the original topology as well .other topologies feasible for adopting such abstraction include vl2 and other multi - rooted tree - based topologies .survivable vce for more general data center topologies is among our future directions .in this paper , we studied survivable vc embedding with hose model bandwidth guarantee .we formally defined the problem of minimizing vm consumption for providing survivability guarantee . to solve the problem , we proposed a novel dynamic programming - based algorithm , with worst - case time complexity polynomial in the network size and the number of requested vms .we proved the optimality of our algorithm and analyzed its time complexity .we also proposed an efficient heuristic algorithm , which is several orders faster than the optimal algorithm .simulation results show that both proposed algorithms can achieve much higher acceptance ratio compared to the baseline solution in the online scenario , and our heuristic algorithm can achieve similar performance as the optimal with much faster computational speed .p. yalagandula , s. nath , h. yu , p.b. gibbons , and s. seshan , `` beyond availability : towards a deeper understanding of machine failure characteristics in large distributed systems , '' in _ usenix worlds _ , 2004 .yeow , c. westphal , and u. c. kozat , `` designing and embedding reliable virtual infrastructures , '' in _ acm visa _ , 2010 .w. zhang , g. xue , j. tang , and k. thulasiraman , `` faster algorithms for construction of recovery trees enhancing qop and qos , '' _ ieee / acm trans ._ , 16(3 ) : 642655 , 2008 .q. zhang , m. f. zhani , m. jabri , and r. boutaba , `` venice : reliable virtual data center embedding in clouds , '' in _ ieee infocom _ , 2014 . preliminarily , according to eq ., a subtree offering at most working vms can reduce its working vms without increasing bandwidth on its out - bound link , and since such subtree only has child offering at most working vms , such reduction does not increase link bandwidth within the subtree as well .first , as can offer vms , its bandwidth allocation satisfies . hence offering or less working vms will not increase bandwidth on s out - bound link .if is a pm , then we can directly reduce the number of working vms on from to . then , assume the lemma holds for all nodes lower than level in the tree , and let be a switch on level .if for any child of , the number of working vms that it offers , then we can reduce the number of working vms on any pm in without increasing bandwidth needed on any link . now if some child of has working vms . 
since , there is at most one such child .first , we reduce working vms in all other child subtrees to none .then , by induction from the -th level , the number of working vms on can be reduced from to , which is no less than , without increasing bandwidth on any link .now , since , we can reduce the number of working vms in without increasing bandwidth on any link , which completes the proof . to start with, this always holds for any pm node by simply reducing the number of vms .if subtree can offer more than working vms , each child subtree of is one of the three cases : 1 ) can offer more than working vms , 2 ) can offer working vms in ] is dependent on the four possible values computed in the table as in eq . .based on definition , if ] is correct in applying the bandwidth bound of based on eq . .as for ] and ] respectively , where is the -th child of node .the corresponding optimal vm allocation to the problem of ] and ] is not optimal .this means that \le n_v ' [ n_0^ * , n_1^ * , k-1 ] + n_{u_k } [ n_0^ { * * } , n_1^{**}] ] or ] or ] . now contains vms , as both the first components and the -th component contain less than vms . for node where , let be the number of vms added before , then its out - bound link has bandwidth allocation .hence each link is sufficient to offer all vms in the first components , for .then we form a new tree by pruning all components after , and assigning as the root of .every link in is either within some component , or one of the links , where , hence its bandwidth is sufficient as above .now , can offer working vms , hence by lemma [ l:2 ] it can offer exactly working vms . therefore , there is a feasible vce for in the given vce of in any scenario , and the lemma follows .
|
cloud computing has emerged as a powerful and elastic platform for internet service hosting, yet it also draws concerns about the unpredictable performance of cloud-based services due to network congestion. to offer predictable performance, the virtual cluster abstraction of cloud services has been proposed, which enables allocation and performance isolation regarding both computing resources and network bandwidth in a simplified virtual network model. one issue arising in virtual cluster allocation is the survivability of tenant services against physical failures. existing works have studied virtual cluster backup provisioning with fixed primary embeddings, but have not considered the impact of primary embeddings on backup resource consumption. to address this issue, in this paper we study how to embed virtual clusters survivably in the cloud data center, by jointly optimizing primary and backup embeddings of the virtual clusters. we formally define the survivable virtual cluster embedding problem. we then propose a novel algorithm, which computes the most resource-efficient embedding given a tenant request. since the optimal algorithm has high time complexity, we further propose a faster heuristic algorithm, which is several orders of magnitude faster than the optimal solution, yet able to achieve similar performance. besides theoretical analysis, we evaluate our algorithms via extensive simulations. keywords: virtual cluster, survivability, bandwidth guarantee
|
microtubules exhibit an intrinsic property whereby they switch between states of growth and shrinkage constantly . in the growing state , tubulin dimeric units with tubulin carrying rapidly hydrolyzable gtp are added to the tip , thereby increasing the microtubule length . since, microtubule lattice with gtp -bound tubulin is more stable than the one with gdp ( guanosine diphosphate)-bound tubulin , hydrolysis leaves the microtubule unstable and eventually causes depolymerization of polymer . in the depolymerizing state ,the hydrolyzed dimeric units are lost from the tip and results in shrinkage of microtubule .thus , a given microtubule in a population would appear to be in either growing state or shrinking state , with alternate transitions between these two states .a third state called pause " has been observed both _ in vivo _ and _ in vitro _, where a microtubule neither grows or shrinks .the gtp cap theory successfully explained the stochastic nature of microtubule dynamics , according to which a growing filament is characterized by the presence of a cap of gtp tubulin at its tip .the filament will keep growing as long as this cap is intact , even if most of the tubulin in the interior is in gdp bound , hydrolyzed state . upon the loss of this temporary gtp cap by spontaneous and irreversible hydrolysis ,the gdp rich region becomes exposed and the polymer undergoes depolymerization .this transition from a growing state to shrinking state is known as _catastrophe_. the reverse could happen , when the cap reappears at the tip either by addition of gtp - bound dimers or by exposure of a gtp remnant within , which leads to a transition from shrinking state to growing state :this is known as _rescue_. this collective dynamic behavior of a microtubule , consisting of alternating catastrophe and rescue events is known as _ dynamic instability _ .a thorough understanding of dynamic instability , which is the highlight of microtubule dynamics , is a key to understand microtubule dependent functions in biological systems .the gtp cap theory itself does not specify the structure of the cap itself . as the microtubule itself consists of a number ( usually 13 ) of protofilaments arranged in parallel , it is conceivable that some of the protofilaments will be gtp - tipped , while some will not be .it is a matter of debate as to how many protofilaments are required to be gtp - tipped so as to define a cap ; all 13 or less ? and even in the former case, the gtp region at the tip will have , in general , variable lengths across different protofilaments .how do we define the length of the cap , in such a situation ? _ in vitro _experimental studies performed using the slowly hydrolyzable gtp analogue gmpcpp by caplow et al . have estimated the size of the gtp cap necessary to stabilize the polymer and it has been shown that a single layer of gtp tubulin at the tip is sufficient . 
later on , the experimental observations at nanoscale resolution by schek et al .suggest that the cap consists of multilayer of gtp tubulin with an exponentially distributed multilayer gtp cap .but _ in vivo _ , the exact definition of cap that explains whether it requires capping of all the thirteen protofilaments or not remains uncertain .therefore , a more quantitatively precise characterization of the gtp cap theory and its implications for microtubule dynamics requires development of physical and mathematical models based on it , and comparison of the predictions of such models with experimental observations .a large variety of approaches have been adopted to tackle the problem of microtubule dynamics based on the gtp cap theory , which differ essentially in the level of molecular details included in the respective models . as recently discussed by margolin et al . , the results from all models , by parameter tuning , appear to agree with available experimental observations irrespective of the details of microtubule structure included , and irrespective of the differences in mathematical expressions for ( primarily ) the frequency of catastrophe or related quantities .we feel that a comparative study of at least some of the models , from a common starting point , is desirable , and this is one of the objectives of this paper .we chose the complete stochastic model , first proposed by flyvbjerg et al . as this starting point , as this model appears to be fairly successful in reproducing many ob served features of microtubule catastrophe , from a purely kinetic point of view. however , almost all the mathematical models , including the original fhl model , are effective one - dimensional models where the microtubule is approximated to a linear polymer .they also differ in some important details : whereas some models neglect rescue altogether , some others ignore vectorial hydrolysis ; dissociation of tubulin dimers from the filament in the growing state is included in some models , but not in others . as a result ,the predictions are apparently different even across models which share a lot of similarities in their basic assumptions ; therefore , we feel that a fresh study of the stochastic model is not untimely .flyvbjerg , holy and leibler ( henceforth referred to as fhl ) formulated an effective stochastic model of microtubule catastrophes , based on the gtp cap theory . in this model ,gtp bound tubulin polymerizes to form a one dimensional filament , and may undergo hydrolysis anywhere in the polymer .two types of hydrolysis processes were considered , spontaneous and induced ( vectorial ) . while the former occurs in a gtp - bound tubulin independent of the nucleotide state of the neighbors , in the latter case , a gdp tubulin was assumed to enhance the rate of hydrolysis in a gtp - neighbor .the model made a number of predictions which compared favorably with experimental observations .in particular , the model predicted that the catastrophe frequency is a decreasing function of the growth velocity , and asymptotically approaches a small , constant value as .the latter feature was a surprising , even counter - intuitive prediction , which we will show later to be an art ifact of the continuum theory used by fhl .the fhl model inspired a number of later studies , which attempted to go beyond its limitations , notably the papers by antal et al . 
and padinhateeri et al .while the latter focused on frequencies of catastrophe and rescue , the former also studied statistical characteristics such as the length distributions of the gtp cap as well as the interior islands of t - mers and d - mers .similar studies have also been carried out in the context of actin filaments .we shall also give a comparison of our results with the studies by antal et al . and padinhateeri et al . in one of the later sections .the work on this paper was started with the objective of extending the fhl model in such a way so as to base it on the dynamics of individual protofilaments , and therefore to formulate it as a complete three - dimensional model of catastrophe .in doing so , we are also freed from the necessity of taking a spatially continuum approach in describing the hydrolysis process : as each protofilament is a single polymer with monomer molecules arranged linearly , a one - dimensional discrete formalism is easily implemented for the dynamics of each . in this way , local catastrophe events are defined for each protofilament ( corresponding to the loss of the gtp - tip for each ) the frequency of which is determined precisely under steady state conditions .the global / microtubule catastrophe is defined as an event that pertains to the entire microtubule , and whose onset was defined phenomenologically in terms of the number of individual protofilaments that underwent local catastrophes .this approach gives us enough flexibility with the definition of global catastrophes , so that its dynamics ( the time scale of which is much larger than the local catastrophes in each protofilament ) as well as the steady state value could be studied in detail , and compared with experiments .indeed , several experiments have highlighted the age dependence of microtubule catastrophe frequency ; in general , following nucleation or rescue , catastrophe frequency is found to increase with time and saturate at a steady state value . by comparing results from our numerical simulations with recent experimental observations , we predict that the microtubule catastrophe requires at least 2 - 3 protofilaments to have lost their gtp tips .a single protofilament in our model is a linear polymer of tubulin molecules , starting with one gtp - bound tubulin ( symbolically denoted ` t ' , and henceforth referred to as a t - mer ) .further incoming t - mers are added to the protofilament at a rate if it is t - tipped , and at a rate if it is d - tipped .the rate of spontaneous hydrolysis is denoted and the rate of vectorial / induced hydrolysis is denoted .this means that in a .... ttt ... configuration , the middle t becomes d at a rate , whereas in a .... dtt .. or ...dtd ... configuration , the middle t becomes d at a rate .we may further allow for the possibility that a t - mer may detach from the tip at a rate .all these possible transitions with respective rates are schematically shown in fig 1 .when the last t with a d neighbour is also lost by hydrolysis or detachment from the tip , we define the protofilament to undergo a ` local catastrophe ' . the rate at which the transition fr om growing phase to shrinking phase occurs is denoted by , which we refer to as the frequency of ` protofilament catastrophe ' .analogously , may be defined as the ` protofilament rescue ' , which we regard as a time independent constant throughout this work . 
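since the rate symbols of the model were lost in the extraction, the sketch below fixes a concrete notation of its own: k_g for attachment of a t-mer to a t-tipped protofilament, k_h for induced (vectorial) hydrolysis at the cap boundary (into which, as noted in the next paragraph, dissociation of the tip t-mer can be folded), r for spontaneous hydrolysis per cap t-mer, and nu_r for rescue of a d-tipped protofilament. with the cap length as the only state variable, the model can be simulated with a standard kinetic monte carlo (gillespie) scheme; this is a minimal illustration, not the authors' code, and the parameter values are arbitrary.

```python
import random

def simulate_cap(k_g, k_h, r, nu_r, t_max, seed=1):
    """Kinetic Monte Carlo of the cap length n of one protofilament.
    Events: growth n->n+1 (k_g), boundary hydrolysis or tip loss n->n-1 (k_h),
    spontaneous hydrolysis of any of the n cap T-mers (rate r each, cutting the
    cap back to the units above the hydrolysed one), rescue 0->1 (nu_r)."""
    rng = random.Random(seed)
    t, n = 0.0, 1
    catastrophes, growing_time = 0, 0.0
    while t < t_max:
        if n == 0:
            t += rng.expovariate(nu_r)         # wait for rescue of the empty cap
            n = 1
            continue
        rate = k_g + k_h + n * r
        dt = rng.expovariate(rate)
        t += dt
        growing_time += dt
        u = rng.random() * rate
        if u < k_g:
            n += 1                             # attach a T-mer at the tip
        elif u < k_g + k_h:
            n -= 1                             # induced hydrolysis or tip loss
        else:
            n = rng.randrange(n)               # spontaneous cut: keep units above it
        if n == 0:
            catastrophes += 1                  # local (protofilament) catastrophe
    return catastrophes / growing_time

if __name__ == "__main__":
    for k_g in (2.0, 5.0, 10.0, 20.0):         # illustrative rates, arbitrary units
        nu = simulate_cap(k_g=k_g, k_h=1.0, r=0.01, nu_r=1.0, t_max=2e5)
        print(f"k_g={k_g:5.1f}  nu_cat~{nu:.4f}")
```

the printed estimate is the number of local catastrophes divided by the time spent with a non-empty cap, which mirrors the frequency definition used in the next section.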
of these parameters , the rates and are independent parameters which may be regarded as constants , while needs to be computed in terms of these , and is , in general , a time dependent quantity .the complete microtubule may be imagined as a set of 13 such protofilaments arranged side by side ( fig 2 ) .our approach in this paper is essentially kinetic in nature , and we do not propose to undertake a detailed treatment of the energetics in the problem , which has been carried out by several authors .for this reason , we do not consider explicitly the energy of interaction between protofilaments or the bending energy of individual protofilaments .hence the cylindrical structure of the entire microtubule filament is irrelevant in our model , where the microtubule ` lattice ' has a flat geometry , similar to the model studied in ref . . in order to define a catastrophe event for the entire filament , we adopt the following phenomenological definition .we conjecture that when a certain minimum number of individual protofilaments have undergone their individual catastrophes , the entire filament becomes energetically unstable and enters the shrinking phase .therefore , this ` first passage ' event where protofilaments have already undergone catastrophes by time ( and not rescued until the time instant ) while the one undergoes catastrophe at time defines a catastrophe event for the filament at time , and denote it by .one of our objectives of this work is to estimate the number of protofilaments that needs to undergo local catastrophe to produce a catastrophe of the filament .let the probability that a protofilament has a gtp cap consisting of t - mers at time . evolves in time either by addition or loss of monomers .monomers are added to the tip of the cap at a rate .the size of the cap becomes smaller by hydrolysis of gtp to gdp at the inner boundary at rate , or anywhere within the cap at rate or by detachment of t - mer from the tip at rate . since the effect of detachment process is the same as that of vectorial hydrolysis , we may simply effect a replacement in the equations to take this into account .we assume next that a protofilament in which the cap length has shrunk to zero would undergo rescue at a rate , and that the rescued protofilaments continue to grow by successive addition of monomers , until it encounters the next catastrophe event .hence , to begin with , we consider an ensemble of protofilaments such that , at , a protofilament possess a cap with subunits .the cap dynamics can be summarized into the following master equation : the distribution is normalized : at all times , consistent with the above equation . the third and fourth term containing in the equation for ( )can be explained respectively as follows .protofilaments with t - mers at the tip can switch to a state with tip consisting of less than t - mers caused by the spontaneous hydrolysis , which cuts the gtp cap in to two regions of t - mers separated by a d - mer . by the reverse process , a protofilament of cap length can be generated from one with a cap of length larger than . in this waythe spontaneous hydrolysis accelerates the stochastic switching between growing and shrinking states .we now define a catastrophe event for a protofilament , and derive an expression for the frequency of occurrence of the same . 
in conformity with the definition given in the last section, we define the protofilament catastrophe frequency as

where the numerator gives the fraction of protofilaments undergoing catastrophe in the interval while the denominator gives the fraction that is in the growing state (i.e., with a gtp-tip of non-zero length) at time . the upper bound in eq.[eq:eq2] follows from normalization. the occurrence of the non-local terms in the sums in eq.[eq:eq1] means that finding a general solution to this equation is likely to be cumbersome. however, we note that when , the set of equations in eq.[eq:eq1] describes a (discrete) random walk in one dimension with a boundary condition at , which can be exactly solved. it is therefore logical to use a perturbative method, where the distribution is expanded in powers of . also, it can be shown by scaling arguments (see subsection e later) that in the asymptotic regime , the leading term in the expression for is indeed the first order perturbation term in . therefore, we now start with the expansion . the zeroth order term, which describes the dynamics without spontaneous hydrolysis, satisfies the following equations: while the first order correction satisfies the equations

\begin{aligned}
\frac{dp_n^{(1)}(t)}{dt} &= k_h\left[p_{n+1}^{(1)}(t)-p_{n}^{(1)}(t)\right] + k_g\left[p_{n-1}^{(1)}(t)-p_{n}^{(1)}(t)\right] - n\, p_{n}^{(0)}(t) + \sum_{m=n+1}^{\infty} p_{m}^{(0)}(t), \quad n \geq 2, \\
\frac{dp_1^{(1)}(t)}{dt} &= k_h\left[p_{2}^{(1)}(t)-p_{1}^{(1)}(t)\right] - k_g\, p_{1}^{(1)}(t) + \nu_r^{\prime}\, p_0^{(1)}(t) - p_{1}^{(0)}(t) + \sum_{m=2}^{\infty} p_{m}^{(0)}(t), \\
\frac{dp_0^{(1)}(t)}{dt} &= k_h\, p_{1}^{(1)}(t) - \nu_r^{\prime}\, p_0^{(1)}(t) + \sum_{m=1}^{\infty} p_{m}^{(0)}(t). \label{eq:eq5}
\end{aligned}

further, normalization needs to be satisfied for all , which requires that while for . in order to solve for , we use the generating function, defined as with the inversion formula . in the above expression, the contour is taken as a circle of infinitesimal radius centered at the origin. the generating function itself has the perturbation theory expansion . naturally, the inversion formula in eq.[eq:eq7] also applies to each order in : using the power-series expansion in eq.[eq:eq3], the catastrophe frequency in eq.[eq:eq2] may be expanded in the form in order to calculate the protofilament catastrophe using eq.[eq:eq9], we first solve eqs. (4)-(5) by defining the laplace transforms. the general expressions for with (relevant to eq.[eq:eq9]) as well as the relations between them are given in the appendix. we will first calculate the protofilament catastrophes in the steady state limit under different regimes, which are conveniently classified as below: when the protofilament rescue rate , the system reaches a complete steady state, where all the probabilities become independent of time in the long-time limit, and so does the catastrophe frequency.
here , we study the cases and separately ( a demarcation warranted by perturbation theory , but seemingly artificial , since numerical solution of the master equation shows that the catastrophe frequency varies continuously across , see fig.[fig : fig2 ] later ) .when , on the other hand , the state becomes absorbing and hence none of the probabilities has a non - zero steady state value .however , even in this case , the catastrophe frequency is found to have a well - defined non - zero steady state value , which is different from the previous case .these cases are treated in subsections a and b respectively .finally , it is seen that the special case can be solved exactly for both and , see subsection d. in order to find the steady state value of , the steady state values of all dynamical quantities appearing in eq.9 are found in the limit .we denote the steady state values of and by the same symbols , but without the -dependence . using laplace transforms ,these limits may be defined as .if the limit turns out to be zero , the steady state value is zero ( i.e. , the corresponding dynamical quantity vanishes as and a more careful treatment will be needed to understand the behavior , as is required when ) . given that calculations involved are somewhat lengthy , we only give a summary of our final results in the main text , while the mathematical details are presented in the supplemental material to the paper .the steady state protofilament catastrophe frequency takes the form interestingly , we note that the expression in eq.[eq : eq13 ] differs from the corresponding expression for the microtubule catastrophe frequency in the one - dimensional effective continuum fhl model , which ( in our notation ) is given by where is the spontaneous hydrolysis rate per unit length . a brief derivation of the above result , under perturbation theory, is given in the supplemental material .asymptotically , while the continuum model predicts that as , the discrete model predicts that vanishes in this limit as .both the continuum and discrete expressions diverge at , however , this divergence is not real . a scaling argument ( see later ) shows that , as we approach the point , the higher order terms in begin to be important , and are also possibly divergent as .if such terms have alternating signs , the complete function may well be convergent and well - behaved at the point .indeed , numerical solution confirm this argument .we next consider the case of small growth velocities . in this case ,detailed calculations show that the steady state catastrophe frequency takes the form unlike the previous case , here , the catastrophe frequency has a non - vanishing zeroth order term , which decreases linearly with and vanishes at . the first order term ,however , again diverges at ( which should be interpreted with the same reservations as previously ) .it is interesting to note that is dependent on the protofilament rescue frequency , and as , the following limiting value is reached : as , , which is its maximum value ( see eq.[eq : eq2 ] , and also eq.[eq : eq23 ] , later ) . 
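the explicit master equation (eq. (1)) did not survive the extraction, so the sketch below works with our reconstruction of it from the verbal description of its terms and from the structure of the first-order equations above, using the same notation as in the simulation sketch of the previous section (k_g, k_h, r, nu_r). it builds the generator of the truncated cap-length process, solves for the stationary distribution, and evaluates the catastrophe frequency as the stationary probability current into the empty-cap state divided by the probability of a non-empty cap, which is our reading of eq. (2). this gives a quick numerical check of how the steady-state frequency decays with the growth rate, to be compared against the asymptotic expressions derived above.

```python
import numpy as np

def nu_cat_steady(k_g, k_h, r, nu_r, n_max=600):
    """Stationary catastrophe frequency of a single protofilament, obtained by
    solving the (truncated) cap master equation reconstructed in the lead-in.
    n_max must be well above the typical cap length, roughly sqrt(2*k_g/r)."""
    Q = np.zeros((n_max + 1, n_max + 1))       # generator, columns = source states
    Q[1, 0], Q[0, 0] = nu_r, -nu_r             # rescue of an empty cap
    for n in range(1, n_max + 1):
        grow = k_g if n < n_max else 0.0       # reflecting upper boundary
        Q[n, n] = -(grow + k_h + n * r)
        if n < n_max:
            Q[n + 1, n] = grow                 # T-mer attachment
        Q[n - 1, n] += k_h                     # induced hydrolysis or tip loss
        Q[:n, n] += r                          # spontaneous cut to any state j < n
    # stationary distribution: Q p = 0 with sum(p) = 1
    A = np.vstack([Q, np.ones(n_max + 1)])
    b = np.zeros(n_max + 2)
    b[-1] = 1.0
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    flux_to_zero = k_h * p[1] + r * (1.0 - p[0])   # probability current into n = 0
    return flux_to_zero / (1.0 - p[0])

if __name__ == "__main__":
    for k_g in (2.0, 4.0, 8.0, 16.0, 32.0):        # illustrative rates
        print(f"k_g={k_g:5.1f}  nu_cat={nu_cat_steady(k_g, 1.0, 0.01, 1.0):.5f}")
```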
here, we consider the situation where the protofilament rescue rate is strictly zero .this ` non - steady state ' situation is the case studied in many theoretical models , including .the case of zero rescue needs to be treated with caution , as , strictly speaking , the only steady state possible is and for .however , careful calculations using the perturbative approach shows that the catastrophe frequency does reach a steady state , and the expression turns out to be note that eq.[eq : eq17 ] is different from the expression in eq.[eq : eq13 ] , and its asymptotic form is as .the singularity at is again an artifact of the perturbation theory .the general solution for turned out to require a lengthy calculation , and hence was not pursued ; rather , we found it instructive to look at the extreme case of vanishing growth rate , i.e. , .when , if the initial condition is for and , the condition of zero growth then guarantees that for at all times . we may further assume that the steady state is independent of the initial value ; therefore , it may be obtained by solving eq.[eq : eq1 ] exactly with a small value of ; here we chose .the exact time - dependent solutions for the relevant probabilities in this case are given as which yields , using eq.[eq : eq2 ] , exactly .in fact , this technique may be used for the case also , at the point . in this case , we arrive at the following exact steady state expressions for and , with the same initial condition : which then yield the exact result thus , the extremal value for is the same for and , but their asymptotic behavior at large differ by a factor of 2 . the same result given by eq.[eq : eq23 ] can be obtained for etc , but the calculations become lengthier as increases . finally , we consider the case where the term is absent from eq.[eq : eq1 ] ; this implies that there is no vectorial hydrolysis in the model , and no dissociation of t - mers from the protofilament prior to catastrophe .this case has been studied by several authors in recent times ( note that in ref . , vectorial hydrolysis is deemed absent , but t - mer dissociation is included ) .in this situation , it is possible to obtain exact solutions to eq.[eq : eq1 ] , and hence can be computed for arbitrary .as earlier , we consider the cases and separately .a steady state is possible in this case , as can be obviously verified by putting the time derivatives in eq.[eq : eq1 ] to zero .the steady state values of and then turn out to be upon substituting eq.[eq : eq24 ] into eq.[eq : eq2 ] , we find that unlike all other cases studied so far , the steady state catastrophe rate is in the perturbation series , and vanishes as as .one can not fail to note the surprising similarity with the asymptotic decay of the expression in eq.[eq : eq13 ] ; the expressions become identical when is replaced with in eq.[eq : eq13 ] . in this case , as , while all for .therefore , we need to consider the time evolution of the probabilities explicitly . with initial conditions and , the time - dependence of and , as found from eq.[eq : eq1 ] are given by , \label{eq : eq26}\ ] ] which , after substitution in eq.[eq : eq2 ] gives as the long - time limit of the catastrophe frequency . 
as we noticed in the comparison between eq.[eq : eq25 ] and eq.[eq : eq13 ] , the asymptotic value is similar to that of eq.[eq : eq17 ] , when the replacement is done .although the steady state expressions for in eq.[eq : eq13 ] or eq.[eq : eq25 ] do not depend on , they are , nevertheless , different from the corresponding expressions in eq.[eq : eq17 ] and eq.[eq : eq27 ] . to our knowledge, this difference has not been noted earlier .the results from perturbation theory discussed so far are significant in another way ; scaling arguments show that the leading steady state term in the perturbation theory expansion of also gives the leading asymptotic behavior of in the limit .we first note that , in eq.[eq : eq1 ] , it is possible to define a dimensionless time by dividing the entire equation by , whence , while , where the rate could be or .we then expect that the catastrophe frequency has the scaling form and according to our original assumption , we expect that the scaling function admits a power - series expansion of the form from eq.[eq : eq13 ] and eq.[eq : eq17 ] we observe that , when , with , the first term in the above equation vanishes .we also observe that the first derivative term is singular at but well - behaved otherwise ( in particular , when ) , and we may expect that this is true for the subsequent derivatives too ( considering the upper bound in eq.[eq : eq2 ] ) . therefore , in the limit , the expression in eq.[eq : eqxx1 ] takes the form , \label{eq : eqxx3}\ ] ] which means that the term is the leading asymptotic term . now, what happens if or strictly ? in the first case , the asymptotic structure of eq.[eq : eqxx3 ] appears to be retained , but with a different function in eq.[eq : eqxx2 ] ( compare eq.[eq : eq13 ] and eq.[eq : eq17 ] ) .however , if , the first derivative term vanishes , and the second derivative term becomes the leading term and hence ( eq.[eq : eq25 ] and eq.[eq : eq27 ] ) .the preceding analysis shows that some caution is required when experimental data is used to infer about the existence or non - existence of vectorial hydrolysis .the asymptotic behavior of in the large limit is similar in both cases , and can not be used to distinguish between them .further , a nonzero rate of dissociation of t - mers from the protofilament has an effect identical to that of vectorial hydrolysis . as a second important observation, we note that protofilament rescue events ( here , the incorporation of a t - mer to a d - tipped protofilament ) are important in determining the frequency of catastrophe . our analysis has shown that , in all cases , is independent of the precise value of , but depends on whether is strictly zero or not .we now combine the results in eq.[eq : eq13 ] , eq.[eq : eq17 ] , eq.[eq : eq25 ] , eq.[eq : eq27 ] with the scaling argument in eq.[eq : eqxx3 ] to arrive at the following universal asymptotic form for the protofilament catastrophe in the limit : for easy reference , the values taken by the constant in different situations are summarized in table [ tab : tab1 ] ..the table lists the value taken by the parameter ( eq.[eq : eqxx4 ] ) under various conditions on and . [ cols="^,^,^",options="header " , ]in the present paper , we have studied the stochastic model of gtp cap dynamics , introduced by flyvbjerg , holy and leibler ( fhl ) . 
both spontaneous and vectorialhydrolysis have been included in the model , partly because the effect of the latter in the associated dynamical equation is the same as that of a dissociation term for a filament - incorporated gtp - dimer .unlike the fhl study , we employ a discrete formalism here , which is more appropriate for individual protofilaments .we do not make use of an effective one - dimensional picture of a microtubule unlike many previous authors ; rather , we define events of catastrophe and rescue for each protofilament , which are then related to microtubule catastrophe via first passage concepts .the protofilament catastrophe and rescue events are defined analogous to the corresponding events in an entire microtubule filament ; catastrophe here refers to the loss of a gtp - dimer tip in the protofilament , wherea s rescue refers to the addition of a gtp - dimer to a gdp - tipped protofilament .we find that even at the level of a single protofilament ( i.e. , a one - dimensional filament ) , the predictions of the model , in general , differ from the predictions of the corresponding continuum model of fhl .we also considered both steady state ( protofilament rescue present ) and non - steady state ( protofilament rescue absent ) situations ; the distinction between these has never been clearly addressed in the literature ( for example , in the fhl model , the loss of gtp cap is an absorbing state which can not be rescued whereas in the models used in , a gtp - dimer may attach to a gdp - tipped filament with non - zero probability and ` rescue ' it ) .we show rigorously that while catastrophe frequency in the stochastic model is independent of the protofilament rescue rate , it depends , nevertheless , on whether steady state or non - steady state conditions are employed . in spite of these differences ,we show that the asymptotic behavior of protofilament catastrophe in the limit of large values of protofilament growth rate is simply , where the proportionality constant depends on the specific conditions used .this remarkable universal property perhaps partly explains why predictions of many different models have been found to fit well with available experimental data ( e.g. , ) .we also compared predictions of our model with the exact results in antal et al . , as well as mean - field model of padinhateeri et al . , and found that the mathematical results of the models agree in the asymptotic regime discussed above , under appropriate conditions .a comparison of our estimate of , with those arrived at by other authors is illuminating .we observe that while there is a reasonable agreement between the values predicted by purely kinetic studies ( except ref . ) , they differ significantly from models where the energetics of the microtubule filament is explicitly included , which typically predict much higher values of . 
however , since both types of models are seemingly able to demonstrate agreement with experiments , it would be interesting to know if the rate used in the kinetic models should be regarded as an effective parameter which hides some information about the energetics of binding between tubulin dimers , both within a protofilament and between protofilaments .we hope that our study will stimulate further investigations in this direction .our model also predicts that 2 - 3 protofilaments out of thirteen are required to lose the gtp cap to initialize a catastrophe event , based on an analysis of the ( microtubule ) age dependence of catastrophes , observed in experiments .it is likely that once such two shrinking protofilaments come side by side of a growing protofilament , this configuration can destabilize the middle one by the breaking lateral bonds , in this way making the lattice more unstable and finally forcing the entire microtubule into a shrinking state .this prediction agrees with the conclusions of other authors who have suggested , based on phenomenological arguments motivated by experimental data , that catastrophe is a multi - step process . in our view , the number of such steps precisely equals the number of protofilaments which need to become gdp - tipped in order for the microtubule catastrophe to be initiated .it is tempting to interpret this result in terms of the lateral bond energy o f protofilaments ; however , it must be borne in mind that the model does not require that these gdp - tipped protofilaments need to be adjacent to each other .clearly , this issue remains far from understood . also , the implications of the age dependence of microtubule catastrophes , both _ in vitro _ and _ in vivo _ is another aspect of the problem which seems worthy of further investigation . to summarize , the present model which treats a microtubule as 13 independent protofilaments , is fairly successful in predicting the time evolution and steady states of microtubule catastrophe frequency , for a range of growth rates .the highlight of this model is that it enables calculation of microtubule dynamic parameters starting from the dynamics of individual protofilaments , which , being a strictly one - dimensional problem , is more amenable to analytical treatments .this is not to argue that inter - protofilament interactions , neglected in the present model but included in several other studies , are unimportant ; it is just that the limited experimental data available at the moment does not seem to be sufficient to make a clear - cut distinction between these two broad categories of models .the lateral bonds are also likely to play an important role in the process of rescue , which seems much less sensitive to tubulin concentration compared to catastrophe ( see , e.g , ) , and much less understood from a modeling point of view . in _ in vivo _situations , microtubules grow in a confined environment and typically encounter obstacles to growth in the form of rigid or flexible barriers . in the context of chromosome capture and subsequent spindle formation during the mitotic phase , it is well - known that microtubules exert forces both when polymerizing and depolymerizing .the negative effects of polymerization force , when generated against a rigid barrier , on the growth velocity of microtubules was first shown experimentally by dogterom and yurke and subsequently studied theoretically by other authors(see , e.g. 
, ) .experiments have also shown that the catastrophe frequency is enhanced by the proximity of the microtubule tip to a barrier , both _ in vitro _ and _ in vivo _ , which is consistent with a reduced binding rate in the presence of force ( see , two recent theoretical studies of h ow microtubule dynamic instability is affected by force and confinement ) .interestingly , the 13-protofilament model used by the authors of is similar to the model studied in the second part of this paper .it should , therefore , be possible to extend the present model in a straightforward manner to include force - dependence of , in a protofilament - specific manner . the authors would like to acknowledge many fruitful discussions with the members of the complex fluids and biological physics group , department of physics , iit madras .we also acknowledge useful correspondence with d.n .drechsel and the authors of ref . regarding the experimental data in .100 a. desai and t. mitchison , annu .* 13 * , 83 ( 1997 ) .t. mitchison and m. kirschner , nature * 312 * , 232 ( 1984 ) .m. gardner , m. zanic and j. howard , curr .cell biol .* 25 * , 1 ( 2012 ) .caplow m. and shanks , mol .cell * 7 * , 663 ( 1996 ) .h. t. schek iii , m. k. gardner , j. cheng , d. j. odde and a. j. hunt , curr .* 17 * , 1445 ( 2007 ) . t. l. hill and y. chen , proc .usa * 81 * , 5772 ( 1984 ) .h. flyvbjerg , t. e. holy , and s. leibler , phys .* 73 * , 2372 ( 1994 ) ; phys .e * 54 * , 5538 ( 1996 ) .g. margolin , i. v. gregoretti , h. v. goodson , and m. s. alber , phys .e * 74 * , 041920 ( 2006 ) .t. antal , p. l. krapivsky , s. redner , m. mailman , and b. chakraborty , phys .e * 76 * , 041907 ( 2007 ) .r. padinhateeri , a. b. kolomeisky , and d. lacoste , biophys .j. * 102 * , 1274 ( 2012 ) .v. vanburen , d. j. odde , and l. cassimeris , proc .usa * 99 * , 6035 ( 2002 ) ; v. vanburen , l. cassimeris , and d. j. odde , biophys .j. * 89 * , 2911 ( 2005 ) .m. i. molodtsov , e. l. grishchuk , a. k. efremov , j. r. mcintosh , and f. i. ataullakhanov , proc .usa * 102 * , 4353 ( 2005 ) .b. m. piette et al . , plos one 4 , e6378 ( 2009 ) .l. brun , b. rupp , j. j. ward , and f. ndlec , proc .usa * 106 * , 21173 ( 2009 ) .g. margolin et al . ,cell * 23 * , 642 ( 2012 ) .x. li , r. lipowsky , and j. kierfeld , europhys. lett . * 89 * , 38010 ( 2010 ) .p. ranjith , k. mallick , j -f .joanny , and d. lacoste , biophys .j. * 98 * , 1418 ( 2010 ) .see supplemental material at [ url will be inserted by publisher ] for details of the calculations .d. t. gillespie , j. phys .chem . * 81 * , 2340 ( 1977 ) .wolfram research , inc ., mathematica , version 7.0 , champaign , il ( 2008 ) . in principle, rescue could also occur due to a -island inside the filament getting ` exposed ' after dissociation of all the preceding -mers ; in our work , we disregard this possibility like most other authors , see , however , the discussions in . d. n. drechsel , a. a. hyman , m. h. cobb , and m. w. kirschner , mol .cell * 3 * , 1141 ( 1992 ) .h. bowne - anderson , m. zanic , m. kauer , and j. howard , bioessays * 35 * , 452 ( 2013 ) .d. j. odde , l. cassimeris , and h. m. buettner , biophys .j. * 69 * , 796 ( 1995 ) .t. stepanova et al . , curr .* 20 * , 1023 ( 2010 ) .m. k. gardner , m. zanic , c. gell , v. bormuth , and j. howard , cell * 147 * , 1092 ( 2011 ) .r. a. walker , e. t. brien , n. k. pryer , m. f. soboeiro , w. a. voter , h. p. erickson , and e. d. salmon , j. cell biol . * 107 * , 1437 ( 1988 ) .m. e. janson , m. e. de dood and m. dogterom , j. 
cell biol . * 161 * , 1029 ( 2003 ) .m. dogterom and b. yurke , science * 278 * , 856 ( 1997 ) .a. mogilner and g. oster , eur .j. * 28 * , 235 ( 1999 ) .g. s. van doorn , c. tanase , b. m. mulder , and m. dogterom , eur .j. * 29 * , 2 ( 2000 ) .a. b. kolomeisky and m. e. fisher , biophys . j. * 80 * , 149 ( 2001 ) .j. krawczyk and j. kierfeld , europhys . lett .* 93 * , 28006 ( 2011 ) .d. r. drummond and r.a .cross , curr .* 10 * , 766 ( 2000 ) ; a. komarova , i. a. vorobjev , and g. g. borisy , j. cell sci . *115 * , 3527 ( 2002 ) ; d. foethke , t. makushok , d. brunner , and f. ndlec , mol .syst . biol . * 5 * , 241 ( 2009 ) ; c. tischer , d. brunner , and m. dogterom , mol . syst. biol . * 5 * , 250 ( 2009 ) .y. zhang , j. biol .chem . * 286 * , 39439 ( 2011 ) .b. zelinski , n. mller , and j. kierfeld , phys .e * 86 * , 041918 ( 2012 ) ._ zeroth order terms : _since we perform a perturbative expansion of probabilities in , and also is very small it is possible to retain terms up to first order in . in the calculations , we determine and independently , while the other required probabilities , especially and in steady state are determined using the former .the dynamical equation for is given by \phi^{(0)}(z , t)\nonumber \\-[k_h(\frac{1-z}{z})+(\nu_r^{\prime}-k_g)(1-z)]p_0^{(0)}(t ) .\label{eq : eqa1}\end{aligned}\ ] ] on solving equation eq.[eq : eqa1 ] using laplace transform technique with the initial condition that we get , (1-z)\tilde p_0^{(0)}(s)]}{[s - k_g(z-1)-k_h(\frac{(1-z)}{z } ) ] } \label{eq : eqa2}\ ] ] (1-z)\tilde p_0^{(0)}(s)-z^{n+1}}{k_g(z - z_1)(z - z_2 ) } , \label{eq : eqa3}\ ] ] where the constants and are given by \phi^{(1)}(z , t)-\frac{1}{z}[k_h+ ( \nu_r^{\prime}-k_g ) z](1-z)p_0^{(1)}(t)\\ -z \frac{\partial \phi^{(0)}(z , t)}{\partial z } + \sum_{i=0}^{\infty}z^i\sum_{m = i+1}^{\infty } p_{m}^{(0)}(t ) .~~~~~~~~~~~~~~~~~\nonumber \\ \label{eq : eqa9}\end{aligned}\ ] ] \bigg[\frac{(1-z_1^n)}{k_g^2(z_2-z_1)(1-z_1)^2}\nonumber \\-\frac{nz_1^n}{k_g^2(z_2-z_1)(1-z_1)}+\frac{z_1^{n+1}(k_h+(\nu_r^{\prime}-k_g)z_2)}{k_g^2(k_h+(\nu_r^{\prime}-k_g)z_1)(1-z_1)(z_2-z_1)^2}+ \frac{1}{k_g^2(z_2-z_1)(z_2 - 1)(1-z_1)}-\nonumber \\ -\frac{z_1^{n+1}}{k_g^2(z_2-z_1)^2(1-z_1)}-\frac{z_1^{n+2}}{k_g^2(z_1-z_2)^2(1-z_1)}-\frac{(n+1)z_1^{n+1}}{k_g^2(z_1-z_2)^2}+\frac{n(n+1)z_1^n}{2k_g^2(z_1-z_2)}+\nonumber \\ \frac{(\nu_r^{\prime}-k_g)z_1^{n+2}}{k_g^2(z_1-z_2)^2(k_h+(\nu_r^{\prime}-k_g)z_1)}+\frac{(\nu_r^{\prime}-k_g)z_1^{n+2}}{k_g^2(z_1-z_2)(1-z_1)(k_h+(\nu_r^{\prime}-k_g)z_1)}\bigg].\nonumber \\ \label{eq : eqb2}\end{aligned}\ ] ]
|
the disappearance of the guanosine triphosphate ( gtp)-tubulin cap is widely believed to be the forerunner event for the growth - shrinkage transition ( ` catastrophe ' ) in microtubule filaments in eukaryotic cells . we study a discrete version of a stochastic model of the gtp cap dynamics , originally proposed by flyvbjerg , holy and leibler ( flyvbjerg , holy and leibler , phys . rev . lett . * 73 * , 2372 , 1994 ) . our model includes both spontaneous and vectorial hydrolysis , as well as dissociation of a non - hydrolyzed dimer from the filament after incorporation . in the first part of the paper , we apply this model to a single protofilament of a microtubule . a catastrophe transition is defined for each protofilament , similar to the earlier one - dimensional models , the frequency of occurrence of which is then calculated under various conditions , but without explicit assumption of steady state conditions . using a perturbative approach , we show that the leading asymptotic behavior of the prot ofilament catastrophe in the limit of large growth velocities is remarkably similar across different models . in the second part of the paper , we extend our analysis to the entire filament by making a conjecture that a minimum number of such transitions are required to occur for the onset of microtubule catastrophe . the frequency of microtubule catastrophe is then determined using numerical simulations , and compared with analytical / semi - analytical estimates made under steady state / quasi - steady state assumptions respectively for the protofilament dynamics . a few relevant experimental results are analyzed in detail , and compared with predictions from the model . our results indicate that loss of gtp cap in 2 - 3 protofilaments is necessary to trigger catastrophe in a microtubule .
|
for describing evolutionary dynamics the framework of fitness landscapes has been introduced , see for instance .a fitness landscape formulates relationships between genetic specifications , individual instantiations , and their fitness .together with postulating differences in fitness over all possible genetic specifications and a moving bias towards higher fitness , the setup suggests the picture of an evolving population that is moving directedly on the landscape . on a conceptual level , this picture is based on the notion of evolutionary paths that are created by the topological features of the fitness landscape .evolutionary paths are a succession of moves on the landscape with persistently ascending fitness values .the existence and abundance of such evolutionary paths gives rise to estimates about how likely a transition from low fitness regions to high fitness regions in the landscape is .these transitions instantiate evolutionary dynamics .apart from fitness landscapes , another approach for specifying evolutionary dynamics is evolutionary games , .evolutionary games are mathematical models of dynamic interactions between individuals in a population and explain how their behavioral strategies ( for instance cooperation or competition ) spread in a population .the main question is how adoption of the strategies contributes to payoff collecting and consequently to the fitness characterizing the success of each individual .an evolutionary game becomes dynamic if it is played iteratively over several rounds and the individuals are allowed to change their strategies and/or to recast the network describing with whom they are interacting .such an iterated evolutionary game comprises of an evolving population of individuals acting as players and can be seen as an expression of evolutionary dynamics .given the fact that there are two frameworks for addressing evolutionary dynamics , it is natural to ask about their relationships .unfortunately , both frameworks are not immediately compatible .although it is acknowledged that evolutionary games cast fitness landscapes , it has become clear that such game landscapes change with an evolving population of players , .this is attributed to frequency dependent selection . in other words ,game landscapes are dynamic .based on some earlier results on dynamic fitness landscapes , e.g. , there are some first attempts at applying these ideas to games , for instance , . in this paper dynamic landscapesare employed for analyzing coevolutionary games by using and extending a framework introduced recently , .games are considered where the players may update their strategies ( evolutionary games ) , see e.g. , but also games where the players may additionally change their network of interaction ( coevolutionary games ) , see e.g. . in particular, it is shown that the proposed method makes it possible to model and analyze evolutionary games and coevolutionary games within the same framework .the paper is structured as follows . 
in sec .[ sec : desc ] , some basic definitions are given , and evolutionary and coevolutionary games are briefly recalled .[ sec : gamedynamic ] reviews game dynamics , particularly the processes to update strategies and networks of interaction .dynamic landscape models of coevolutionary games are introduced and discussed in sec .[ sec : land ] .the modeling procedure is demonstrated for prisoner s dilemma ( pd ) and snowdrift ( sd ) games that both use either birth death ( bd ) or death birth ( db ) strategy updating .it is further shown that bd and db updating yield landscapes with symmetry properties , and that replacement restrictions entail symmetry breaking .moreover , the local topological features of absorbing configurations of the games are interpreted as absorption structure .it is described how landscape properties may be linked to fixation via the absorption structure .[ sec : num ] reports numerical experiments on landscape measures such as modality , ruggedness and information content . fixation probabilities and fixation times are calculated as well as network measures characterizing the networks of interaction of the coevolutionary games . it is shown and discussed how the landscape measures relate to both fixation properties and network measures .[ sec : con ] closes the paper with a summary and conclusions .the coevolutionary dynamics of the games considered in this paper stems from three levels of activity : ( i ) game playing , ( ii ) updating the strategy , and ( iii ) updating the network of interaction .the game playing is done by a finite population of players that use one of two strategies during each round .a player , , can either cooperate ( ) or defect ( ) .a pairwise interaction between two players and ( which can be seen as player and coplayer ) yields rewards in form of payoff as given by the payoff matrix for player and coplayer using the same strategy , or , they both obtain the reward for mutual cooperation ( ) or the punishment for mutual defection ( ) .a mixed choice of strategy gives one of them the sucker payoff for cooperating with a defector , and the other one the temptation to defect while the coplayer is cooperating .hence , for , there is and , while for , there is and . depending on the numerical values of and their order , particular examples of the gameare obtained , which have become metaphors for studying social dilemmas and discussing strategy selection along with the effect on short and long term success in accumulating payoff , .most prominently , there are prisoner s dilemma ( pd ) games , where , and snowdrift ( sd ) games , where .the payoff of player in round depends not only on the player s strategy and the values of the payoff matrix ( [ eq : payoff ] ) , but also on who its coplayer is ( or more precisely as to what the coplayer s strategy is ) and how many coplayers there are .the question of who plays whom in a given round of the game is addressed by the network of interaction. a convenient way of expressing and visualizing the network of interaction is by using elements from evolutionary graph theory , .evolutionary graph theory places each player of the population on a vertex of an ( undirected ) graph .this graph describes the network of interaction and consequently it can be called an interaction graph . as there are no empty vertices and a vertex can only be occupied by one player , the number of vertices of the graph equals the number of players . 
for each player ,its coplayers are indicated by edges that connect the vertex of the player with the vertices of the coplayers .such an edge defines the connected players to be adjacent .each vertex can have up to edges ( self play is not allowed ) . as the degree is the number of edges incident with a vertex ,the degrees of the interaction graph equal the number of coplayers that are engaged with each player in a single round .a graph is called regular if the degree is the same for all vertices .hence , a regular interaction graph means that all players have the same number of coplayers .the interaction graph can be described algebraically by its ( interaction ) adjacency matrix , which is also called an interaction matrix .the matrix ^{n \times n} ] and the adjacency matrix .this setting deterministically fixes the payoff for each player . for making payoff of a player interpretable as reproduction rate or survival probability ( and lastly as fitness ) , it has been suggested to rescale by a positive , increasing , differentiable function , . in the following the linear function is used with the intensity of selection .as the game is completely determined by fixing the payoff matrix ( [ eq : payoff ] ) , the strategy vector , and the adjacency matrix , the distribution of payoff amongst the players remains the same if the players were to engage in the game with the same entities for a second time in round .put another way for these entities being constant the game can be seen as static .consequently , making the evolutionary game dynamic requires updating either the players strategies or the network of interaction , or both .there is a huge amount of work devoted to the modes of updating the player strategies in evolutionary games , .most models use versions of stochastic strategy updating based on a moran process , but there are also works emphasizing limiting the effect of randomness and including the self interest of players , e.g. .according to a moran process , in each round a player ( or more precisely its strategy ) is replaced by ( the strategy of ) a player .the players and are selected at random , but the probabilities of the selection may not be uniform , for instance depending on the players fitness , which may vary .versions of stochastic updating rules differ in several respects .differences are , for example , the actual probabilities that given players and are selected or whether or not there is an order between selecting the player providing the strategy ( the source ) and selecting the player receiving the strategy ( the target ) .finally , there may be general restrictions as to which players are allowed to be a possible source and/or target of another player .such predetermined restrictions imply a replacement structure , . conceptually similar to interaction ,the question of who may replace whom can be described by a network of replacement .this network is expressible by a replacement graph and consequently by a ( replacement ) adjacency matrix , which is called a replacement matrix .the matrix has entries , and indicates that player may provide its strategy for player to receive .the values of contribute to the probabilities that player is source and player is target . if all for a constant , every player may be the source to every target player with equal probability .consequently , if there are no restrictions , the replacement graph is fully connected with evenly weighted edges . 
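the quantities introduced above, namely the payoff matrix, the strategy vector, the interaction matrix, a linear payoff-to-fitness rescaling and a fitness-proportional moran replacement without restrictions, can be put together in a few lines. the sketch below is illustrative only: the payoff values, the particular linear map f = 1 + w * payoff and the function names are our choices rather than the paper's, and the birth-death step shown picks the source proportional to fitness and the target uniformly at random, i.e. it assumes an unrestricted replacement graph as described above.

```python
import random

# payoff matrix entries (R, S, T, P); the ordering T > R > P > S below gives a
# prisoner's dilemma, but any values can be substituted.
R, S, T, P = 3.0, 0.0, 5.0, 1.0
PAYOFF = {("c", "c"): R, ("c", "d"): S, ("d", "c"): T, ("d", "d"): P}

def payoffs(strategies, adjacency):
    """Accumulated payoff of every player on the interaction graph.
    strategies: list of 'c'/'d'; adjacency: list of neighbour index lists."""
    return [sum(PAYOFF[(strategies[i], strategies[j])] for j in adjacency[i])
            for i in range(len(strategies))]

def fitness(pay, w=0.1):
    """One common linear payoff-to-fitness map, f = 1 + w * payoff (the exact
    linear form used by the authors is an assumption here)."""
    return [1.0 + w * p_i for p_i in pay]

def moran_bd_step(strategies, adjacency, w=0.1, rng=random):
    """One fitness-proportional birth-death update with an unrestricted
    replacement graph: source drawn proportional to fitness, target uniform."""
    f = fitness(payoffs(strategies, adjacency), w)
    src = rng.choices(range(len(f)), weights=f)[0]
    tgt = rng.randrange(len(strategies))
    strategies[tgt] = strategies[src]
    return strategies

if __name__ == "__main__":
    adjacency = [[1, 2], [0, 2], [0, 1], [4, 5], [3, 5], [3, 4]]  # two triangles
    strategies = ["c", "c", "d", "d", "c", "d"]
    print(payoffs(strategies, adjacency))
    for _ in range(10):
        moran_bd_step(strategies, adjacency)
    print(strategies)
```

replacement restrictions would enter by drawing the target only from those entries of the replacement matrix that are non-zero for the chosen source.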
amongst strategy updating ,the following replacement rules are frequently studied : birth death ( bd ) , death birth ( db ) , imitation ( i m ) , and pair wise comparison ( pc ) , . for all rules there may be restrictions with respect to replacement . the rules bd and db differ in the order with which source and target are selected , with bd selecting source before target and db target before source . the probability to become a source depends on the source s fitness .i m is similar to db but with the difference that the target itself can compete with other players to become a source . in pc( also known as link dynamics ) both players are selected simultaneously and the source is replaced by the target with a probability depending on the fitness difference between the players , for instance via a fermi function .hence , i m and pc share that the source can be its own target , meaning than the strategy remains the same . to summarize , all moran based updating rules depend only on random ( and possibly on players fitness and replacement restrictions ) , but not on details of the interaction ( for instance who the source or target are actually interacting with and what those strategies are ) .therefore , they do not account for self interested players , .these reasons make it possible to disentangle player and strategy in the sense that it makes no difference from which source the target receives its strategy updating . in other words , for all these updating rules it is possible to specify probabilities that the strategy of a source replaces the strategy of a target depending only on replacement matrix and fitness , . if , in addition to the strategies , also the network of interactions can be updated in evolutionary games , the game is called coevolutionary .however , the players of the coevolutionary game are functionally alike and can hence be thought as belonging to the same species .therefore , coevolution takes place within a single population of players and is between different features of the players function , that is game strategy and interaction network .such a coevolution is hence methodologically different from an alternative understanding of coevolution , which is between different ecological functions ( and hence different species ) , for instance between predator and prey , or between host and parasite , see e.g. . with the players belonging to a single population and strategy updating already addressed , coevolution in evolutionary gamesis in essence considering the network of interaction as dynamic , from which follows that the interaction matrix must be time there is a substantial variety of schemes and rules for coevolution , .these schemes can be categorized according to different criteria .a first criterion is the type of dynamics of , for which there can be three groups : ( i ) purely random updating , ( ii ) random updating with probabilities depending on fitness or current strategy or network properties , and ( iii ) deterministic updating .a second criterion is the effect which the dynamics has on graph theoretical properties of the networks , for instance , the number of edges ( is the number of links in the network constant or growing / shrinking ) , or the regularity of the graph ( do all players have always the same number of coplayers , or are there rules that allow specific players to become super connected ) , or network connectivity . 
finally , there is the question of time scale , that is how the cycles of strategy updating relate to the cycles of network updating , for instance if the edges have a life time depending on the number of strategy updating that the players experienced .unfortunately , the topic of network updating has not yet matured as far as to express for a given coevolutionary rule the transitions from one network of interactions to another as a probabilistic function . whereas for strategy updating , there are replacement probabilities for different updating rules , , the same is not known for network updating. however , it might be reasonable to assume that network updating involves creating an interaction matrix at point in time from a matrix at the previous point , for an integer time variable .such a succession of interaction networks can be modelled by instances of an erds rnyi graph . in this paper ,the discussion is restricted to the case where the number of coplayers is the same for all players and constant for all updating instances .employing such a model precludes situations where a more highly connected player possesses high fitness due to its connectedness , but not necessarily due to the effectiveness of its strategy . for coplayers ,such an interaction graph has degree and belongs to a special class of erds rnyi graphs , namely random graphs .modeling the interaction network by random graphs makes it possible to systematically carry out numerical experiments because recently efficient algorithms for generating such graphs became available , . moreover , for random graphs, some analytic results about the number of different graphs are known .this , in turn , corresponds to the number of possible player coplayer combinations . as a graph with vertices has edges , the number needs to be evenemploying such an interaction network model implies that we can not have an odd number of players with an odd number of coplayers. for a small number of edges ( coplayers ) , the number of different graphs on vertices ( players ) can be found by enumeration , see for instance the entries in the sloane encyclopedia of integer sequences , .thus , and , while , , and , and , .note that for all , which means that a complete network of interactions representing a well mixed population holds only one instance of the matrix .thus , for a complete network graph the game can not be coevolutionary .it is always static with respect to interaction because no dynamic changes can be cast out of a single instance of .further note that grows rapidly . for interaction networks with coplayers , the number of possible player coplayer combinations can be calculated exactly , as there is a recursive formula for the number of graphs , , p.56 : valid for , with , and . for ,no formula is known to compute exactly the total number of graphs on vertices , but asymptotic expressions have been found , .asymptotically , and for and even , the number is based on a collection of random graphs the effect of different networks of interaction on payoff collecting and fitness can be analyzed , for which a landscape approach is proposed in the next section .a general definition of a ( static ) fitness landscape is the triple , where is a configuration space , is a neighborhood structure that assigns to every a set of direct neighbors , and is a fitness function that provides every with a proprietary quantity to be interpreted as a quality information or fitness , . 
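the combinatorics of the interaction networks discussed above can be illustrated directly: for small numbers of players and coplayers, the labelled k-regular graphs can simply be enumerated by brute force, which also makes the rapid growth of the number of possible player coplayer combinations visible. the helper below is a naive counter of our own (it checks every edge subset and is only feasible for a handful of players); for actually sampling large regular interaction graphs one would rely on a dedicated generator such as networkx's random_regular_graph.

```python
from itertools import combinations

def count_regular_graphs(n, k):
    """Brute-force count of labelled k-regular graphs on n vertices
    (feasible only for small n)."""
    if n * k % 2:                      # n*k must be even, as noted in the text
        return 0
    possible_edges = list(combinations(range(n), 2))
    count = 0
    for mask in range(1 << len(possible_edges)):
        deg = [0] * n
        ok = True
        for e_idx, (a, b) in enumerate(possible_edges):
            if mask >> e_idx & 1:
                deg[a] += 1
                deg[b] += 1
                if deg[a] > k or deg[b] > k:
                    ok = False
                    break
        if ok and all(d == k for d in deg):
            count += 1
    return count

if __name__ == "__main__":
    for n, k in [(4, 1), (4, 2), (6, 1), (6, 2), (6, 3)]:
        print(f"n={n}, k={k}: {count_regular_graphs(n, k)} interaction graphs")
```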
in this definition ,the configuration space together with the neighborhood structure expresses a ( multi dimensional ) location , while fitness is a property of this location .the configuration space itself can be seen as an unordered ( finite or infinite ) list of configurations that genetic specifications of biological systems can have .the neighborhood structure defines a ( possibly multi dimensional ) order of this list by establishing what is directly next to each element of the configuration space .as direct neighbors of an element have a neighborhood structure themselves , this naturally establishes distant neighbors of the element as well . the definition of a ( static ) landscape has the consequence of each configuration possessing a constant fitness value .for several reasons this might not realistically reflect the evolutionary scenario to be described and may generally restrict the descriptive power and versatility of the landscape model .hence , assuming that fitness may change over time , while configuration space and neighborhood structure remain constant , the definition above can be extended to a dynamic fitness landscape , which can be expressed as the quintuple , .in addition to the elements of the static landscape , there is the time set , the set of all fitness functions in time , and the transition map defining how the fitness function changes over time .it is noteworthy that for a discrete time set , for instance for the integer numbers , the notion of a dynamic landscape coincides with the notion of a series of static landscapes .hence , two static landscapes and can be reformulated as one dynamic landscape with and describing how changes into .such a dynamic landscape model implies the time variable to act as an integer counting and ordering scale for dynamic instances of a static landscape .hence , is numerically tantamount to yet conceptually different from counting the rounds of an coevolutionary game by .applying a landscape approach for describing evolutionary dynamics requires addressing what may constitute a configuration and its neighborhood , and also what defines fitness . 
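To make the static and dynamic landscape definitions concrete, the following sketch encodes them as plain data structures. This is only an illustrative assumption about how the triple and quintuple could be represented in code; the class and attribute names are not taken from the text, and the symbols of the original definitions are not reproduced here.

```python
from dataclasses import dataclass
from typing import Callable, Hashable, Iterable

Config = Hashable  # a configuration, e.g. a tuple of strategies


@dataclass
class StaticLandscape:
    """Static landscape: configuration space, neighborhood structure, fitness function."""
    configs: Iterable[Config]                          # configuration space
    neighbors: Callable[[Config], Iterable[Config]]    # neighborhood structure
    fitness: Callable[[Config], float]                 # fitness of a configuration


@dataclass
class DynamicLandscape:
    """Dynamic landscape: fitness may change over time while configurations and
    neighborhoods stay fixed; the transition map describes how fitness evolves."""
    configs: Iterable[Config]
    neighbors: Callable[[Config], Iterable[Config]]
    times: Iterable[int]                               # discrete time set
    fitness: Callable[[Config, int], float]            # fitness at time step k
    transition: Callable[[Callable, int], Callable]    # maps the fitness function at k to k+1
```

A series of static landscapes is recovered by evaluating `fitness` at fixed time steps, which matches the remark that, for a discrete time set, a dynamic landscape coincides with a sequence of static ones.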
for the coevolutionary games described in the previous sections ,there are several modeling options , which are reviewed in the following .the actual modeling choice of , and and their interdependencies may either result in a static landscape or entail a landscape that is dynamic and additionally requires , and to be specified .the simplest modeling choice is to equate configurations with players , which for players leads to a player configuration space with elements .the neighborhood structure follows from the coplayers that each player has , which can be .thus , the neighborhood of a player consists of all the other players with which it is mutually engaged in a game according to the interaction matrix .hence , assuming that each player can be attributed with a fitness , such a player landscape could be specified by .a popular form of such player landscapes is to place the players on a two dimensional square lattice and define the coplayers to be von neumann ( or moore ) neighborhoods , which consists of the lattices cells orthogonally ( or additionally diagonally adjacent ) surrounding a central cell .admittedly , such an arrangement fixes the number of direct neighbors to ( or , but yields a convenient way of visualizing the quality information over the resulting two dimensional structure , which might be one reason for the popularity of these neighborhoods .the most obvious choice of the quality information is payoff or quantities directly derived from it such as the linear fitness introduced earlier .this has led to label such landscapes as payoff landscapes , .there are , however , several problems with such a player landscape model .the main problem is that the configuration is the player , not its strategy , nor the strategies of its coplayers .hence , with the player s and coplayers strategies , two quantities decisive for the amount of payoff , are not directly attached to the configuration .strategies can be seen as ambiguous and polyvalent properties of the configuration of players .this means that the payoff attributable to a configuration depends on both the player s strategy and also on the strategies of its neighboring coplayers .this aspect is known as frequency dependence , as the payoff can be seen as to depend on how frequent the strategy that the player adopts also occurs in the coplayers .consequently , frequency dependent fitness refutes the assumption that each player can be attributed with a unique and static fitness . in short ,fitness derived from payoff can be seen as dynamic so that the real player landscape can not be static , but should be dynamic : .moreover , the dynamics of is caused not only by frequency dependence , but also by strategy updating for which the player landscape model does not directly account and both these causes can hardly be separated from each other . hence , the transition map describing how the fitness relates to is not straightforwardly definable .in addition , modeling configurations of a landscape by players means that the neighborhood structure is expressed by the adjacency matrix .a variable network of interaction , as in coevolutionary games , therefore implies a changing neighborhood structure . 
To conclude, a player landscape of a coevolutionary game would involve a changing neighborhood structure as well as dynamic fitness. This may make analyzing such a landscape rather complicated. There is another reason for the difficulties in deducing meaningful conclusions from payoff-based fitness over a player landscape. Topological features of the landscape can be used as a starting point for estimating how likely transitions from low-fitness configurations to high-fitness configurations are, and also which configurations are most likely to be a steady state of the evolutionary dynamics. However, which player exactly is a likely high-fitness outcome of an evolutionary process in a symmetric game as specified by the payoff matrix ([eq:payoff]) is not very relevant. A much more important question is what fraction of the players in the long run settles stably to one of the possible strategies. In pursuing this question, there are several works that define the quality information of the landscape to be the strategy to which a player temporarily or finally settles. This means the fitness is expressed by the strategy vector. The results have been visualized by coloring the players according to their strategy, . Such a model has the advantage that the spatial and temporal distribution of the strategy preferences can be visualized with respect to the player–coplayer structure. However, payoff-based fitness as the main drive of evolutionary game dynamics is not an explicit component of such a landscape, and the number of coplayers is defined by the adjacency restrictions of the lattice grid. An alternative modeling choice is to equate configurations with all possible combinations of strategies that each player and its coplayers can have. An element of the strategy configuration space comprises the strategies of any player and of its coplayers. The strategy configuration space generalizes the time-dependent strategy vector towards all of its possible instances. Hence, for players with two possible strategies, it contains one element for every combination of the players' binary strategies. If we binary code the strategies cooperation and defection, an element appears as a binary string whose length equals the number of players. Note that in this case the bit count of an element provides a simple way of expressing the fraction of cooperators. It is assumed that only one player or coplayer can change its strategy at a given point in time. This implies a neighborhood structure in which the direct neighbors of each element are the configurations at Hamming distance one. With such a model we obtain, for payoff-based fitness, a unique and static landscape for each player and each network of interaction. As the game specified by the payoff matrix ([eq:payoff]) is symmetric, the strategy landscapes are topologically alike for all players; the landscapes can be transformed into each other by shifting and reshuffling. For a landscape interpretation this topological likeness implies that landscape quantifiers such as the number of maxima, the correlation structure, or the information content do not vary over the players.
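A minimal sketch of the strategy configuration space and of a player-wise, payoff-based fitness, assuming a two-strategy game given by a 2×2 payoff matrix and an adjacency matrix of the interaction network. Strategy 1 is coded as cooperation and 0 as defection, fitness is taken to equal accumulated payoff (a simplifying assumption), and all function names as well as the example payoff values are illustrative, not the ones used in the paper.

```python
import itertools
import numpy as np


def configurations(n_players):
    """All strategy configurations as binary tuples (1 = cooperate, 0 = defect)."""
    return list(itertools.product((0, 1), repeat=n_players))


def hamming_neighbors(config):
    """Direct neighbors: exactly one player switches its strategy (Hamming distance 1)."""
    return [config[:i] + (1 - config[i],) + config[i + 1:] for i in range(len(config))]


def payoff(player, config, payoff_matrix, adjacency):
    """Accumulated payoff of `player` against the coplayers given by `adjacency`."""
    return sum(payoff_matrix[config[player], config[j]]
               for j in range(len(config)) if adjacency[player, j])


def player_strategy_landscape(player, payoff_matrix, adjacency):
    """Payoff-based fitness of every configuration from the perspective of one player."""
    n = adjacency.shape[0]
    return {c: payoff(player, c, payoff_matrix, adjacency) for c in configurations(n)}


# Example: a prisoner's dilemma payoff matrix (rows: own strategy, columns: coplayer's).
# The values satisfy T > R > P > S but are placeholders, not the paper's parametrization.
pd = np.array([[1.0, 5.0],    # defect vs (defect, cooperate): P, T
               [0.0, 3.0]])   # cooperate vs (defect, cooperate): S, R
```

The bit count `sum(config)` then directly gives the number of cooperators in a configuration.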
for , the landscapes can be visualized in two dimensions , see fig .[ fig : n4 ] .it shows , , for the payoff matrix and the adjacency matrix specifying a pd game with a complete network of interaction and coplayers for each player .we find that and hence the game is static with respect to updating the network of interaction .observe that for each player there is only one maximum fitness value ( the player is defecting , while all coplayers cooperate ) and one minimum fitness value ( the player cooperates , while all coplayers defect ) .apart from the single maximum and the single minimum , there are several configurations that have the same fitness value .interestingly , these configurations do not form a neutral network , , as they have hamming distance of and hence are not direct neighbors . from the strategy landscape it can be concluded which strategy for the player ( with respect to the strategies of the coplayers ) yields the highest fitness and is therefore most desirable from the perspective of .nonetheless , the evolutionary path from a given initial configuration may depend on , and be influenced by , the strategies provided to and/or received from the coplayers .in addition , from the perspective of another player , another strategy configuration is best .best configurations for respective players , however , are mutually exclusive , which is a defining feature of social dilemma games such as the pd .consequently , each strategy landscape can be seen as a building block that constructs a strategy landscape of the game .such a game landscape would allow conclusions as to what strategy configurations are adopted if all players and their interactions are taken into account . in other words ,a game landscape may model the dynamics caused by the strategy updating processes discussed in sec .[ sec : statupdate ] .reconsider the game with players , for which fig .[ fig : n4 ] depicts the player wise strategy landscapes . at every point in time , the game can be seen as occupying one of its configurations .put another way , the actual strategy vector specifies an actual configuration on the landscapes . for each player , its landscape gives its actual fitness .the strategy updating process means that one player provides its strategy for another player to receive .the receiving player changes its strategy . according to the landscapeview this process corresponds with changing the actual configuration to a neighboring configuration .as the change of configuration affects all players ( and consequently all player wise strategy landscapes ) , it entails that all players may experience a change of fitness as well .no player can hold onto its configuration as long as the strategy updating process is underway unless one of the two absorbing configurations are reached , namely or . in the following , the strategy updating processes birth death ( bd ) and death birth ( db ) will be discussed .for these processes transition probabilities can be derived , , which can now be employed to define game landscapes .therefore , it will be convenient to rewrite the landscape as its decomposition , , where each contains the fitness and preserves the neighborhood of configuration .assume that the game is in configuration , which means that player is defecting , while the three other players are cooperating . 
according to the pd game ,the fitness of is highest , the three other players have the same ( albeit lower ) fitness .to start with , let us consider a bd strategy updating .a player s strategy is chosen at random with a probability proportional to fitness to be a source ( hence birth ) .the birth probability of a configuration of player therefore scales to where the are the decompositions of the landscape containing the fitness .the player with the highest fitness is most likely to be a source , which is presumably with strategy .which one of the three players is the target to receive the strategy ( hence death ) is due to chance but influenced by possible restrictions regarding the replacement .hence , the death probability of a player is where the are the elements of the replacement matrix possibly restricting replacements of strategies as discussed in sec .[ sec : statupdate ] . note that the death probability is independent of fitness and hence the same for all configurations of each player .a bd ( and also a db ) updating does not envisage self replacement and hence the replacement matrix must have diagonal elements . if , on the other hand , there are no replacement restrictions , then the death probability is invariant over players : for all players using a bd updating .assume that all players can be a target and is chosen .hence , the strategy configuration after the strategy updating is .the players and have leveled their fitnesses , while the fitness of both the other players is fallen even more .now consider a db strategy updating . here , a player s strategy is chosen at random with a probability proportional to the inverse of its fitness to be a target ( hence death ) .therefore , the death probability of a configuration of player can be expressed as scaling to still assume that the game is in configuration and as the players , , and have the same ( low ) fitness values , one of them is most likely to be the target .suppose is chosen .which one of the three players provides its strategy as a source ( hence birth ) , depends on chance and possible replacement restrictions .we get the birth probability which is the same as the death probability ( [ eq : deathbd ] ) in bd , but the target and the source are switched in the elements of the replacement matrix .note that only if the player is chosen , a change in configuration takes place , that is the strategy configuration after the strategy updating is . put differently, the outcome of both a bd and a db updating may be the same , but the probabilities to reach it may be different . for a sufficiently large number of strategy updating events ( and therefore changes of configuration ) , there may be some configurations that the game occupies more often than others .these may , for instance , be the absorbing configurations with a bit count and . analyzing whether or not these absorbing configurations are reached and how long this takes , gives rise to fixation probabilities and fixation times , which will be discussed in sec .[ sec : fix ] . 
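The birth–death and death–birth selection steps described above can be sketched as follows. The fitness values are the player-wise landscape values of the current configuration (assumed strictly positive), and the replacement matrix, with zero diagonal so that self-replacement is excluded, weights which targets (BD) or sources (DB) are admissible; following the text, DB selects the target with probability proportional to the inverse of its fitness. Function names are illustrative, and the exact normalizations of the paper's equations are not reproduced here.

```python
import numpy as np


def bd_probabilities(fitness, replacement):
    """Birth-death: source ~ fitness; target ~ replacement weights of the chosen source."""
    fitness = np.asarray(fitness, dtype=float)
    birth = fitness / fitness.sum()                       # probability of being a source
    # death probabilities given source i: row i of the replacement matrix, normalized
    death_given_source = replacement / replacement.sum(axis=1, keepdims=True)
    return birth, death_given_source


def db_probabilities(fitness, replacement):
    """Death-birth: target ~ 1/fitness; source ~ replacement weights pointing at the target."""
    inv = 1.0 / np.asarray(fitness, dtype=float)
    death = inv / inv.sum()                               # probability of being a target
    # birth probabilities given target j: column j of the replacement matrix, normalized
    birth_given_target = replacement / replacement.sum(axis=0, keepdims=True)
    return death, birth_given_target


def bd_update(config, fitness, replacement, rng):
    """One birth-death event: the selected target copies the selected source's strategy."""
    birth, death_given_source = bd_probabilities(fitness, replacement)
    source = rng.choice(len(config), p=birth)
    target = rng.choice(len(config), p=death_given_source[source])
    new = list(config)
    new[target] = config[source]
    return tuple(new)
```

A DB update mirrors `bd_update` with the roles swapped: first draw the target from `death`, then draw the source from the corresponding column weights, and note that, as stated above, a configuration only changes if the chosen source holds a different strategy than the target.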
before , however , we focus on the question of how frequent any configuration is over strategy updating events .the frequency of reaching a configuration scales to the probabilities of birth and death discussed so far .hence , for a bd updating the game landscape can be defined , while for a db updating , we set both with and being a sensitivity weight .both game landscapes retain the configuration space and the neighborhood structure of the player wise strategy landscapes , hence using them as building blocks .the fitness of each configuration summarizes via a fermi function the probabilities to reach the configuration according to the birth and death events of the strategy updating process .the fitness of the game landscape therefore depends on the fitness of each player wise landscape and possible replacement restrictions .moreover , different updating processes cast different game landscapes and out of the same strategy landscapes of the players . given that the are topological alike , and hence might be seen as possessing symmetry properties , different strategy updating rules break the symmetry of the player wise strategy landscapes . at the same time , the bd and db updating processes themselves possess symmetry properties via the birth and death probabilities ( [ eq : birth ] ) and ( [ eq : death ] ) . consequently ( and in the absence of replacement restrictions ) , the game landscapes and retain symmetry properties .replacement restrictions induced by different , however , yield another symmetry breaking . these symmetries ( and broken symmetries ) are reflected by the landscape properties discussed in the next section . the discussion so far has been for a constant network of interaction , that is for a specific matrix . as pointed out in sec .[ sec : netwupdate ] , network updating can be described by a series of adjacency matrices .hence , as the genetic description of the coevolutionary game comprises of the strategy vector _ and _ the network of interaction , the strategy configurations made up by the space could be augmented by interaction configurations built by all possible networks of interaction .consequently , the different interaction graphs enumerated approximately by eq .( [ eq : ldn ] ) could be seen as configurations according to the landscape definitions discussed above . however , in view of the rather large number of possible graphs for given and ( a rough estimate of eq .( [ eq : ldn ] ) for yields ) , an alternative model is to understand different interaction graphs as dynamic instances of the strategy landscape . put differently , the dynamics of the strategy landscape is the results of its variability with respect to the network of interaction .a consequence of such a modeling is that the timely order of the varying network of interaction could be interpreted as temporal neighborhoods according to the neighborhood structure inherent in landscapes . 
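The game landscapes just introduced combine birth and death probabilities through a Fermi function with a sensitivity weight, but the exact expressions of eqn. ([eq:bd]) and ([eq:db]) are not reproduced above. The following is therefore only one plausible concretization of that description, under the reading that the fitness of a configuration summarizes the probability of reaching it from its Hamming-1 neighbors in a single BD event; the function names, the weight `w`, and this particular combination are assumptions, not the paper's definition.

```python
import numpy as np


def fermi(x, w):
    """Fermi function with sensitivity weight w."""
    return 1.0 / (1.0 + np.exp(-w * x))


def reach_probability_bd(config, player_landscapes, replacement):
    """Probability that one BD event moves the game from a Hamming-1 neighbor into
    `config`, summed over all neighbors and admissible source-target pairs.
    Assumes strictly positive player-wise fitness values."""
    n = len(config)
    total = 0.0
    for j in range(n):                                    # player whose strategy differs
        neighbor = config[:j] + (1 - config[j],) + config[j + 1:]
        fit = np.array([player_landscapes[i][neighbor] for i in range(n)], dtype=float)
        birth = fit / fit.sum()                           # source ~ fitness in the neighbor
        death = replacement[:, j] / replacement.sum(axis=1)   # prob. source i picks target j
        # only sources already holding the strategy config[j] can pass it on to player j
        sources = [i for i in range(n) if neighbor[i] == config[j] and i != j]
        total += sum(birth[i] * death[i] for i in sources)
    return total


def game_landscape_bd(configs, player_landscapes, replacement, w=1.0):
    """Assumed BD game landscape: Fermi-weighted reach probability of each configuration."""
    return {c: fermi(reach_probability_bd(c, player_landscapes, replacement), w)
            for c in configs}
```

A DB variant follows the same pattern with the death and birth probabilities exchanged; whatever the exact algebraic form, the resulting landscapes retain the configuration space and Hamming neighborhood of the player-wise landscapes, as stated in the text.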
withnetwork updating expressed as dynamic instances of player wise strategy landscapes , we get a dynamic landscape for describing a coevolutionary game .apart from the strategy configuration with neighborhood and the integer time set , the quantity gives payoff based fitness for each configuration , each player , and the network of interaction .the matrix pair of subsequent adjacency matrices specifies how the fitness relates to , thus constructing the transition map .taking up the example of with the same values of the payoff matrix as in sec .[ sec : stratland ] , but coplayers , we get and hence a game that is dynamic with respect to updating the network of interaction . the three dynamic instances are shown in fig . [fig : n5 ] , where fig .[ fig : n5]a is for the adjacency matrix , fig .[ fig : n5]b is for , and fig .[ fig : n5]c is for .it can be seen that the three different networks produce three different player wise strategy landscapes for each player , which means that we indeed obtain dynamic changes over the three instances of graphs on vertices . comparing these strategy landscapes with those for the complete network of interactions ( see fig . [fig : n4 ] ) reveals differences .a first is that for each player , there are now two maxima and two minima .each player retains a maximum ( minimum ) if it itself defects ( cooperates ) , while its two coplayers cooperate ( defect ) .the two maxima ( minima ) come about as it makes no difference for the player s payoff whether the fourth player ( who is no coplayer as ) cooperates or defects .a second difference is that two neighboring configurations may build a block of equal fitness in connection with every configuration belonging to such a same fitness block .consequently , there is neutrality in these fitness landscapes .moreover , these results suggest that the number of maxima and degree of neutrality scales to the number of coplayers , which can be confirmed by numerical experiments for landscapes with more than players . within the given modeling framework of coevolutionary games , the timely order of the adjacency matricesis not associated with a particular updating process of the interaction network .the main reason is the general lack of established algebraic descriptions of network updating .the dynamic landscapes proposed may offer such an algebraic description as the transition map can be formulated uniquely for regular graphs , for instance for the transient between and of the example considered above as . for dynamic player wise strategy landscapes , game landscapes for bd and db updatingcan be defined according to the probabilities of birth / death and expressed as dynamic counterparts of eqn .( [ eq : bd ] ) and ( [ eq : db ] ) . as the fitness of each player now depends on the time variable specifying dynamic instances of the adjancency matrix , the death and birth probabilities , , , are also dynamic .consequently , the static games landscapes ( [ eq : bd ] ) and ( [ eq : db ] ) become dynamic game landscapes : and .these dynamic bd and db landscapes are the main topic of the numerical experiments reported in sec .[ sec : num ] . the game specified by the payoff matrix ( [ eq : payoff ] ) andthe updating processes described in sec .[ sec : gamedynamic ] instantiate evolutionary dynamics describable by the game landscapes ( [ eq : bd ] ) and ( [ eq : db ] ) introduced above . as updating processes such asbd and db depend on random processes , the resulting game dynamics can also be seen as a stochastic process . 
Consequently, stochastic properties such as the fixation probability and the fixation time have been suggested for evaluating and comparing the long-term outcome of evolutionary game dynamics, and have been studied widely in theory and numerical experiment, see for instance . These fixation properties particularly account for whether or not the game dynamics settles on a steady state, and if so, how long this takes on average and how frequently it happens. The fixation probability quantifies how likely it is that one of the two strategies that a player can use (cooperate or defect) spreads to the whole population of players, given that only one player started using this strategy. According to the landscape interpretation, this corresponds to reaching one of the two absorbing configurations, in which either all players cooperate or all players defect, given that the initial configuration contained exactly one cooperator or exactly one defector, respectively. For each absorbing configuration, there are as many possible initial configurations as there are players. Hence, since this number grows only linearly while the configuration space grows exponentially, initial configurations become scarce in the overall topological structure of the game landscape for a sufficiently large number of players. The same also applies to the absorbing configurations. This is in agreement with the observation that fixation probabilities generally decrease while the number of players increases (see also the results of the numerical experiments in sec. [sec:results]). As there are two absorbing configurations, two distinct fixation probabilities can be defined, one for complete cooperation and another for complete defection: the probability of reaching the configuration where all players cooperate, and the probability of all players defecting. Fixation probabilities can be calculated analytically for Moran processes based on properties of Markov chains for well-mixed populations, and numerically for games on graphs, . For games on graphs with replacement restrictions, estimates of the fixation probabilities using diffusion theory have been reported, . For coevolutionary games considering dynamic networks of interaction of varying degree, numerical experiments can be carried out. Following previous experimental works, the fixation probabilities are approximated by the relative frequency of fixation (a simulation sketch implementing this estimate is given below). The fixation time quantifies how many changes in configuration it takes on average to reach an absorbing configuration. This corresponds to the average amount of time needed to achieve fixation. The notion of fixation time can be refined by distinguishing which one of the two absorbing configurations is reached, which gives rise to conditional fixation times, ; the fixation times for the cooperative and the defective absorbing configuration are considered separately. As fixation probability and fixation time are the most important quantities in stochastic game dynamics, these quantities are discussed next with respect to the landscape interpretation proposed in the previous sections. The fitness of the landscapes ([eq:bd]) and ([eq:db]) derives from the payoffs of each player and summarizes the probabilities that a particular configuration is occupied by the game. Hence, possible differences in fitness across neighborhoods generate topological features of the landscape. These topological features, in turn, create evolutionary paths on the landscape, which any evolutionary dynamics has to observe.
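A minimal sketch of the relative-frequency estimate of fixation probability and conditional fixation time for BD updating, under the conventions used in the earlier sketches (binary strategy tuples, a 2×2 payoff matrix, adjacency and replacement matrices). The linear payoff-to-fitness map with a positive offset is an assumption made to keep the selection probabilities well defined, and all names are illustrative.

```python
import numpy as np


def run_until_absorption(start, payoff_matrix, adjacency, replacement, rng, max_steps=10**6):
    """Simulate BD updating until all players play the same strategy."""
    n = len(start)
    config = tuple(start)
    for step in range(1, max_steps + 1):
        payoffs = [sum(payoff_matrix[config[i], config[j]]
                       for j in range(n) if adjacency[i, j]) for i in range(n)]
        fit = 1.0 + np.asarray(payoffs, dtype=float)   # assumed linear payoff-to-fitness map;
                                                       # the offset keeps fitness positive
        source = rng.choice(n, p=fit / fit.sum())
        target = rng.choice(n, p=replacement[source] / replacement[source].sum())
        config = config[:target] + (config[source],) + config[target + 1:]
        if sum(config) in (0, n):                      # absorbing: all defect or all cooperate
            return sum(config) == n, step
    return None, max_steps                             # no fixation within the step budget


def fixation_estimates(n, payoff_matrix, adjacency, replacement, runs=1000, seed=0):
    """Relative frequency of cooperative fixation and mean conditional fixation time."""
    rng = np.random.default_rng(seed)
    coop_hits, coop_times = 0, []
    for _ in range(runs):
        start = [0] * n
        start[rng.integers(n)] = 1                     # exactly one initial cooperator
        fixed_coop, t = run_until_absorption(start, payoff_matrix, adjacency, replacement, rng)
        if fixed_coop:
            coop_hits += 1
            coop_times.append(t)
    p_coop = coop_hits / runs
    t_coop = float(np.mean(coop_times)) if coop_times else float("nan")
    return p_coop, t_coop
```

The defective fixation probability and time are obtained analogously by starting from a single defector and counting runs that end in the all-defect configuration.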
however , the evolutionary dynamics is governed by a move bias towards higher fitness , which is not a move imperative . in other words, the landscape view implies that there are probabilities that the maxima are reached , but no certainties .moreover , these probabilities depend on what exactly the topological features of the landscape are , for instance , on the number of maxima , their distribution and their accessibility . for evaluating the effect of landscape features ,just focusing on the maxima is not sufficient .therefore , different types of landscape measures have been proposed which aim at reflecting , in a more general sense , the impact that landscape features have on evolutionary dynamics , see also the discussion in sec .[ sec : meas ] .fixation occurs if a succession of changes in configuration leads from prescribed initial configurations to the absorbing configurations .fixation probabilities , require evolutionary paths connecting initial configurations with respective absorbing configurations .the values of , scales to how easy or how difficult it is that these evolutionary paths can be accessed or distracted .the fixation time , on the other hand , varies with the length of the evolutionary path .consequently , by analyzing the topological structure of the game landscape , it may be feasible to infer fixation properties .this kind of analysis , however , is impeded by the fact that absorbing configurations in game landscapes are , topologically interpreted , non passable points in the landscape. however , non passable points are not a standard concept of landscapes .perhaps most similar are steady states of a landscape , but there is the difference that the evolutionary dynamics can , under certain conditions , leave a steady state and there is the even more fundamental difference that steady states are by definition maxima of the landscape .absorbing configurations may or may not be maxima of the game landscape . in the same way ,the initial configurations marking the origin of the evolutionary path may or may not be minima of the landscape . the numerical experiments discussed in sec .[ sec : results ] confirm such a characteristics for the game landscapes ( [ eq : bd ] ) and ( [ eq : db ] ) .this line of reasoning suggests that a landscape analysis should take into account that fixation properties are likely to be related to landscapes via the local ( and possibly also the global ) topological features of absorbing and initial configurations . in analogy to the landscape structure , which describes globally the topological features of the entire landscape , these topological features and their interdependences with fixation we may call absorption structure . the numerical experiments reported next section address not only topological features of the landscapes , but also the absorption structure , where the focus is on the local structures .game landscapes can only be visualized as two dimensional projections up to players . for analyzing landscapes with more players , we need to resort to landscape measures . a first measure we look atis modality expressed by the number of local maxima . 
Local maxima are potential steady states on the evolutionary path. Hence, the number of local maxima relates to the variety of possible evolutionary paths and consequently to the complexity of the evolutionary dynamics displayed. If there is just one (smoothly accessible) maximum, then all evolutionary paths converge to it and the evolutionary dynamics displayed is rather simple. If, on the other hand, there is a large number of maxima, then the possible evolutionary paths may differ from each other massively, resulting in more complex evolutionary dynamics. For a landscape, a configuration is a local maximum if its fitness is not smaller than the fitness of any of its direct neighbors; moreover, if its fitness is not smaller than the fitness of any configuration of the landscape, then it is a global maximum. For evaluating the local absorption structure, we need to consider three further local topological features: minimum, neutrality, and saddles. A local minimum has at least one neighbor with a larger fitness and no neighbor with a smaller fitness than itself. A neutral configuration has only neighbors with the same fitness, which means that the landscape area containing the configuration and its neighbors is flat. Lastly, a saddle has some neighbors with larger and some other neighbors with smaller fitness. In numerical experiments, the number of local maxima can be computed for the game landscapes ([eq:bd]) and ([eq:db]). The same applies to whether the absorbing configurations and their initial configurations are maxima, minima, neutral, or saddles (a sketch of this classification is given below). Consequently, for the dynamic instances of these landscapes, a time series containing the numbers of local maxima is obtained; the same applies to all other measures of dynamic landscapes. There are two immediate problems with analyzing landscapes by modality expressed by the number of local maxima. First, on a practical level, the number of local maxima can only be calculated by enumeration, which entails the proverbial curse of dimensionality. Second, on a conceptual level, the pure number of local maxima is a decisive (and arguably the most important) factor defining evolutionary paths, but the distribution of the maxima and their accessibility is also profoundly influential. To overcome these issues, other types of measures have been proposed for quantifying smoothness, ruggedness, or neutrality of a landscape. Two of them are studied here, correlation length and information content. The correlation length evaluates across the landscape how strongly the fitness of any configuration relates to the fitness of neighboring configurations and hence is a measure of the landscape's ruggedness, .
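The classification of configurations into local maxima, minima, neutral configurations and saddles over the Hamming-1 neighborhood can be implemented directly, and the same routine applied to the absorbing and initial configurations yields the local absorption structure discussed later. The sketch below uses one consistent convention (a "maximum" must have at least one strictly smaller neighbor, otherwise the configuration counts as neutral), which is an assumption about how ties are handled; function names are illustrative and the landscape is expected as a dictionary over all binary configurations.

```python
def hamming_neighbors(config):
    return [config[:i] + (1 - config[i],) + config[i + 1:] for i in range(len(config))]


def classify(config, fitness):
    """Classify a configuration of a landscape given as a dict config -> fitness."""
    f = fitness[config]
    neigh = [fitness[n] for n in hamming_neighbors(config)]
    larger = any(v > f for v in neigh)
    smaller = any(v < f for v in neigh)
    if not larger and not smaller:
        return "neutral"      # all neighbors have the same fitness
    if not larger:
        return "maximum"      # no neighbor is larger, at least one is smaller
    if not smaller:
        return "minimum"      # no neighbor is smaller, at least one is larger
    return "saddle"           # some neighbors larger, some smaller


def count_local_maxima(fitness):
    """Modality: number of local maxima of the landscape (by full enumeration)."""
    return sum(1 for c in fitness if classify(c, fitness) == "maximum")
```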
For calculating the correlation length, a random walk of a given length is performed on the landscape; the fitness value at each step of the walk is recorded, and thus a sequence of neighboring fitness relations is obtained. Assuming that the landscape is isotropic, these neighboring fitness relations account for general changes in fitness across the landscape. Hence, the autocorrelation of the sequence ([eq:seq]) with a given time lag defines the landscape's random walk correlation function. The function measures the correlation between different regions of the fitness landscape and expresses how smooth or rugged the landscape is. The correlation length derives from the autocorrelation at time lag one, . The information content is an entropic landscape measure, which also uses the fitness sequence ([eq:seq]) generated by a random walk on the landscape. It can be interpreted as a measure of the amount of information required to reconstruct the landscape structure. From the time series ([eq:seq]), differences in fitness between two consecutive walking steps are coded by symbols taken from a small symbol set; the symbols are obtained by comparing the fitness differences against a fixed sensitivity threshold. Networks of interaction may be described by instances of a random graph, as set out in sec. [sec:netwupdate]. Based on this description, varying interaction networks have been interpreted in this paper as casting dynamic instances of a landscape characterizing the coevolutionary game. Instances of interaction networks specify who plays whom in the game, which means that even if the number of coplayers is constant for each player, who in fact the coplayers are is not. As different coplayers may imply different strategies and hence different allocations of payoff, different networks of interaction may result in topologically different game landscapes. Put another way, if properties of game landscapes vary over dynamic instances, the variation should be reflected by properties of the interaction networks, that is, by graph-theoretical quantifiers of graphs. Spectral graph theory defines several such quantifiers, which take advantage of connections between the algebraic description of a graph and its structural properties; see for instance , upon which the remainder of this section about the quantification of graph-theoretical properties of interaction networks relies. The main purpose of this quantification is to map structural differences of the interaction graph to different values of graph spectral invariants, which, in turn, are interpretable as (graph-theoretical) network measures. For definitions of these invariants, also see . The quantities considered are based on algebraic properties of the adjacency matrix, or of the Laplacian matrix, which is derived from the adjacency matrix so as to include the degree explicitly. For both matrices, the spectra of eigenvalues are the starting points for further consideration. For connected graphs, all eigenvalues of the adjacency matrix are real, while the eigenvalues of the Laplacian are also all real and non-negative and can be sorted in increasing order. A first quantity is the (normalized) energy of a graph, which can be interpreted as the spectral distance between a given graph and an empty graph, and can hence be seen as scaling to the degree of difference between graphs.
a second graph theoretical network measure based on the eigenvalues is the independence number which is an approximation of the size of the largest independent set of vertices in a graph .an independent set is a set of vertices in a graph such that no two vertices of the set are connected by a edge .a network measure based on the laplacian derives from the smallest eigenvalue of larger than zeros , , is termed ( normalized ) algebraic connectivity and scales to how well a graph is connected .connectivity denotes the structural property of a graph that removal of vertices or edges disconnects the graph .the laplacian eigenvalue if the graph is not connected , and if the graph is complete ( that is fully connected ) .larger values of indicate graphs with a rounder shape , and high connectivity and girth , while for smaller values of the graph is more path like with low connectivity and girth .also calculated from the laplacian is the expander index which is a measure for the graph possessing expander properties .the expander index has small values for all eigenvalues being close to , and larger values for the opposite .expander graphs are marked by all small sets of vertices usually having a larger number of neighbors .thus , they can be seen as their neighborhood expanding . as far as possible and needed , the graph theoretical quantifiers are normalized with respect to the order of the graph .hence , they can be compared over a varying number of vertices and hence players . in sec .[ sec : results ] results are given that analyze the correlation between the network measures ( [ eq : ener])([eq : expan ] ) and the landscape measures ( [ eq : corrle ] ) and ( [ eq : infcont ] ) .the numerical experiments with the dynamic fitness landscapes of coevolutionary games specify the payoff matrix ( [ eq : payoff ] ) and consider a pd game and a sd game with and , respectively , which is a parametrization as suggested by axelrod s seminal work , .additionally , the effect of varying the ratio ( which encourages or dampens the temptation to defect ) is studied .therefore , results for are contrasted with .the dynamics of the landscape is addressed by examining the effect of varying networks of interaction .algorithms are employed that numerically generate adjacency matrices specifying random regular graphs with given order and degree , .we checked to see if the graphs are connected .if a graph fails the check , there are isolated vertices that may bias controlling the interaction network via the graph s degree and hence such graphs are discarded . for the experiments ,different sets of graphs with prescribed and are generated and used .the absorption structure was analyzed with a set of graphs .some experiments studying landscape measures and fixation properties were done with a set of graphs .these experiments have shown that for a considerable number of different networks , rather similar results are obtained .for this reason and also to facilitate the numerical experiments , unless stated differently the results presented in the figures are based on a set of different interaction networks . 
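The experimental setup described above relies on generating random regular graphs of given order and degree, discarding disconnected instances, and characterizing the resulting interaction networks by spectral quantities. A sketch of this setup using networkx and numpy is given below; the paper's normalizations of the measures are not reproduced, so the values computed here are unnormalized (an assumption), and the example parameters are placeholders.

```python
import networkx as nx
import numpy as np


def connected_random_regular_graph(n, d, seed=None, max_tries=100):
    """Random d-regular graph on n vertices (n*d must be even, d < n); disconnected
    instances are discarded, mirroring the connectivity check in the text."""
    for t in range(max_tries):
        g = nx.random_regular_graph(d, n, seed=None if seed is None else seed + t)
        if nx.is_connected(g):
            return g
    raise RuntimeError("no connected regular graph found")


def graph_energy(g):
    """Sum of absolute adjacency eigenvalues (unnormalized graph energy)."""
    a = nx.to_numpy_array(g)
    return float(np.abs(np.linalg.eigvalsh(a)).sum())


def algebraic_connectivity(g):
    """Second-smallest Laplacian eigenvalue (unnormalized algebraic connectivity)."""
    lam = np.linalg.eigvalsh(nx.laplacian_matrix(g).toarray().astype(float))
    return float(np.sort(lam)[1])


# Example: one instance of an interaction network with 8 players and 3 coplayers each.
g = connected_random_regular_graph(8, 3, seed=0)
adjacency = nx.to_numpy_array(g, dtype=int)
print(graph_energy(g), algebraic_connectivity(g))
```

The independence number and the expander index can be approximated from the same spectra once the paper's exact definitions are fixed; they are omitted here to avoid guessing their normalizations.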
Where possible, the complete set of possible networks of interaction is used. The experiments are conducted for an even number of players to guarantee the existence of regular graphs for all numbers of coplayers. Different replacement structures are analyzed. The experimental setup follows previous works, , and defines the replacement matrix as a random regular graph with given degree and guaranteed connectivity; additionally, its elements are filled with realizations of a uniformly distributed random variable. The degree of the replacement graph is set to match the degree of the interaction graph. All these experiments are carried out for BD and DB landscapes. Other updating schemes such as PC or IM can be treated likewise; for these processes transition probabilities are known, , and hence landscapes similar to ([eq:bd]) and ([eq:db]) can be computed. With the conventional PC-based computational resources that were available, it was possible to experiment within a reasonable time frame only with a moderate number of players. All experiments employ a linear relationship between payoff and fitness, and the game landscapes are computed with a fixed sensitivity weight. For calculating the correlation length and the information content, a random walk of fixed length was used, and the results are averaged over independent walks. Numerical experiments have shown that the results obtained are statistically equivalent over different initial configurations of the walks; hence, it appears reasonable to assume for the tested cases that the game landscapes are isotropic. The information content ([eq:infcont]) is computed with a sensitivity level that scales with the number of players. The numerical experiments calculating the fixation properties are based on repeated runs; the number of repetitions is rather small compared to other recent experimental works, e.g. . Some auxiliary experiments with a larger number of repetitions, however, have shown that the values of the fixation properties analyzed converge well, so that the numerical setup used appears sufficient for the numbers of players considered. Figs. [fig:modal], [fig:correl] and the corresponding figure for the information content show the landscape measures number of local maxima, correlation length and information content over the numbers of players and coplayers. The red lines indicate a BD updating, the green lines a DB updating. In addition to the quantities averaged over the different interaction networks tested (horizontal lines), the vertical spikes indicate the range between the smallest and the largest value of the respective measure over these networks described by the adjacency matrices. This presentation and color code is kept for all landscape measures and fixation properties. In fig. [fig:modal], the numbers of local maxima are given as semi-logarithmic plots, except fig. [fig:modal]a showing a PD game for one of the two tested ratios. The results show that DB updating generally produces more maxima than BD updating. An exception is the PD landscape for that ratio, which has only one maximum for all tested numbers of players and coplayers and for both BD and DB updating; it hence is unimodal while all other landscapes are multimodal. Also, the number of local maxima does not reflect the symmetry between BD and DB landscapes. Again, the exception is the PD game with the same ratio as above.
for ,there is a larger similarity between the landscape for the pd game and the sd game , compared with .moreover , for the sd game with , we find for db updating that decreases for a given and getting larger .for such a clear trend is not visible .another observation is that for all landscapes ( except pd with ) the number of local maxima sometimes shows vertical spikes indicating the amount to be not constant for a given and and varying networks .in other words , there is a certain variety in the number of local maxima over instances of interaction networks expressed as graphs .further note that for getting larger .all these results support previous findings about evolutionary games , for instance that pd games and bd updating do not provide an advantage for cooperators , .thus , for pd the small number of maxima of the player wise landscapes ( compare to fig . [fig : n4 ] ) corresponds with the small number of maxima in the game landscape .by contrast , for the sd game and db updating not only configurations where the defecting player earns the largest payoff are maxima of the game landscapes .consequently , the number is larger .also , an increased ratio leads to the number of being more volatile over .in addition , for the spread of is over a larger range , indicating that different networks of interaction produce landscapes topologically more different . for the landscape measures and in figs .[ fig : correl ] and , we find almost identical results for bd and db landscapes , yet the different games and different values of can be distinguished , albeit not as clearly as for .hence , correlation length and information content largely reflect the symmetry of the game landscapes .it can also be seen that the variety over different networks of interactions is slightly stronger for than for .we next analyze the effect of replacement restrictions imposed by the replacement graph not being fully connected with evenly weighted edges , and focus on the differences between replacement restriction being set or not .the results are for and a pd and a sd game , see fig .[ fig : mod_rest ] , [ fig : lamb_rest ] and [ fig : inf_rest ] .a main observation is that replacement restrictions modify the game landscapes , which is also shown by the landscape measures .for instance , the number of local maxima for the pd game with and db updating is no longer strictly unimodal ( compare figs . [fig : modal]a and [ fig : mod_rest]a ) .interestingly , for bd updating , even replacement restrictions do not alter unimodality .for the sd game , the inverse proportionality between and ceases , and generally the number of local maxima does vary more strongly for different networks of interaction .these characteristics can also be seen for the landscape measures correlation length ( see fig . [fig : lamb_rest ] ) and information content ( see fig . [fig : inf_rest ] ) . here , the main difference is that the measures are no longer the same ( or almost the same ) for bd and db updating .this reflects the broken symmetry that replacement imposes on bd and db game landscapes .generally , replacement restrictions imply landscapes that vary more substantially over different networks of interaction . furthermore ,as there is an inverse relationship between ruggedness and correlation length , it can be noted that ruggedness decreases as the number of player gets larger .this effect , which is also visible very slightly in fig .[ fig : correl ] , is amplified by replacement restrictions . 
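The correlation length and information content values reported above can be estimated from a random walk on the landscape. The sketch below follows the usual definitions from the fitness-landscape literature (the correlation length from the lag-one autocorrelation of the walk, the information content as the entropy of thresholded fitness-difference symbol pairs); since the exact formulas of ([eq:corrle]) and ([eq:infcont]) are not reproduced in the text, this should be read as an assumed but standard concretization, with illustrative function names.

```python
import math
import random


def random_walk_fitness(fitness, neighbors, start, length, rng=random):
    """Record fitness along a random walk of the given length on the landscape."""
    series, current = [fitness[start]], start
    for _ in range(length):
        current = rng.choice(neighbors(current))
        series.append(fitness[current])
    return series


def correlation_length(series):
    """lambda = -1 / ln(|r(1)|), with r(1) the autocorrelation of the walk at lag one.
    Assumes a non-constant series with 0 < |r(1)| < 1."""
    n = len(series)
    mean = sum(series) / n
    var = sum((x - mean) ** 2 for x in series) / n
    r1 = sum((series[i] - mean) * (series[i + 1] - mean) for i in range(n - 1)) / ((n - 1) * var)
    return -1.0 / math.log(abs(r1))


def information_content(series, eps):
    """Entropy (base-6 logarithm) of consecutive symbol pairs (a, b) with a != b,
    where symbols code fitness differences against the sensitivity level eps."""
    def symbol(d):
        return -1 if d < -eps else (1 if d > eps else 0)
    symbols = [symbol(series[i + 1] - series[i]) for i in range(len(series) - 1)]
    pairs = list(zip(symbols, symbols[1:]))
    total = len(pairs)
    counts = {}
    for a, b in pairs:
        if a != b:
            counts[(a, b)] = counts.get((a, b), 0) + 1
    return -sum((c / total) * math.log(c / total, 6) for c in counts.values())
```

Averaging the two measures over several independent walks, as done in the experiments, then only requires repeating `random_walk_fitness` with different start configurations.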
in figs .[ fig : fix_p_coop ] and [ fig : fix_t_coop ] fixation properties of the cooperative absorbing configuration with are given over and .the fixation probability is zero for the pd games and bd updating , which corresponds to previous results showing that cooperation is never favored or beneficial under such an updating , .apart from this result , falls with the number of players getting larger , but for a given , the fixation probability is the same for a varying number of coplayers .these results are in line with the findings that regular evolutionary graphs are generally isothermal . moreover , except for very small numbers of players ( and partly ) , the fixation probability for the well mixed case ( ) is also the same as for a smaller number of coplayers .this , however , is only the case for averages over interaction networks .there are for a constant and interaction networks with fixation probabilities larger or smaller than average indicated by the vertical spikes ( for there can not be a spike as only one instance of exists ) . hence , these results suggest that the graph structure of the interaction network does matter for . moreover , which causes the largest or smallest varies over and . regarding the fixation times , similar observations can be made .note that for the pd games no fixation time for bd updating are given as the fixation probability is zero .for the sd game the fixation times for bd updating are much larger than for db , and this effect is even amplified by an increase in the ratio , which stand for encouraging the temptation to defect ; compare fig .[ fig : fix_t_coop]c for with fig .[ fig : fix_t_coop]d for . particularly , for fixation times are getting very large .fixation properties of the defective absorbing configuration with are given in figs .[ fig : fix_p_def ] and [ fig : fix_t_def ] . all game settings produce fixation probabilities . apart from this ,the results are similar to the general trends for the cooperative absorption , namely fixation probability differs for bd and db updating , falls with an increasing number of players and is isothermal for given and a varying number of coplayers .the fixation times in fig .[ fig : fix_t_def ] also show some similarity , but also differences .the main observation is that for the sd games the maximal fixation times are not substantially larger than for pd games .we next consider the local absorption structure of the game landscapes , which are based on up to different interaction matrices .the results for pd and sd games with bd and db landscapes and and are given in tab .[ tab : absorp ] . 
The final absorption structure (F) indicates the local topological features of the absorbing configurations, while the initial absorption structure (I) denotes the features of the initial configurations from where potential paths to the absorbing configuration may start. The features are given for both the cooperative absorbing configuration (all players cooperate) and the defective absorbing configuration (all players defect). Tab. [tab:absorp] lists the results (mx: maximum, mn: minimum, nt: neutral, sd: saddle); the leading labels distinguishing the game settings and parameter values are marked by dots:

            cooperative absorbing configuration      defective absorbing configuration
 …    bd    F: mn          F: nt                     bd    F: mx          F: mx
            I: sd          I: sd                           I: sd          I: mn
      db    F: mx          F: mx, mn, nt             db    F: mn          F: mn
            I: sd          I: sd                           I: sd          I: mx, sd
 …    bd    F: mx          F: mx                     bd    F: mx          F: mx
            I: sd          I: sd                           I: sd, mn      I: sd, mn
      db    F: mn          F: mn                     db    F: mn          F: mn
            I: sd          I: sd                           I: sd, mx      I: sd, mx

(footnotes: never global for … ; only local maxima for … and … ; only local minima for … and … ; only for … , … and … )

The results in tab. [tab:absorp] show some general features of the local absorption structure for the game settings considered, which in turn can be interpreted as specific properties of the game landscapes proposed by eqn. ([eq:bd]) and ([eq:db]). A first feature is that for both games, both parameter values of the ratio and both absorbing configurations, the absorption structure of BD updating generally differs from that of DB updating. This may answer the question of why game landscapes that are symmetric with respect to BD and DB in the absence of replacement restrictions yield fixation properties that do differ between BD and DB. A possible explanation is that while the game landscapes are topologically the same, as shown by the landscape measures correlation length and information content (see figs. [fig:correl] and the corresponding figure), their absorption structure is not. This suggests the absorption structure to be a determining factor in the relationships between game landscapes and fixation. A second feature is that for all settings tested there is rarely any variation over the number of players and coplayers; in other words, within a given game setting, changing the number of players and coplayers does not alter the absorption structure. An exception to that rule is the initial structure of the defective configuration in one of the settings, which intermixes a saddle and a maximum/minimum; the maximum/minimum, though, occurs only for two cases, namely the well-mixed cases for a rather small number of players. A third feature is a general absence of variation over different interaction matrices. At least for the interaction matrices tested, the absorption structure is largely invariant over interaction networks. There is an exception with the SD game and DB updating, to be discussed later. Before that, two further features can be noted. The maxima/minima of the absorption structure are mostly global, with the exception of parts of the cooperative absorption structure. Finally, it can be observed that the absorption structure is inverse complementary for BD and DB, meaning that if there is a maximum for BD, then DB has a minimum, and vice versa, while a saddle remains a saddle. This follows from the symmetry properties via the birth and death probabilities ([eq:birth]) and ([eq:death]) as discussed in sec. [sec:gameland]. There is again, however, an interesting exception in the form of one of the SD game settings.
for this game setting , we find that the cooperative final absorbing configuration is neutral for bd updating , and a combination of maximal , minimal and neutral for db updating .further analysis shows that these topological features vary over , and . for most and the absorbing configuration is neutral , while for a few and is either a maximum or a minimum . in these cases ,the topological features are fixed for varying interaction networks .in addition , there are and for which varying interaction networks give either a mixture of maxima and neutral , or a mixture of minima and neutral . for the defective absorbing structure ,the initial configurations vary over , where for most and we have neutrality , while for some other and there is a mix of neutral and maxima .numerical evaluation affirms that such a variety of the topological features of the absorption structure over interaction networks can be observed for other game settings as well , particularly those on the line .the sd game with is exactly on this line for the parametrization used ( , , ) .based on the absorption structure of the game landscapes , the next set of numerical results deals with correlations between landscape measures and fixation properties . as already discussed in sec .[ sec : fix ] , relationships between topological structures of the game landscapes and fixation events are likely to be shaped and typified by the absorption structure .the results for the correlations between landscape measures and fixation properties of the cooperative absorbing configuration are shown in fig .[ fig : corr_l_coop ] for the correlation length and fig .[ fig : corr_i_coop ] for the information content , while the same is done for the defective absorbing configuration in figs .[ fig : corr_l_def ] and [ fig : corr_i_def ] .the results are for all game settings considered with red markers indicating a pd game with bd updating , green pd game and db , blue is sd game with bd and yellow sd and db .the lines connecting the markers are depicted to ease following trends .the pearson product moment correlation coefficient is calculated to aggregate over interaction networks and the number of players for each number of coplayers .the advantages of such a mode of calculation are that the database for analyzing correlations becomes sufficiently large ( it comprised of data for the instance of interaction matrices times the number of players ) and that trends over a varying number of players are captured. however , there should be an awareness that the calculation implicitly assumes that fixation properties and landscape measures scale on in ways compatible with the dependence on . 
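The aggregation used for the correlation analysis can be sketched as follows: for a fixed number of coplayers, data pairs of a landscape measure and a fixation property are pooled over all tested interaction networks and player counts, and the Pearson product–moment coefficient is computed on the pooled pairs. The record layout, key names and example values below are hypothetical and only illustrate the bookkeeping.

```python
import numpy as np


def pearson_for_degree(records, degree, measure_key, fixation_key):
    """records: iterable of dicts with keys 'd', 'n', 'network_id', measure_key, fixation_key.
    Pools all entries with the given number of coplayers and returns Pearson's r."""
    xs, ys = [], []
    for rec in records:
        if rec["d"] == degree:
            xs.append(rec[measure_key])
            ys.append(rec[fixation_key])
    if len(xs) < 2:
        return 0.0   # a correlation cannot be based on fewer than two data pairs
    return float(np.corrcoef(xs, ys)[0, 1])


# Example usage with hypothetical entries (one per network instance and player count):
records = [
    {"d": 2, "n": 6, "network_id": 0, "info_content": 0.41, "p_coop": 0.12},
    {"d": 2, "n": 8, "network_id": 1, "info_content": 0.38, "p_coop": 0.09},
    {"d": 2, "n": 8, "network_id": 2, "info_content": 0.44, "p_coop": 0.10},
]
print(pearson_for_degree(records, 2, "info_content", "p_coop"))
```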
Comparing the results in figs. [fig:corr_l_coop] and [fig:corr_i_coop] tells us that the correlations between the correlation length and either the fixation probability or the fixation time are volatile and hardly evaluable, while for the information content there are clear trends. The same can be said about the defective fixation, see figs. [fig:corr_l_def] and [fig:corr_i_def]. Further analysis (not shown in a figure) confirms that this remains the case if the calculation is done for each number of players and coplayers separately. Hence, it appears justified to conclude that the information content correlates more clearly to fixation properties than the correlation length. Another result (also not shown in a figure for brevity reasons) is that the number of local maxima correlates poorly to fixation properties. Apart from the fact that there is no correlation for the PD game with BD updating and the fixation of cooperation, as the fixation probability is zero in this case, there are further results to note. For the well-mixed case, the correlations are always zero. The reason is that with the experimental setup employed (see sec. [sec:setup]) there is just one instance of the interaction matrix in this case; as a correlation cannot be based on a single data pair, the correlation must be zero. Furthermore, it can be seen that there is a negative correlation between the information content and the fixation probabilities, while its correlation with the fixation times is positive. This appears reasonable, as the fixation probabilities fall with an increasing number of players, yet the fixation times grow. Also, it can be observed that generally the correlations are strongest for small numbers of players and weaken before they reach zero. Regarding the shaping and typifying effect of absorption, the following can be observed when comparing the local absorption structure in tab. [tab:absorp] with the correlations in figs. [fig:corr_i_coop] and [fig:corr_i_def]. From a topological point of view, the correlation should be particularly strong if the absorbing configuration is a maximum and the initial configurations are all minima; for absorbing and initial configurations being minima and maxima, the opposite should apply. Only for one example is the final absorbing configuration a maximum and are the initial configurations minima for all tested settings: the defective absorption of the SD game with one of the two ratios. For this case, the correlation between the information content and the fixation probability is indeed slightly stronger than for the other cases; however, for the correlation with the fixation time the opposite is true. In general, it can be noted that the correlations for all settings (except PD with BD updating for the cooperative absorption) are clearly visible and rather similar. It can be concluded that while there are some hints in the local absorption structure that help clarify the correlations between landscape measures and fixation properties, the explanatory framework should not be overstretched. Analogous to a landscape analysis that only focuses on selected points in the landscape (for instance the maxima/minima), the local absorption structure only captures a subset of the topological structures that shape coevolutionary game dynamics. Therefore, extensions toward a global absorption structure seem desirable. A last set of experiments reports correlations between landscape measures and network measures of the interaction matrices. Figs.
[ fig : corr_matrix1][fig : corr_matrix3 ] give the correlations between information content and either graph energy , eq .( [ eq : ener ] ) , independence number , eq .( [ eq : ind ] ) , algebraic connectivity , eq .( [ eq : conn ] ) , or expander index , eq .( [ eq : expan ] ) .again , the pearson coefficient is used aggregating over interaction networks and the number of players for each number of coplayers .the same color code for the game settings as for the correlations with fixation properties is used .it can be seen that the information content correlates well to all networks measures , and there are small differences between the landscapes with ( fig .[ fig : corr_matrix1 ] ) and ( fig .[ fig : corr_matrix2 ] ) .it is conspicuous that the results are indistinguishable for bd and db landscapes , which fully reflects the symmetry properties of these landscapes .the symmetry is broken by replacements restrictions .the correlations between the network measures and for and replacement restrictions reported in fig .[ fig : corr_matrix3 ] confirm this as there are differences between bd and db .the correlations , however , are less smooth over as compared to the results in figs .[ fig : corr_matrix1 ] and [ fig : corr_matrix2 ] , which is most likely due to the additional stochastic nature of replacement restrictions .lastly , two more observations can be noted .a first is that the correlations between and the networks measures are negative for graphs energy , independence number and expander index , while they are positive for algebraic connectivity .the main reason is that amongst the network measures studied only algebraic connectivity increases continuously for getting larger in both mean and variance over instances of the interaction matrices .a second is that for the graph energy and the expander index the correlations are weak for both small numbers of coplayers ( ) before they get stronger to weaken again for larger number of players ( ) . for the other two network measures the weakening is only for , which is similar to the correlations with fixation properties .the results given above set out relationships between landscape measures and both fixation properties and network properties , and argue that dynamic landscape models of coevolutionary games are viable . in this section ,features and implications of such a modeling approach are discussed and some concluding observations are offered . 1 . 
an essential part of the experimental study is the study of correlations between landscape measures and both fixation properties and quantifiers of the networks of interaction .the main results are that information content scales much better to fixation properties and network measures than the other landscape measures considered .particularly ruggedness as measured by the correlation length relates less clearly and much weaker to fixation properties than information content , which is understood to account not specifically for ruggedness , but more for the interplay between smooth , rugged and flat landscape areas .hence , a conclusion may be that also for game landscapes ruggedness alone is not a good predictor for evolutionary dynamics , as also reported for other types of fitness landscapes , .as there are additional entropic landscape measures based on the information content , for instance partial information content , information stability or density basin information , , it might be interesting to study whether these measures also scale well for game landscapes and may offer further insight into game dynamics .regarding the correlations between landscape measures and quantifiers of interaction networks , the results are more consistent .there are clear correlations for all four of the network measures considered , with algebraic connectivity and independence number scaling slightly better than graph energy and expander index .however , it should be noted that the correlations established between the landscape measures and network measures are based on the laplacian or adjacency spectra of the adjacency matrix . as these spectra do not uniquelydetermine the interaction graph , there might be correlations between the graph structure of the interaction network and the game landscape that are not captured .the discussion might be extendable by considering alternatives , for instance generalized graph distance measures as reported by .the experiments studying the effect of different networks of interaction with given and on fixation properties and landscape measures only report mean , maximum , and minimum values and their interdependencies .further statistical analysis , for instance considering variances , or higher order moments , or estimates of the underlying distribution , are deliberately omitted .the main argument is that we should beware of drawing conclusions based on such a far reaching statistical analysis as it is not clear what it might really signify .for instance , for coplayers , the number of different graphs can be enumerated exactly by eq .( [ eq:2regular ] ) .the experiments presented are for up to players .accordingly , at the upper limit of the experimental setup , we find . there is no alternative but to conclude that any number of numerically testable networks of interaction represents just a tiny subset of all interaction networks . at the same time it is far from being clearhow well the finite number of graphs generated by the numerical procedures represent all possible different graphs for given and .thus , it might be possible that some trends are biased by the algorithmic process of numerically generating interaction networks .further work is needed to clarify these interdependencies .3 . 
the experimental results showing landscape measures and their correlations to fixation properties as well as to graph theoretical quantities of the interaction networks are for the specific algebraic description of the game landscapes ( [ eq : bd ] ) and ( [ eq : db ] ) .the algebraic form of how to cumulate death and birth probabilities from the player wise strategy landscapes is definatory and was employed as it fitted well to fixation properties and previous results known about social dilemma game dynamics .it is an open question whether an alternative algebraic form of ( [ eq : bd ] ) and ( [ eq : db ] ) can achieve similar or even better results .similarly , the results obtained here are specific for the linear relation .hence , it might be interesting to analyze how different payoff to fitness relations modify the results , for instance the exponential relations or , as suggested by .this may go along with experimental studies of different levels of the intensity of selection , which is also opened up by the game landscape approach proposed in this paper . in the case of weak selection , that is for , the player wise strategy landscapes lose their distinct topological features , which yields ( in the absence of replacement restrictions ) a game landscape that is neutral .consequently , the game dynamics on this landscape would be random drift . for larger or large values of topological features of the player wise strategy landscapes become more prominent , for which the game landscape is more rugged . from this line of argumentit can be conjectured that there is a direct relation between the intensity of selection and the ruggedness of the game landscape , which might be verifiable by future studies .the results presented are for up to players .if the trends identified in these results remain valid for a larger number of players needs to be studied in future work .a limitation in these studies surely is that the number of configurations to be analyzed in the landscapes increases exponentially with the number of players , hence setting bounds as to how far such experiments might be extendable .therefore , with the computational resources currently available the modeling framework is likely to be confined to a moderate number of players .however , for an increased number of players there is the framework of replicator dynamics which sufficiently describes game dynamics for populations becoming large , .some primary experiments have shown that for replacement restrictions , the correlation between landscape measures on the one hand , and fixations and network properties on the other , cease .apparently , the replacement restrictions seriously modify the structure of the game landscapes .it is another open question if these relationships can be reestablished by taking into account properties of the replacement process , for instance the absorption structure of the restricted network .this may be extended by foregoing the setting that the degree of the replacement matrix matches the degree of the adjacency matrix .coevolutionary games cast players that update their strategies as well as their networks of interaction . 
in this study ,a reinterpretation of coevolutionary games as dynamic fitness landscapes is proposed .the dynamic landscapes are based on three major components : ( i ) a description of strategy updating as a moran process with definable probabilities of strategy transitions , ( ii ) a formulation of network of interaction updating as instances of random regular graphs , and ( iii ) a linear relation between payoff and fitness . using these components , payoff related fitness landscapescan be defined for each player .it is further shown that coevolutionary game dynamics can be expressed by a game landscape derived from these player wise landscapes by including the strategy updating process . moreover ,different strategy updating processes , such as death birth ( db ) or birth death ( bd ) produce different game landscapes , which can be seen as strategy updating breaking the symmetry of the play wise landscapes . in numerical experimentsit has been demonstrated that landscape measures such as modality , ruggedness and information content allow to differentiate between different game landscapes .fixation probabilities and fixation times have been calculated as well as network measures characterizing the networks of interaction of the coevolutionary games . by correlation analysisit has been shown how the landscape measures relate to both fixation properties and network measures .the approach presented is a technique for analyzing coevolutionary games by landscapes .moreover , the approach is not restricted to moran processes as long as strategy transition probabilities can be derived , at least approximately .finally , networks updating is currently modeled as a given sequence of random regular graphs , but should be understood as a transition process , for instance by using reproducing graphs , as a tool to refine the description of transitions between adjacency matrices .different settings of the game represented by the numeric values of the payoff matrix and different rules of the strategy updating result into a large variety of coevolutionary game dynamics .a considerable number of works have analyzed and discussed this game dynamics with respect to fixation properties such as fixation probability and fixation time from both a theoretical as well as an experimental point of view , .the results reported here contribute to this discussion by offering a fitness landscape view as an alternative explanatory framework .in other words , by the approach presented coevolutionary games may become amenable to be analyzed by dynamic landscapes .g. w. greenwood and d. ashlock , evolutionary games and the study of cooperation : why has so little progress been made ? in : h. abbass , d. essam , r. sarker ( eds . ) , proc .ieee congress on evolutionary computation , ieee cec 2012 , ieee press , piscataway , nj , 18 , 2012 .greenwood , g. w. , avery , p. , 2014 .does the moran process hinder our understanding of cooperation in human populations ? in : rudolph , g. , preuss ,m. ( eds . ) , proc . ieee conference on computational intelligence and games , ieee cig 2014 , ieee press , piscataway , nj , pp . 16 .hindersin , l. , traulsen , a. , 2015 .most undirected random graphs are amplifiers of selection for birth - death dynamics , but suppressors of selection for death - birth dynamics .plos comput biol 11(11 ) : e1004437 .doi:10.1371/journal.pcbi.1004437 .richter , h. , 2014a .fitness landscapes that depend on time . in : richter , h. , engelbrecht , a. p. ( eds . 
) ,recent advances in the theory and application of fitness landscapes , springer verlag , berlin heidelberg new york , pp .265299 .richter , h. , 2014b .codynamic fitness landscapes of coevolutionary minimal substrates . in : c. a. coello coello ( ed . ) proc .ieee congress on evolutionary computation , ieee cec 2014 , ieee press , piscataway , nj , pp . 26922699 .richter , h. , 2015 .coevolutionary intransitivity in games : a landscape analysis . in : mora , a. m. , squillero , g. ( eds . ) , applications of evolutionary computation - evoapplications 2015 , springer - verlag , berlin , pp . 869881 .richter , h. , 2016 .analyzing coevolutionary games with dynamic fitness landscapes . in : ong , y. s. ( ed . ) , proc .ieee congress on evolutionary computation , ieee cec 2016 , ieee press , piscataway , nj , pp . 610616 .wormald , n. c. 1999 .models of random regular graphs . in : lamb , j.d ., preece , d.a .( eds . ) , surveys in combinatorics , london mathematical society lecture note series , vol .267 , cambridge university press , cambridge , pp . 239298
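as a rough illustration of the spectral network quantities used in the correlation analysis above ( not code from the paper ) , the following python sketch computes graph energy as the sum of the absolute adjacency eigenvalues and algebraic connectivity as the second - smallest laplacian eigenvalue , and correlates them with a placeholder `` information content '' value via the pearson coefficient . the random graph generator and the placeholder values are assumptions made here , and the independence number and expander index are omitted .

```python
import numpy as np

def graph_energy(adj):
    # graph energy: sum of absolute values of the adjacency eigenvalues
    return np.abs(np.linalg.eigvalsh(adj)).sum()

def algebraic_connectivity(adj):
    # second-smallest eigenvalue of the combinatorial laplacian l = d - a
    lap = np.diag(adj.sum(axis=1)) - adj
    return np.sort(np.linalg.eigvalsh(lap))[1]

def pearson(x, y):
    x, y = np.asarray(x, float), np.asarray(y, float)
    x -= x.mean(); y -= y.mean()
    return (x @ y) / np.sqrt((x @ x) * (y @ y))

rng = np.random.default_rng(0)

def random_symmetric_graph(n, p=0.3):
    # placeholder for one instance of the interaction network
    # (the article itself uses random k-regular graphs instead)
    a = (rng.random((n, n)) < p).astype(float)
    a = np.triu(a, 1)
    return a + a.T

graphs = [random_symmetric_graph(12) for _ in range(50)]
# toy stand-in for the information content of the corresponding landscapes
info_content = [0.1 * graph_energy(a) + rng.normal(0, 0.5) for a in graphs]

print("corr(information content, graph energy)  =",
      pearson(info_content, [graph_energy(a) for a in graphs]))
print("corr(information content, alg. connect.) =",
      pearson(info_content, [algebraic_connectivity(a) for a in graphs]))
```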
|
players of coevolutionary games may update not only their strategies but also their networks of interaction . based on interpreting the payoff of players as fitness , dynamic landscape models are proposed . the modeling procedure is carried out for prisoner s dilemma ( pd ) and snowdrift ( sd ) games that both use either birth death ( bd ) or death birth ( db ) strategy updating . with the main focus on using dynamic fitness landscapes as an alternative tool for analyzing coevolutionary games , landscape measures such as modality , ruggedness and information content are computed and analyzed . in addition , fixation properties of the games and quantifiers characterizing the network of interaction are calculated numerically . relations are established between landscape properties expressed by landscape measures and quantifiers of coevolutionary game dynamics such as fixation probabilities , fixation times and network properties .
|
this is an attempt to `` derive '' space from very general assumptions : \1 ) first we postulate the existence of a phase space or state space , which is quite general and abstract .it is so to speak an `` existence space '' , with very general properties , and to postulate it is close to assume nothing .so we start with the quantized phase space of very general analytical mechanics : where is huge .this is ( almost ) only quantum mechanics of a system with a classical analogue , which is a very mild assumption .\2 ) for the hamiltonian we then examine the statistically expected `` random '' functional form ( random and generic ) .\3 ) in the phase space we single out an `` important state '' and its neighbourhood - the `` important state '' supposedly being the ground state of the system .the guess is that the `` important state '' is such that the state of the universe is in the neighbourhood of this `` important state '' - which presumably is the vacuum .the state we know from astronomical observations is very close to vacuum .according to quantum field theory this means a state which mainly consists of filled dirac seas , with only very few true particles above the dirac seas , and very few holes .this vacuum is our `` important state '' , supposedly given by a wave packet .if the system considered is the whole universe , each point in the phase space is a state of the world .classically , a state is represented as a in phase space , but quantum mechanically , due to heisenberg , this phase space point extends to a volume .now assume that this volume is not nicely rounded , but stretched out in some phase space directions , and compressed in others .the phase space has dimensions , so a wave packet apriori fills a -dimensional region .our assuption is that the vacuum wave packet is narrow in roughly of these dimensions .the vacuum state is thus extended to a very long and narrow surface of dimension in the phase space ( where is half the phase space dimension ) .the really non - empty information in this assumption is that some of the widths are much smaller than others . is moreover enormous , equal to the number of degrees of freedom of the universe , so our model is really like a particle in dimensions , .the `` important state '' is one where `` the particle '' is in a superposition of being in enormously many places ( and velocities ) .we envisage the points along the narrow , infinitely thin wave packet as embedded in the phase space , and that they in reality are our space points . in relation to this infinitely narrow `` snake '' , these points are seemingly `` big '' ( one can imagine the points as almost filling up the snake volume in the transversal direction ). in the simplest scheme half of the phase space dimensions are narrow on this snake , and the other half are very extended , long dimensions on the snake . along the snake surface , the `` important state '' vacuum wave packet , i.e. the wave function of the universe ,is supposed to be approximately constant . with constant , reparametrization( once it has been defined ) under continuous reshuffling of the `` points '' along the long directions of the wave packet , is a symmetry of the `` important state '' . the idea is to first parametrize the `` longitudinal '' dimensions so gets normalized to be all along the snake .it is however not we are most interested in , but the probability of the universe to be at x , corresponding to where stands for transverse . 
with some smoothness assumptions, the longitudinal dimensions will be like a manifold , i.e. the points given by the longitudinal dimensions constitute a `` space manifold '' .since is huge , the wave packet extension is probably also huge . andsince there is a huge number of possibilities in phase space , the snake is most certainly also very curled . a wave packet can be perceived as easily excitable displacements of the transversal directions of the -dimensional snake ( approximate ) manifold .there are presumably different and at different points on the manifold , and states neighbouring to the vacuum ( `` the important state '' ) correspond to wave packets just a tiny bit displaced from the vacuum .thus the true state is only somewhat different from the vacuum ( there is a topology on the phase space , so `` sameness '' and `` near sameness '' can be meaningfully defined ) .corresponding to different points on the long directions of the wave packet ( manifold ) , `` easy '' excitations can then be represented as some combinations of the ordered set , where and are different phase space points of the -dimensional manifold .the `` easy '' degrees of freedom are thus assigned to points on the manifold , so an `` easy '' displacement on the snake is extended over some region along the snake , that is , in x. in that sense the `` easy '' degrees of freedom can be interpreted as functions of x , , , .... , which actually look like fields on the manifold ( this is just notation , but in some limit it is justified ) .the wave packet consisting of easily excitable displacements , can then be perceived as superpositions of the .a field is just degrees of freedom expressed as a function of x ( a field actually has to be a degree of freedom , in the sense that it is among parameters describing the state of the universe ) , and these superpositions really seem to be fields .now , let us make superpositions of such `` easy '' displacements to form one only non - zero displacement very locally , this is certainly legitimate .but with the identification of the snake with space ( or the space manifold ) , we should require that changing a field only at corresponds to keeping the snake unchanged , except at .so far we have identified the `` important state '' as the `` ground state '' , i.e. the classical ground state snake .now consider the classical approximation for directions transverse to the snake : in the transverse directions ( ) , taking as function of at the minimum of the crossing point with the snake ( chosen to be the origin ) , the taylor expansion of with regard to near the snake is given by ( discarding unimportant constants ) second order expansions we now diagonalize , i.e. look for eigenvalues of the matrix , where the `` easy modes '' correspond to the lowest eigenvalues . from smoothness considerationsthese eigenvalues can be defined as continuous and differentiable as functions of x , where x are the coordinates along the snake .so , if , we could strictly speaking identify these eigenvalues by enumeration : the lowest , next lowest , etc . , except for crossings . as an example take a very specific hamiltonian giving cotecurves of by choice of coordinates , so , and the commutator [ being very complicated . 
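the diagonalization step just described can be mimicked numerically : take a random positive - definite matrix as a stand - in for the second - derivative matrix of the hamiltonian in the transverse directions at one point on the snake , diagonalize it , and read off the lowest eigenvalues as the `` easy modes '' . this is only a toy sketch of the procedure ; the matrix , its size and the displacement below are arbitrary choices and not the actual hamiltonian .

```python
import numpy as np

rng = np.random.default_rng(1)

# toy stand-in for the matrix of second derivatives of the hamiltonian
# with respect to the transverse phase-space coordinates at one point on
# the snake; positive definite so that point is a transverse minimum
n_transverse = 8
b = rng.normal(size=(n_transverse, n_transverse))
hessian = b @ b.T + 0.1 * np.eye(n_transverse)

# diagonalize: eigenvalues are the "stiffnesses" of the transverse modes
eigvals, eigvecs = np.linalg.eigh(hessian)   # ascending order

# the "easy modes" are the directions with the smallest eigenvalues,
# i.e. the transverse displacements that cost the least energy
n_easy = 2
easy_vals = eigvals[:n_easy]
easy_modes = eigvecs[:, :n_easy]

print("all stiffnesses :", np.round(eigvals, 3))
print("easy stiffnesses:", np.round(easy_vals, 3))

# a small displacement along an easy mode raises the quadratic energy
# by roughly 0.5 * lambda * eps^2
eps = 1e-2
d = eps * easy_modes[:, 0]
print("energy cost     :", 0.5 * d @ hessian @ d)
```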
until now , our main assumption is that the world is in a state in the neighbourhood of `` the vacuum snake '' .the true snake is in reality a state that can be considered a superposition of a huge number of states that are all needed to be there in the ground state because there are terms in the hamiltonian with matrix elements between these states ( of which it is superposed ) .we could think of these terms enforcing the superposition for the ground state as some kind of `` generalized exchange forces . '' to go far away from the snake would be so rare and so expensive that it in principle does nt occur , except at the big bang .it is also possible that the snake is the result of some hubble expansion - like development just shortly after big bang .it must in reality be the expansion that has somehow brought the universe to be near an effective ground state or vacuum , because we know phenomenologically from usual cosmological models that the very low energy density reached is due to the hubble expansion .thinking of some region following the hubble expansion , its space expands but we can nevertheless consider analytical mechanical systems . starting with a high energy density state , i.e. rather far from vacuum , the part of the snake neighbourhood which is used gets smaller and smaller after big bang . already very close to the singularity - if there were one - the only states were near the snake .we may get away from the `` snake valley '' , but only at planck scale energies . and we will probably never have accelerators bringing the state very far away from the snake .so far , we have identified `` the snake '' in the phase space of the very general and very complicated analytical mechanics system quantized .aiming at deriving a three - dimensional space , we must have in mind that this manifold , which is the protospace , has a very high dimension of order which is the number of degrees of freedom of the whole universe .if that were what really showed up as the dimension of space predicted by our picture , then of course our picture would be immediately killed by comparison with experiment .if there shall be any hope for ever getting our ideas to fit experiment , then we must at least be able to speculate or dream that somehow the effective spatial dimension could be reduced to become 3 .for many different reasons , it seems justified to believe that 3 is the dimension of space .the naive argument is that we experience space as 3-dimensional , the number of dimensions is however not to be taken for granted , as we know from e. g. kaluza - klein , and string theory .we shall in the following at least refer to some older ideas that could make such a reduction possible .for instance one can have that in some generic equations of motion one gets for the particle only non - zero velocity in three of the a priori possibly many dimensions .in the 1920-ies paul ehrenfest argued that for a = -dimensional spacetime with , a planet s orbit around its sun can not remain stable , and likewise for a star s orbit around the center of its galaxy . about the same time , in 1922 , hermann weyl stated that maxwell s theory of electromagnetism only works for , and this fact _ ``... not only leads to a deeper understanding of maxwell s theory , but also of the fact that the world is four dimensional , which has hitherto always been accepted as merely accidental, become intelligible through it . 
'' _ the intuition that four dimensions are special is also supported by mathematician simon donaldson , whose work from the early 1980-ies on the classification topological four - manifolds indicates that the most complex geometry and topology is found in four dimensions , in that only in four dimensions do exotic manifolds exist , i.e. 4-dimensional differentiable manifolds which are topologically but not differentiably equivalent to the standard euclidean .the existence of such wealth in 4-dimensional complexity is reminiscent of leibniz idea that god maximizes the variety , diversity and richness of the world , at the same time as he minimizes the complexity of the set of ideas that determine the world , namely the laws of nature .only , leibniz never told in what dimensions this should be the case , but according to donaldson , this wealth of structure is maximal precisely in a 4-dimensional spacetime manifold . another way to `` derive '' dimensions ,is by assigning primacy to the weyl equation .the argument is that in a non - lorentz invariant world , the weyl equation in dimensions requires less finetuning than other equations .this means that in dimensions the weyl equation is especially stable , in the sense that even if general , non - lorentz invariant terms are added , the weyl equation is regained .so in this scheme both dimensions and lorentz invariance eventually emerge .before dimensions there is no geometry . starting with an abstract mathematical space with hermitian operators and , and a wave function in a world without geometry , choose a two - component wave function , where is the energy . in vielbein formulationthis is , which is the weyl equation with hermitian matrices that are the pauli matrices , , .the vielbeins are really just coefficients coming about because we write the most general equation .the weyl equation is lorentz invariant and the most general stable equation with a given number of -components , and as a general linear equation with hermitian matrices , it points to . in dimensionsthe weyl equation reads =(0,1,2,3 ) , and the metric is of rank=4 . if the dimension , there is however degeneracy . for each fermion, there are generically two weyl components .if we had a generic equation with a 3-component , we would in the neighbourhood of a degeneracy point in momentum space , have infinitely many points with two of the three being degenerate .assume that has components , .consider a -dimensional subspace of the -space spanned by the -components , with , and at the `` -degenerate point '' , there is a -dimensional subspace in -space ( -dim ) for which , with only one for the whole -dimensional subspace ( degenerate eigenvalue with degeneracy - the eigenvalue is constant in the entire -dimensional subspace ) . in the neighbourhood we generically have extra in , where for which .there are lower degeneracy points in the neighbourhood ( meaning -combinations with more than one polarization ) , where in the situation with two polarizations . in the above figure the 2-generate point and the curves outside of represent the situation where only one eigenvector in -space is not degenerate . 
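the special stability of the two - component weyl equation in 3 + 1 dimensions can be checked with a few lines of numerics : for a hamiltonian built from the three pauli matrices the two bands touch only at one point in momentum space , and a generic constant hermitian perturbation merely shifts that touching point ( to p = -b in the parametrisation below ) rather than opening a gap . the numerical values are arbitrary and only illustrate this generic behaviour .

```python
import numpy as np

# pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sigma = (sx, sy, sz)

def weyl_gap(p, b=(0.0, 0.0, 0.0), c=0.0):
    # two-component hamiltonian h = p.sigma + b.sigma + c*1;
    # returns the splitting between its two eigenvalues
    h = sum(pi * si for pi, si in zip(p, sigma))
    h = h + sum(bi * si for bi, si in zip(b, sigma)) + c * np.eye(2)
    e = np.linalg.eigvalsh(h)
    return e[1] - e[0]

# unperturbed weyl point: bands degenerate exactly at p = 0
print(weyl_gap((0.0, 0.0, 0.0)))                  # 0.0
print(weyl_gap((0.1, 0.0, 0.0)))                  # 0.2, i.e. gap = 2|p|

# a generic hermitian perturbation does not open a gap,
# it only moves the degeneracy from p = 0 to p = -b
b = (0.03, -0.05, 0.02)
print(weyl_gap((0.0, 0.0, 0.0), b=b, c=0.1))      # nonzero at the old point
print(weyl_gap((-0.03, 0.05, -0.02), b=b, c=0.1)) # ~0 at the shifted point
```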
in the neighbourhood of a `` generic '' 3-degenerate ( or more ) pointthere are also 2-degenerate points .but the crux is the filling of the dirac - sea .think of the dispersion relation as a topological space : can we divide this topological space into two pieces , one `` filled '' and one `` unfilled '' so that the border surface only consists of degenerate states / dispersion points ? if not , we have a `` metal '' .the question is whether there is a no - metal theorem . to begin with, we can formulate one almost trivial theorem : if the border contains a more than 3-degenerate point , we generally either have a metal or else 2-degenerate points on this border .there is also the disconnected dispersion relation , corresponding to an insulator .counter example : imagine a 6-dimensional weyl equation . in this case, the border has only one point in the 6-dimensional weyl , so there is only a 4-degenerate point and no 2-degenerate points on the border .the statement about the stability of the weyl equation in dimensions would thus be false if the 6-dim weyl were `` generic '' .but it is not , so there is no problem . in dimensionsthe number of matrices is ( where project to weyl , i.e. the handedness ) , and the weyl has components .that means that there are matrix elements in each projected . assuming that the dimension is even , normal matrices ( i.e. dirac gamma matrices ) have matrix elements in each , for , one can form matrices which on the one hand act on the weyl field ( with its components ) , but on the other hand are not in the space spanned by the projected -matrices .one could in other words change the weyl equation by adding some of these matrices , thus for the weyl equation is not stable under addition of further terms .so the weyl equation is not `` generic '' for , i. e. it so to speak has zero measure ( in the sense that if you write down a random equation of the form \psi = 0 ] , where are defined in four - dimensional spacetime represented by x , the reparametrization invariance implying that = s[\psi] ] we formulate some theorems : with our assumptions , the `` action '' ] , where , , the invariance meaning that = s[\psi] ] only depends on how many in each group are identical .all aberrances belong to a null set , and if we ignore this null set , we have } { \delta \psi(y^{(1)}) ... \delta \psi(y^{(k ) } ) } = f_k\ ] ] which is independent of the s . we then have = \sum_{k=0}^{\infty } \frac{1}{k ! } \int \cdots \int \frac { \delta^k s}{\delta \psi(y^{(1 ) } \cdots \delta \psi(y^{(k ) } ) } \psi(y^{(1 ) } \cdots \psi(y^{(k)}d^4y^{(1 ) } \cdots d^4 y^{(k ) } = \\ = \sum \frac{f_k}{k ! }\int\psi(y^{(1 ) } \cdots \psi(y^{(k ) } d^4y^{(1 ) } \cdots d^4 y^{(k ) } \end{split}\ ] ] and so we got `` mild '' locality of the form ( [ local ] ) , i.e. some function of usual action - like terms ( in reality `` mild '' super local where super stands for no derivatives ) . 
now ,if the null set argument is incorrect , consider that and where are constants .here we integrate over all points , whereby the same points might reappear several times .the resulting action is of the form now , what does such an action look like locally ?we can taylor expand : }{\delta \psi(y)}|_{\psi=\psi_a } = \nonumber\\ \displaystyle\sum_{\chi=1 } \frac{\partial f}{\partial ( \int\psi(x)^{\chi}\mathrm{d}^4x ) } \chi \psi(x)^{\chi - 1}= f(\psi(x))\ ] ] where can be locally approximated with a constant , and depends on what happens in the entire universe .we now have a situation where ( where the function h is defined so that i.e. it is the stem function of f ) , corresponding to a super local lagrangian . as an exercise we will consider a theory with and ( a contravariant vector field ) , keeping in mind that and transform differently under diffeomorphisms .taylor expanding the functional ] .this `` derivation '' of locality were initiated in collaboration with don bennett .we have in this article sought to provide some - perhaps a bit speculative - ideas for how to `` derive '' spacetime from very general starting conditions , namely a quantized analytical mechanical system . from a few and very reasonable assumptions ,spacetime almost unavoidably appears , with the empirical properties of 3 + 1 dimensionality , reparametrization symmetry - and thereby translational invariance , existence of fields , and practical locality ( though not avoiding the nonlocalities due to quantum mechanics ) .our initial assumption was that the states of the world were very close to a ground state , which in the phase space was argued to typically extend very far in dimensions , while only very shortly in the other dimensions . herethe number of degrees of freedom were called and thus the dimension of the phase .this picture of the ground state in the phase space we called the snake , because of its elongation in some , but not all directions .the long directions of the snake becomes the protospace in our picture .the translation and diffeomorphism symmetry are supposed to come about by first being formally introduced , but spontaneously broken by some `` guendelmann fields '' .it is then argued that this spontaneous breaking is `` fluctuated away '' by quantum fluctuations , so that the symmetry truly appears , in the spirit of lehto - ninomiya - nielsen . at the endwe argued that once having gotten diffeomorphism symmetry , locality follows from simple taylor expansion of the action and the diffeomorphism symmetry .we consider this article as a very significant guide for how the project of random dynamics - of deriving all the known physical laws - could be performed in the range from having quantum mechanics and some smoothness assumptions to obtaining a useful spacetime manifold .first we would like to thank don bennett who initially was a coauthor of the last piece of this work : the derivation of locality ; but then fell ill and could not finish and continue .one of us ( hbn ) wants to thank the niels bohr institute for support as an professor emeritus with office and support of the travels for the present work most importantly to bled where this conference were held .but also dr .breskvar is thanked for economical support to this travel .nielsen , d.l .bennett and n. brene , _ the random dynamics project or from fundamental to human physics _ ,recent developments in quantum field theory ( 1985 ) , ed . j. ambjorn , b.j .durhuus , j.l .petersen , isbn 0444869786 ; h.b .nielsen ( bohr inst . 
) , _ random dynamics and relations between the number of fermion generations and the fine structure constants _ , nbi - he-89 - 01 , jan 1989 . 50pp .talk presented at zakopane summer school , may 41 - jun 10 , 1988 . published in acta phys .b20:427 , 1989 ; `` origin of symmetries '' , c. d. froggatt and h.b .nielsen , world scientific ( 1991 ) ; weyl , h. _ space , time , and matter_. dover reprint : 284 ; c. d. froggatt , h.b .nielsen , _ derivation of lorentz invariance and three space dimensions in generic field theory _ , arxiv : hep - ph/0211106 ( 2002 ) .
|
this attempt to `` derive '' space is part of the random dynamics project . the random dynamics philosophy is that what we observe at our low energy level can be interpreted as some taylor tail of the physics taking place at a higher energy level , and all the concepts like numbers , space , symmetry , as well as the known physical laws , emerge from a `` fundamental world machinery '' being a most general , random mathematical structure . here we concentrate on obtaining spacetime in such a random dynamics way . because of quantum mechanics , we get space identified with about half the dimension of the phase space of a very extended wave packet , which we call `` the snake '' . in the last section we also explain locality from diffeomorphism symmetry .
|
with the popularity and accessibility of online social networks ( osns ) , e.g. , facebook , meetup , and skout , more and more people initiate friend gatherings or group activities via these osns .for example , more than 16 millions of events are created on facebook each month to organize various kinds of activities , and more than 500 thousands of face - to - face activities are initiated in meetup .the activities organized via osns cover a wide variety of purposes , e.g. , friend gatherings , cocktail parties , concerts , and marathon events .the wide spectrum of these activities shows that osns have been widely used as a convenient means for initiating real - life activities among friends . on the other hand , to help users expand their circles of friends in the cyberspace , friend recommendation services have been provided in osns to suggest candidates to users who may likely become mutual friends in the future .many friend recommendation services employ link prediction algorithms , e.g. , , to analyze the features , similarity or interaction patterns of users in order to derive potential future friendship between some users . by leveraging the abundant information in osns ,link prediction algorithms show high accuracy for recommending online friends in osns .as social presence theory in social psychology suggests , computer - mediated online interactions are inferior to face - to - face , in - person interactions , off - line friend - making activities may be favorable to their on - line counterparts in cyberspace . therefore ,in this paper , we consider the scenarios of organizing face - to - face friend - making activities via osn services . notice that finding socially cohesive groups of participants is essential for maintaining good atmosphere for the activity .moreover , the function of making new friends is also an important factor for the success of social activities , e.g. , assigning excursion groups in conferences , inviting attendees to housewarming parties , etc .thus , for organizing friend - making social activities , both activity organization and friend recommendation services are fundamental .however , there is a gap between existing activity organization and friend recommendation services in osns for the scenarios under consideration .existing activity organization approaches focus on extracting socially cohesive groups from osns based on certain cohesive measures , density , diameter , of social networks or other constraints , e.g. , time , spatial distance , and interests , of participants ycl11,ysl12,zhx14,syy14 . on the other hand ,friend recommendation services consider only the _ existing friendships _ to recommend potential new friends for an individual ( rather than finding a group of people for engaging friend - making ) .we argue that in addition to themes of common interests , it is desirable to organize friend - making activities by mixing the `` potential friends '' , who may be interested in knowing each other ( as indicated by a link prediction algorithm ) , with existing friends ( as lubricators ) . to the best knowledge of the authors , the following two important factors , 1 ) the existing friendship among attendees , and 2 ) the potential friendship among attendees , have not been considered simultaneously in existing activity organization services .to bridge the gap , it is desirable to propose a new activity organization service that carefully addresses these two factors at the same time . 
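link prediction is used in what follows only as a black box that supplies the likelihood of two non - friends becoming friends . as one common choice ( an assumption for illustration , not necessarily the predictor used here ) , a jaccard score over the neighbourhoods of two non - adjacent users falls in ( 0,1 ] and can be attached to the corresponding potential edge :

```python
# toy friendship graph as an adjacency dict: user -> set of existing friends
friends = {
    "a": {"b", "c"},
    "b": {"a", "c", "d"},
    "c": {"a", "b", "e"},
    "d": {"b", "e"},
    "e": {"c", "d"},
}

def jaccard(u, v):
    # overlap of the two neighbourhoods; one simple link-prediction score
    inter = friends[u] & friends[v]
    union = friends[u] | friends[v]
    return len(inter) / len(union) if union else 0.0

# weight every non-friend pair (a "potential edge") by its predicted likelihood
potential_edges = {}
users = sorted(friends)
for i, u in enumerate(users):
    for v in users[i + 1:]:
        if v not in friends[u]:
            score = jaccard(u, v)
            if score > 0:
                potential_edges[(u, v)] = score

print(potential_edges)
# e.g. ("a", "d") gets a positive weight because a and d share the friend b
```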
in this paper, we aim to investigate the problem of selecting a set of candidate attendees from the osn by considering both the existing and potential friendships among the attendees .to capture the two factors for activity organization , we propose to include the likelihood of making new friends in the social network . as such, we formulate a new research problem to find groups with tight social relationships among existing friends and potential friends ( i.e. , who are not friends yet ) .specifically , we model the social network in the osn as a heterogeneous social graph with edge weight ] indicates that individuals and are likely to become friends ( the edge weight ] with weight 0.6 , is a potential edge and a solid line , e.g. , , is a friend edge .figure [ h_1 ] shows a group : which has many potential edges and thus a high total weight .however , not all the members of this group have common friends as social lubricators . figure [ h_2 ] shows a group : tightly connected by friend edges . while may be a good choice for gathering of close friends , the goal of friend - making in socialization activities is missed .finally , figure [ h_3 ] shows : which is a better choice than and for socialization activities because each member of is within 2 hops of another member via friend edges in .moreover , the average potential edge weight among them is high , indicating members are likely to make some new friends .processing hmgf to find the best solution is very challenging because there are many important factors to consider , including hop constraint , group size and the total weight of potential edges in a group .indeed , we prove that hmgf is an np - hard problem with no approximation algorithm .nevertheless , we prove that if the hop constraint can be slightly relaxed to allow a small error , there exists a 3-approximation algorithm for hmgf .theoretical analysis and empirical results show that our algorithm can obtain good solutions efficiently . the contributions made in this study are summarized as follows .* for socialization activity organization , we propose to model the existing friendship and the potential friendship in a heterogeneous social graph and formulate a new problem , namely , hop - bounded maximum group friending ( hmgf ) , for finding suitable attendees . to our best knowledge , hmgf is the first problem that considers these two important relationships between attendees for activity organization .* we prove that hmgf is np - hard and there exists no approximation algorithm for hmgf unless .we then propose an approximation algorithm , called maxgf , with a guaranteed error bound for solving hmgf efficiently .* we conduct a user study on users to validate our argument for considering both existing and potential friendships in activity organization .we also perform extensive experiments on real datasets to evaluate the proposed algorithm .experimental results manifest that hmgf can obtain solutions very close to the optimal ones , very efficiently .[ prob ] based on the description of heterogeneous social graph described earlier , here we formulate the _ hop - bounded maximum group friending ( hmgf ) _ tackled in this paper .given two individuals and , let be the shortest path between and via friend edges in .moreover , given , let denote the total weight of potential edges in and let _ average weight _, denote the average weight of potential edges connected to each individual in if . 
] .hmgf is formulated as follows .* problem : hop - bounded maximum group friending ( hmgf ) . * * given : * social network , hop constraint , and size constraint .* objective : * find an induced subgraph with the maximum , where and . efficient processing of hmgf is very challenging due to the following reasons : 1 ) the interplay of the total weight and the size of . to maximize ,finding a small may not be a good choice because the number of edges in a small graph tends to be small as well .on the other hand , finding a large ( which usually has a high ) may not lead to an acceptable , either . therefore, the key is to strike a good balance between the graph size and the total weight .2 ) hmgf includes a hop constraint ( say ) on friend edges to ensure that every pair of individuals is not too distant socially from each other .however , selecting a potential edge ] may not necessarily satisfy the hop constraint , i.e. , which is defined based on existing friend edges . in this case, it may not always be a good strategy to prioritize on large - weight edges in order to maximize , especially when and do not share a common friend nearby via the friend edges . in the following ,we prove that hmgf is np - hard and _ not approximable _ within any factor .in other words , there exists no approximation algorithm for hmgf .[ thm_np ] hmgf is np - hard and there is no approximation algorithm for hmgf unless . due to the space constraints , we prove this theorem in the full version of this paper ( available online ) .extracting dense subgraphs or social cohesive groups among social networks is a natural way for selecting a set of close friends for a gathering .various social cohesive measures have been proposed for finding dense social subgraphs , e.g. , diameter , density , clique and its variations .although these social cohesive measures cover a wide range of application scenarios , they focus on deriving groups based only on existing friendship in the social network . in contrast , the hmgf studied in this paper aims to extract groups by considering both the existing and potential friendships for socialization activities .therefore , the existing works mentioned above can not be directly applied to hmgf tackled in this paper .research on finding a set of attendees for activities based on the social tightness among existing friends ycl11,ysl12,zhx14,syy14,sll11 have been reported in the literature .social - temporal group query checks the available times of attendees to find the social cohesive group with the most suitable activity time .geo - social group query extracts socially tight groups while considering certain spatial properties .the willingness optimization for social group problem in selects a set of attendees for an activity while maximizing their willingness to participate .finally , finds a set of compatible members with tight social relationships in the collaboration network . 
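for very small instances the formulation above can be checked directly , in the spirit of the exhaustive baseline used later in the experiments : enumerate the vertex subsets of size at least the size constraint , keep those whose members are pairwise within the hop bound via friend edges of the induced subgroup ( one reading of the hop constraint ) , and maximise the total potential - edge weight divided by the group size . the tiny example graph and all names below are ours , purely for illustration .

```python
from itertools import combinations
from collections import deque

# friend edges (unweighted) and potential edges (weighted) on users 0..4
friend_adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4}, 4: {3}}
potential_w = {(0, 3): 0.9, (1, 3): 0.6, (0, 4): 0.8, (1, 4): 0.5}

def hop_dist(src, dst, allowed):
    # bfs over friend edges restricted to the candidate group 'allowed'
    if src == dst:
        return 0
    seen, frontier, d = {src}, deque([src]), 0
    while frontier:
        d += 1
        for _ in range(len(frontier)):
            for nxt in friend_adj[frontier.popleft()] & allowed:
                if nxt == dst:
                    return d
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
    return float("inf")

def objective(group):
    # total potential-edge weight inside the group, averaged over its size
    w = sum(w for (u, v), w in potential_w.items() if u in group and v in group)
    return w / len(group)

def brute_force_hmgf(vertices, h, p):
    best, best_val = None, -1.0
    for k in range(p, len(vertices) + 1):
        for group in combinations(vertices, k):
            g = set(group)
            if all(hop_dist(u, v, g) <= h for u, v in combinations(g, 2)):
                val = objective(g)
                if val > best_val:
                    best, best_val = g, val
    return best, best_val

# with h = 2 the best feasible group keeps vertex 2 as a "lubricator"
print(brute_force_hmgf(set(friend_adj), h=2, p=3))
```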
although these works find suitable attendees for activities based on existing friendship among the attendees , they ignore the likelihood of making new friends among the attendees .therefore , these works may not be suitable for socialization activities discussed in this paper .link prediction analyzes the features , similarity or interaction patterns among individuals in order to recommend possible friends to the users .link prediction algorithms employ different approaches including graph - topological features , classification models , hierarchical probabilistic model , and linear algebraic methods .these works show good prediction accuracy for friend recommendation in social networks . in this paper , to estimate the likelihood of how individuals may potentially become friends in the future , we employ link prediction algorithms for deriving the potential edges among the individuals . to the best knowledge of the authors, there exists no algorithm for activity organization that considers both the existing friendship and the likelihood of making new friends when selecting activity attendees .the hmgf studied in this paper examines the social tightness among existing friends and the likelihood of becoming friends for non - friend attendees .we envisage that our research result can be employed in various social network applications for activity organization .we implement hmgf in facebook and invite 50 users to participate in our user study .each user , given 12 test cases of hmgf using her friends in facebook as the input graph , is asked to solve the hmgf cases , and compare her results with the solutions obtained by maxgf .in addition to the user study , we evaluate the performance of maxgf on two real social network datasets , i.e. , fb and the ms dataset from kdd cup 2013 .the fb dataset is extracted from facebook with 90k vertices , and ms is a co - author network with 1.7 m vertices .we extract the friend edges from these datasets and identify the potential edges with a link prediction algorithm .the weight of a potential edge is ranged within ( 0,1 ] .moreover , we compare maxgf with two algorithms , namely , baseline and dks .baseline finds the optimal solution of hmgf by enumerating all the subgraphs satisfying the constraints , while dks is an -approximation algorithm for finding a -vertex subgraph with the maximum density on without considering the potential edges and the hop constraint .the algorithms are implemented in an ibm 3650 server with quadcore intel x5450 3.0 ghz cpus .we measure 30 samples in each scenario . in the following ,fearatio and objratio respectively denote the ratio of feasibility ( i.e. , the portion of solutions satisfying the hop constraint ) and the ratio of in the solutions obtained by maxgf or dks to that of the optimal solution . [ exp_us ]figure [ exp_us ] presents the results of the user study .figure us_time compares the required time for users and maxgf to solve the hmgf instances .users need much more time than maxgf due to challenges brought by the hop constraint and tradeoffs in potential edge weights and the group size , as explained in section [ prob ] . 
as or grows , users need more time because the hmgf cases become more complicated . figure [ us_ratio ] compares the solution feasibility and quality among users and maxgf . we employ baseline to obtain the optimal solutions and derive fearatio and objratio accordingly . the fearatio and objratio of users are low because simultaneously considering both the hop constraint on friend edges and total weights on potential edges is difficult for users . as shown , users ' fearatio and objratio drop when increases . by contrast , maxgf obtains the solutions with high fearatio and objratio . in figure [ us_satisfaction ] , we ask each user to compare her solutions with the solutions obtained by maxgf and dks , to validate the effectiveness of hmgf . 74% of the users agree that the solution of maxgf is the best because hmgf maximizes the likelihood of friend - making while considering the hop constraint on friend edges at the same time . by contrast , dks finds the solutions with a large number of edges , but it does not differentiate the friend edges and potential edges . therefore , users believe that the selected individuals may not be able to socialize with each other effectively . baseline can only find the optimal solutions of small hmgf cases since it enumerates all possible solutions . therefore , we first compare maxgf against baseline and dks on small graphs randomly extracted from fb . figure [ baseline_time_v ] compares the execution time of the algorithms by varying the size of the input graph . since baseline enumerates all the subgraphs with , the execution time grows exponentially . the execution time of maxgf is very small because the hop - bounded subgraphs and the pruning strategy effectively trim the search space . figures [ baseline_fea_v ] and [ baseline_obj_v ] present the fearatio and objratio of the algorithms , respectively . maxgf has high objratio because maxgf iteratively removes vertices with low incident weights from each hop - bounded subgraph , and extracts the solution with maximized among different subgraphs in different to strike a good balance on total edge weights and group sizes as described in section [ prob ] . moreover , the high fearatio and objratio also indicate that the post - processing procedure effectively restores the hop constraint and maximizes the average weight accordingly . by contrast , dks does not consider the hop constraint and different edge types in finding solutions and thus generates the solutions with smaller fearatio and objratio . figures [ baseline_time_h]-(f ) compare execution time , fearatio and objratio again but by varying . when increases , the execution time of maxgf grows slowly because the pruning strategy avoids examining the hop - bounded subgraphs that do not lead to a better solution . the fearatio and objratio of maxgf with different are high because maxgf employs hop - bounded subgraphs to avoid generating solutions with large hop distances on friend edges , and the post - processing procedure effectively restores the hop constraint and maximizes the objective function . figure [ exp_datasets ] compares maxgf in different datasets , i.e. , fb and ms . figures [ fea_h ] and [ group_h ] present the fearatio and the solution group sizes with different .
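the vertex - removal idea mentioned above , i.e. repeatedly dropping the member with the smallest incident potential weight while remembering the best intermediate group , can be sketched as a generic peeling heuristic . this is only a simplified sketch in the spirit of densest - subgraph peeling ; it omits maxgf 's hop - bounded subgraph construction , pruning and post - processing , so the hop constraint is not enforced here .

```python
def peel(vertices, potential_w, p):
    """greedy peeling: repeatedly remove the vertex with the smallest incident
    potential weight and remember the best group of size >= p seen so far.
    note: the hop constraint is deliberately ignored in this sketch."""
    group = set(vertices)
    best, best_val = None, -1.0
    while len(group) >= p:
        val = sum(w for (u, v), w in potential_w.items()
                  if u in group and v in group) / len(group)
        if val > best_val:
            best, best_val = set(group), val
        # incident potential weight of each remaining vertex
        incident = {x: sum(w for (u, v), w in potential_w.items()
                           if x in (u, v) and u in group and v in group)
                    for x in group}
        group.remove(min(incident, key=incident.get))
    return best, best_val

# reusing the toy potential edges from the previous sketch
print(peel({0, 1, 2, 3, 4},
           {(0, 3): 0.9, (1, 3): 0.6, (0, 4): 0.8, (1, 4): 0.5}, p=3))
```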
as increases , maxgf on both datasets achieves a higher fearatio because the post - processing procedure adjusts and further minimizes . moreover , it is worth noting that the returned group sizes grow when increases in ms . this is because ms contains large densely connected components with large edge weights . when is larger , maxgf is inclined to extract larger groups from these components to maximize the objective function . by contrast , fb does not have large components and maxgf thereby tends to find small groups to reduce the group size for maximizing the objective function . in fact , the solutions in fb are almost the same with different . finally , maxgf needs to carefully examine possible solutions with the sizes at least , and thus figure [ time_p ] shows that when increases , the execution time drops because maxgf effectively avoids examining the candidate solutions with small group sizes . to bridge the gap between the state - of - the - art activity organization and friend recommendation in osns , in this paper , we propose to model the individuals with existing and potential friendships in osns for friend - making activity organization . we formulate a new research problem , namely , hop - bounded maximum group friending ( hmgf ) , to find suitable activity attendees . we prove that hmgf is np - hard and there exists no approximation algorithm unless p = np . we then propose an approximation algorithm with a guaranteed error bound , i.e. , maxgf , to find good solutions efficiently . we conduct a user study and extensive experiments to evaluate the performance of maxgf , where maxgf outperforms other relevant approaches in both solution quality and efficiency .
|
the social presence theory in social psychology suggests that computer - mediated online interactions are inferior to face - to - face , in - person interactions . in this paper , we consider the scenarios of organizing in - person friend - making social activities via online social networks ( osns ) and formulate a new research problem , namely , hop - bounded maximum group friending ( hmgf ) , by modeling both existing friendships and the likelihood of new friend making . to find a set of attendees for socialization activities , hmgf is unique and challenging due to the interplay of the group size , the constraint on existing friendships and the objective function on the likelihood of friend making . we prove that hmgf is np - hard , and no approximation algorithm exists unless p = np . we then propose an error - bounded approximation algorithm to efficiently obtain solutions very close to the optimal ones . we conduct a user study to validate our problem formulation and perform extensive experiments on real datasets to demonstrate the efficiency and effectiveness of our proposed algorithm .
|
it is not possible to reference all papers on mond .the discussion here follows the historical perspective except in a few places where later papers provide explanations of details in earlier ones .rotation curves of galaxies disagree with newton s laws .the standard cosmological model interprets this in terms of cold dark matter .there is a wide range of proposals what this dark matter may be .however , a long series of experiments has not located significant evidence for it .mond ( modified newtonian dynamics ) is a competing empirical scheme .famaey and mcgaugh have produced a review of the data up to 2011 with extensive references , which will be updated here . by 1983 ,tully and fisher had established that rotation curves of galaxies over a wide range of masses have asymptotic rotational velocities ; is the gravitational constant and an empirical constant .masses are derived assuming is proportional to luminosity .also faber and jackson had studied the velocity dispersion of random motion of stars in high surface brightness elliptical galaxies and shown that .milgrom based his scheme on these observations .he pointed out in his first paper that flatness of rotation curves remains constant down to radii well within galaxies .he parametrised the full rotational acceleration in terms of the newtonian acceleration as is an empirical smooth function of , where is a universal constant m s for all galaxies ; for accelerations and for , .three forms are in common use , and all go to as . from these relations ,a star with rotational velocity in equilibrium with centrifugal force has the factor cancels and , the tully - fisher relation .milgrom s paper comments on the fact that the milky way has the shape of a thin disc , but it is unnecessary to account for motions of stars in the direction , perpendicular to the disc .these were studied by oort .the excursions of stars are small compared with the orbital radius and deviations of velocities from circular motion are small .one can determine and then via poisson s equation find the total gravitational mass density in the central plane or the surface mass density in the disc .milgrom suggested that deviations from newton s law may arise from variations of inertial mass .he was led to this conclusion by the fact that within a star or nucleus , newton s law applies , while on the scale of a galaxy , the tully - fisher relation is required .later , in 1997 , he introduduced the idea that equations of motion are invariant under the conformal transformation in the limit of weak gravity ; radii and accelerations change under scaling , but velocities do not . the derivation is from poisson s equation for a 2-d distribution like a disc galaxy , but is rather technical . in 1984 ,beckenstein and milgrom considered a non - relativisitic potential for gravity differing from newtonian gravity using a lagrangian formalism .they derived a poisson equation from this lagrangian .one result appeared from this work which would later prove to be very important . using the approximation that for small , they derived the result that where is an arbitrary radius .this leads to the asymptotically constant rotational velocity .they also considered the center - of - mass motion of two bodies , an issue which later became important for qumond .further progress was slow while astrophysicists accumulated statistics on galaxies and examined systematics .meanwhile studies were made with collaborators using models of galaxies . 
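the interpolation recipe above can be turned into a rotation curve for a point mass : with the `` simple '' interpolating function mu(x ) = x / ( 1 + x ) , the relation mu(a / a_0 ) a = a_n has the closed - form solution a = ( a_n + sqrt(a_n^2 + 4 a_n a_0 ) ) / 2 , and the circular velocity v = sqrt(a r ) flattens at the tully - fisher value ( g m a_0 )^{1/4 } . the galaxy mass below is an illustrative choice and a_0 is taken as the usual 1.2e-10 m s^-2 .

```python
import numpy as np

G  = 6.674e-11          # m^3 kg^-1 s^-2
A0 = 1.2e-10            # m s^-2, milgrom's constant
M  = 1e11 * 1.989e30    # illustrative baryonic mass: 1e11 solar masses

def mond_acceleration(a_newton, a0=A0):
    # closed-form solution of mu(a/a0) * a = a_newton for the "simple"
    # interpolating function mu(x) = x / (1 + x)
    return 0.5 * (a_newton + np.sqrt(a_newton**2 + 4.0 * a_newton * a0))

kpc = 3.086e19                       # metres per kiloparsec
r = np.linspace(1, 100, 200) * kpc   # radii from 1 to 100 kpc
a_newton = G * M / r**2

v_newton = np.sqrt(a_newton * r) / 1e3               # km/s, keplerian fall-off
v_mond   = np.sqrt(mond_acceleration(a_newton) * r) / 1e3

print("v_newton at 100 kpc: %.0f km/s" % v_newton[-1])
print("v_mond   at 100 kpc: %.0f km/s" % v_mond[-1])
print("asymptotic (G M a0)^(1/4): %.0f km/s" % ((G * M * A0) ** 0.25 / 1e3))
```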
in 1996 , milgrom considered the virial equation for theories governed by an action , again laying the foundations for later developments .the same year , milgrom studied low surface density galaxies and accounted for the faber - jackson relation for the mean velocity dispersion of self - gravitating systems supported by random motions .this later proved important in understanding elliptical galaxies . from this pointonwards , various morophologies were explored . in rapidly rotating galaxies ,thin discs evolve .those with lower accelerations develop central bulges .elliptical galaxies rotate slowly as do many dwarf galaxies . in 1997 ,sanders reported work on two sets of data obtained in groningen .he reported on rotation curves of 22 spiral galaxies measured in the 21 cm line of neutral hydrogen , together with 11 galaxies in an earlier selected sample of begeman et al .the mond formula fitted the overall shape and amplitude of the 22 rotation curves .one free parameter was used per galaxy and a second if a bulge was present .a commentary is given on several galaxies and a figure shows fits to 22 galaxies with three components of the fit .in 1998 , de blok and mcgaugh produced data testing mond with low surface brightness galaxies . after some good detective work, they found that rotation curves of 15 galaxies fitted neatly to mond after making small adjustments to the inclination angles of galaxies appearing nearly side - on ; this affected the observed luminosities .their fig .1 illustrates the newtonian contribution from gas and stars and fits after correction for the inclination of the disc .this inclination needs fine tuning within the errors to conform with milgrom s fitting function . in 2000 , mcgaugh et al .explored the tully - fisher relation over 5 decades of stellar masses in galaxies .they recognised the fact that rotational velocities depend on the number of baryons in the galaxy after using observed hi masses for galaxies with large gas content .this is well illustrated in the difference between their figs .1(a ) and 1(b ) .they comment that this direct connection with the number of baryons was an argument against a significant mass in ` dark ' baryons of any form . in 2001 , milgrom grappled with the question how a dark matter halo could describe a thin disc .such discs have large orbital angular momentum .he followed the approach used with beckenstein in 1984 , which derives equations of motion from a lagrangian .mond predicts that the ` dark halo ' needs to have a disc component and a rounder component with radius - dependent flattening , becoming spherical at large radii .he comments that this structure is at odds with what one naturally expects from advertised halo - formation simulations .this is at the heart of the question how different morphologies of galaxies develop in the dark matter scenario .if newton s laws apply to the dark matter halo , it needs to obey a poisson equation .this implies the structure outlined above for a thin disc .it will require different types of halo to describe large elliptical galaxies and dwarf spherical galaxies .that could happen , but looks wierd .it is a question which still continues today : how dark matter can reproduce the observed range of morphologies .the problem is that the parameter does not appear in the dark matter scenario .this is a recurrent question in papers of sanders . 
in a year 2002paper , milgrom moves away from the asymptotic range of mond .he makes the point that galaxies with high central densities should show no non - newtonian acceleration at small radius .this emerges later as a fundamental point .he also makes the point that , where is the hubble constant and is the velocity of light .a simple explanation of this value will be presented later . in late 2003 ,milgrom and sanders pointed out a ` dearth of dark matter in ordinary elliptical galaxies ' .they refer to new data of romanovsky et al . on three elliptical galaxies . as pointed out by milgrom in his first paper ,the shape of mond rotation curves depends on , where is a measure of the size of the baryonic galaxy ; they take this as the half - mass radius .galaxies with have internal accelerations in their main body , which is thus in the newtonian regime . at the other end ,low surface density galaxies with are in the mond regime throughout .this was the first time the high acceleration regime of galaxies had been probed .line - of - sight dispersions vary slowly , as they show in their fig . 1 . in 2004 ,mcgaugh carried out a careful study of disk galaxies . in a masterly presentation of the data, he outlines what is well determined and what is not .he concentrates on rotationally supported disk galaxies .the essential conclusion is that the invisible dark matter contribution is proportional to the visible number of baryons : the tail wags the dog !data in the upper four panels of his fig .4 are close to scatter plots , whereas the bottom two show a clear correlation with acceleration . on his fig . 5, the light - to - mass ratio is tightest when the mond prescription is used .high surface brightness gives results close to mond .only where the mass - to - light ratio falls is there large scatter .this indicates how baryons are crucial to the interpretation of the data .a continuation of this point appears in a following paper of mcgaugh .he comments that including gas as well as stars , the tully - fisher relation works even for low brightness dwarf spherical galaxies .the basic point is that the tully - fisher relation depends on the acceleration .he shows that cold dark matter halos give a poor fit using a parametrisation close to the navarro , frenk and white parametrisation . in his fig . 1, he shows that mond gives an excellent fit to ngc 2403 of rotation curves and mass distribution .the dark matter prediction is far from the data fitted by mond . in 2007 ,milgrom and sanders presented mond analyses for several of the lowest mass disc galaxies below .they show close fits to rotation curves of 4 galaxies from the work of begum et al .these galaxies are in the deep mond regime , with low accelerations at all radii .they comment that the mond result in such cases is close to a pure prediction as opposed to a one parameter fit .sanders and noordermeer extend the mond analysis to 17 high surface brightness , early - type disc galaxies derived from a combination of 21 cm hi lines observations and optical spectroscopy data .these are data of noordermeer and van der hulst in groningen .fits are close to data and they show the breakdown on velocity of rotation into newtonian , stellar and gas components . in 2008 ,sanders and land made use of data of bolton et al . 
from the sloan lens survey .whole foreground galaxies function as strong gravitational lenses to produce multiple images ( the `` einstein ring '' ) of background sources .bolton et al .measured the `` fundamental plane '' : an observed relations between effective radius , surface brightness and velocity dispersion , using 36 strongly lensing galaxies .the lensing analysis was combined with spectroscopic and photometric observations of individual lens galaxies , in order to generate a `` more fundamental plane '' based on mass surface density rather than surface brightness .they found that this _ mass - based _ fundamental plane exhibits less scatter and is closer to expectations of the newtonian virial relation than the usual luminosity based on the fundamental plane .the conclusion is that the implied mass / luminosity values within the einstein ring do not require the presence of a substantial component of dark matter .sanders and land show in their figs . 1 and 2 close linear correlations between masses derived from lensing and surface brightness , milgrom himself reviewed the status of mond at that time .he argues that mond predictions imply that baryons alone determine accurately the full field of each and every individual galaxy .he comments that this conflicts with the expectations of the dark matter paradigm because of the haphazard formation and evolution of galaxies and the very different influences that baryons and dark matter are subject to during their evolution , e.g. the very small baryon to dark matter fraction assigned by . in mond ,all physics is predicted to remain the same under a change of units of length , of time and no change in mass units , ; in words , if a certain configuration is a solution of the equations , so is the scaled configuration . likewise ,if is a trajectory where are at , the velocities on that trajectory are ; i.e. a point mass remains a point mass of the same value .another relation is that if , and .so scaling all the masses leaves all trajectories unchanged , but all velocities scale as and accelerations then scale as .the bottom line is that rotation curves of individual galaxies are based only on observed baryonic masses . in 2009 ,stark , mcgaugh and swaters examined the baryonic tully - fisher relation using gas dominated galaxies .they assembled a sample from 7 sources totalling low surface brightness and dwarf galaxies , which have a high percentage of gas .the stellar mass was not zero , so they considered a wide range of stellar population models . since these galaxies are gas dominated , the difference in stellar mass - to - light ratio from the different models had little impact .they were careful to select galaxies with inclinations , i.e. approaching face - on .they checked that observed rotation curves flattened out at large radii within errors .the conclusion is that the exponent in the data is , compared with the predicted value 4 from the tully - fisher relation . 
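the scaling properties stated in words earlier in this section can be summarised compactly ; the notation below is a reconstruction of the stripped formulas and is chosen for illustration :

\[ (r,t) \to (\lambda r , \lambda t): \qquad v \to v , \qquad a \to a/\lambda , \]

so a solution of the deep - mond equations maps onto a scaled solution with unchanged velocities and masses . scaling all masses instead , $M \to \lambda M$ , leaves the trajectories unchanged while

\[ v \to \lambda^{1/4} v , \qquad a \to \lambda^{1/2} a , \]

consistent with $v^4 = GMa_0$ and $a = \sqrt{GMa_0}/r$ .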
in 2010 , milgrom developed a new formulation of mond called qumond .this handles , for example , a large galaxy distorting a small one .the fundamental problem goes back to the work of beckenstein and milgrom in 1984 .it is not clear what the hamiltonian or lagrangian is that controls mond .milgrom follows the idea that the mond potential produced by a mass distribution satisfies the poisson equation for the modified source density .this is an idea which appears to work .he develops the algebra in sections 3 and 4 of his paper .it includes the constraint that the centre of mass motion of a pair of galaxies is correctly described .this deals with the important case of perturbations between a pair of interacting galaxies .the algebra is actually formulated in a general way so that it can in principle cope with interactions between many galaxies . for a large galaxy like the milky way ,the effects of general relativity are at the level of .a following version bimond included the constraints of general relativity , so as to be able to cope with effects on the scale of the universe .these two procedures provide a formalism which becomes valuable when models of galaxies develop further . in 2011 , scarpa et al .made observations of 6 globular clusters .hernandez and jim ' enez and allen report a detailed study of the velocity dispersions of stars at radii of 8 globular clusters . like scarpa et al ., they conclude that tidal effects are significant only at radii larger by factors of 210 than the radius where mond flattens the curves .they also show that the velocity dispersion varies with the mass of the cluster as within errors ; this is the expected analogue of the tully - fisher relation arising from jeans law .this result is independent of luminosity measurements used in interpreting galactic rotation curves . in galaxies ,the mass within a particular radius is not easy to determine , and is usually taken as the mass where rotation curves flatten out .further study of globular clusters is desirable . in 2012, mcgaugh reports an updated analysis of the baryonic tully - fisher relation using 41 gas - rich galaxies .1 shows the observed results compared with mond .mcgaugh reports m s, , where errors covers both statistics and systematics .angus et al . report a useful code for calculating qumond .it solves the poisson equation on a 2-dimensional grid .it uses kuzmin disks as defined by binney and remaine with a surface density .milgrom tested mond over a wide acceleration range in x - ray ellipticals .two galaxies have been measured over very large galactic radii ( and kpc ) assuming hydrostatic balance of the hot gas enshrouding them .measured accelerations span a wide range , from to .he shows two figures comparing the fit to mond with data up to unusually high masses of ; contributions from stars and x - ray gas are shown .he comments that in the context of the dark matter paradigm , it is unexpected that the relation between baryons and dark matter is described so accurately by the same formula that accounts for disc - galaxy dynamics .milgrom himself summarised 10 cardinal points of mond in december 2012 the year 2013 , there have been mounting criticisms of the standard cosmological model .it predicts that galaxies the size of the milky way should be accompanied by roughly isotropic haloes of smaller satellite galaxies formed by random fluctuations of dark matter .this has been known since 1999 . in an article entitled ` where are the missing galactic satellites ? 
' , klypin et al .estimated 100 - 300 satellites , depending on radius , see their fig . 4 .moore et al . estimated satellites , see their fig . . in fact, the milky way has satellites and andromeda , our nearest large galaxy , has , where one in each case could be an interloper .a further point is that the satellites are highly correlated in both radial and momentum phase space , rather than being spherically distributed as dark matter predicts .milgrom is insistent that tidal effects of large galaxies have strong effects on their satellites and is probably a key factor in determining their phase space distributions .the best form to use for such calculations is qumond .yet another result comes from l " ughausen et al . who study an unusual type of galaxy called a ` polar ring galaxy ' .the one they study has a small bright gas - poor disc with a central bulge , but in addition an orthogonal gas - rich disc , referred to as a polar disc .there are coriolis forces between the two discs . observed velocities in both discsare well predicted by mond , whereas dark matter predicts a roughly spherical distribution inconsistent with the data .mcgaugh and milgrom present two papers on andromeda dwarfs . in the first paper ,they compare recently published velocity dispersions of stars for 17 andromeda dwarf spheroidals with estimates of mond predictions , based on the luminosities of these dwarfs , with reasonable stellar values and no dark matter .the two are consistent within uncertainties .it is necessary to take account of tidal effects due to the milky way on andremeda dwarf galaxies . for andromeda ,only red giants can be tracked due to distance .they predict the velocity dispersions of another 9 dwarfs for which only photometric data were available . in the second paper test their predictions against new data .results give reasonable stellar mass - to - light ratios , while newtonian dynamics give large mass discrepancies .they comment that mond distinguishes between regimes where the internal field of the dwarf or the external field of the host dominates .the data appear to recognise this distinction , which is a unique feature of mond , not explicable in .there is a major result from milgrom .he has studied weak gravitational lensing of galaxies using data of brimioulle et al .they examined foreground galaxies illuminated by a diffuse background of distant galaxies .they remove signals from the centres of foreground galaxies so that their edges and haloes can be studied .their objective was to study the dark halos of galaxies .an elementary prediction of mond is that the asymptotic form of the curvature leads to a logarithmic tail to the newtonian potential ; here is the mean radius for this term .this tail lowers the zero point energy of the newtonian acceleration .milgrom transforms this equation into the variables used by brimioulle et al .i myself have checked his algebra and arithmetic and agree .milgrom shows that their results obey mond predictions accurately over a range of accelerations m s .averaged over this range , results are a factor larger than predicted by conventional dark matter haloes surrounding galaxies . fig .2 shows the ratio of observed acceleration to newtonian acceleration as a function of .the curve is well known experimentally over the range milgrom uses , but is less reliable beyond this . 
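the logarithmic tail invoked in the lensing argument above follows directly from the deep - mond acceleration ; written out , with the symbols again reinstated as an assumption :

\[ a(r) = \frac{\sqrt{GMa_0}}{r} \quad\Longrightarrow\quad \phi(r) = \sqrt{GMa_0}\,\ln\!\left(\frac{r}{r_0}\right) , \]

where $r_0$ is a reference radius . this is the term that dominates the newtonian potential at large $r$ and produces the asymptotically flat rotation velocity $v_\infty = (GMa_0)^{1/4}$ relevant to the weak - lensing comparison .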
at the peak acceleration ,the effect is larger than dark matter predicts by a factor .the unavoidable conclusion is that the standard model needs serious modification .-13 mm -8 mm fig .3 displays milgrom s fit to the data of brimioulle et al .the mond predictions for baryonic mass - to - light ratios 1,1.5 , 3 and 6 .the measurements are reproduced from fig .28 of brimioulle et al .triangles mark red galaxies and squares blue galaxies .there is a difference in absolute magnitudes of velocity dispersions , due to different luminosities of red and blue galaxies .-10 mm -2 mm a corollary follows from milgrom s fit to the data of brimioulle et al . which agree with mond , but are far from the prediction of . herethe photons come from distant galaxies .the baryonic acoustic oscillations are likewise carried by photons , which in this case originate from the cosmic microwave background .surely these most be treated in the same way . the conventional assumption made in the work of schmittful et al . is that photons from the cosmic microwave background are bent in weak gravitational lensing only by newtonian dynamics ( including small corrections for general relativity ) .however , since mond fits the data of brimioulle __ but does not by a large margin , the astrophysics community should be alert to the fact that an additional energy originates from integrating the acceleration ; is the radius where the acceleration is .this is needed over the range of accelerations where mond explains the gravitational rotation curves , fig .it is not presently included in the fit to the baryonic acoustic oscillations .some form factor will be needed at intergalactic distances beyond the range explored by brimioulle et al .for very large , the red shift gradually suppresses the logarithmic term because the universe was younger when light was emitted . at small ,the mass in the formula falls in a way which needs to be fitted empirically .a criticism levelled at mond is that it does not fit accurately the third peak , so treating the baryonic acoustic oscillations correctly is of prime importance .two recent papers of sanders make valuable reading . in the second, he argues that ` mond , as a theory , is inherently falsifiable ' and is not , because of the flexibility in the way data are fitted .mond has the merit that it gives a specific distribution of accelerations depending on just one parameter over the range where rotation curves of galaxies deviate significantly from newton s laws .up to this point , the discussion has largely concerned the peripheries of galaxies and the approach to the asymptotic limit of the acceleration . in ref . , a completely different viewpoint is developed , from which the conclusion is that quantum mechanics plays a fundamental role in forming galaxies . for astrophysiciststhis is an unfamiliar idea .however , from a particle physics viewpoint it is simple and precise .particle physics governs atomic and particle physics .it plays a fundamental role in forming black holes .why not galaxies too ? from a particle physics perspective , the natural way to express gravitation is in terms of quantised gravitons . before plunging into this story ,it is necessary to correct one unfortunate piece of wording in the article .it refers to the hubble acceleration .in fact the hubble constant has dimensions of velocity .this has no effect on the algebra or results .the procedure used in ref . 
is to adopt commonly used forms of milgrom s function to determine the non - newtonian component of the acceleration observed at the edges of galaxies .this peaks at or close to where it is bigger than by a large factor . this acceleration is then integrated to determine the associated energy function .it turns out that the result fits naturally to a fermi function with the same negative sign as gravity .it can be interpreted as an interaction between gravitons and nucleons ( or electrons ) .fitting functions are available from the author .this fermi function lowers the total energy by at radius where reaches ; here is the mass within radius .it represents an energy gap like those observed in doped semi - conductors and superconductors .there is information over the whole range of accelerations from m s upwards ; at the lowest point the acceleration is almost purely newtonian .results of this approach are shown in fig .data are shown on a log - log plot .there are three functions in common use for the function used for mond .they are illustrated in fig .19 of the review of famaey and mcgaugh .the smoothest form , given by milgrom is where . from algebra given in eqns .( 7)-(9 ) of ref . , the result is this gives the full curve of fig .its curvature is a measure of the additional acceleration .it is straightforward to derive a formula for ( see ) ; evaluating it gives the curve shown on fig .it peaks at m s .it can be approximated by a gaussian which drops to half - height at of the value of at the peak .the conclusion is that galaxies have considerable stability .note that this conclusion applies to galaxies of all sizes in view of the scaling relation used by milgrom .the point of interest is that on fig .4(a ) , there appears to be a cross - over between newtonian acceleration at low and another regime at large .asymptotically , the total acceleration , taken from mond , is .taking this as , where is a potential induced by the ` extra ' acceleration here is the mean radius of this term . because , is very small .however , it does explain the asymptotic straight - line at the right - hand edge of fig .. it also explains the long range tail observed by brimioulle et al .there is an ` extra ' energy in addition to newtonian energy .this is obtained by integrating the ` extra ' acceleration numerically . from fig .4(a ) , ; , so .the appropriate equation is describes the extra energy and is the total energy . in a quantum mechanical situation, there is a companion equation these are a coupled pair of equations with two solutions this equation was first derived in 1931 by breit and rabi .the same formalism describes mixing between the three neutrinos , and and also the ckm matrix of qcd and cp violation in decays of mesons . for galaxies , classical expectation values and are to be substituted into eq .the two solutions of the breit - rabi equation are a clearer picture of what is happening is obtained by rotating fig .4(a ) clockwise by ; this is the mean angle of the dashed line to the -axis , , and the angle of the dotted line .the rotation of axes is the bogoliubov - valatin transformation , first discovered in an obscure phenomenon in nuclear structure physics by bogoliubov and valatin . 
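the coupled pair of equations and their two solutions referred to above have the generic two - state ( breit - rabi ) form ; the symbols $E_1$ , $E_2$ and $V$ below are chosen here for illustration and are not the author s notation :

\[ \begin{pmatrix} E_1 & V \\ V & E_2 \end{pmatrix}\begin{pmatrix} c_1 \\ c_2 \end{pmatrix} = E \begin{pmatrix} c_1 \\ c_2 \end{pmatrix} \quad\Longrightarrow\quad E_{\pm} = \frac{E_1+E_2}{2} \pm \sqrt{\frac{(E_1-E_2)^2}{4} + V^2}\, , \]

so the two branches repel where the unperturbed levels would cross ; the rotation of axes described just above ( the bogoliubov - valatin transformation ) is , in this language , the change of basis that diagonalises the mixing .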
the upper curve in fig .5 shows the result .this equation arises in quantum mechanics whenever two energy levels cross as a function of .the separation between the energy levels depends on the degree of mixing governed by .6 shows the rotation of axes ; it is about the point , , where the two straight lines of fig .1(a ) cross : substituting eq .( 10 ) gives an exact expression for the curve in axes .the full solution of the breit - rabi equation is given in section 3.2 of ref. .it is given by where is given by the standard form of the fermi function : ^{-1 } ; \ ] ] is the energy at the centre of the fermi function and is a fitted constant .the value of describes the asymptotic variation of energy and is given by .the depth of the fermi function is .the magnitude of this term can be traced to the factor 2 difference in slope of and that of the asymptotic form .a bose - einstein condensate does not fit the data , hence demonstrating that the condensate is not in the gravitational field itself .let us now return to fig .here there is a slight complication .4(b ) is the acceleration measured in axes .however , in axes the curvature increases by a factor 1.48 .in addition , there is a small visible displacement of the centre of curvature in fig .4(a ) by an offset of in .what then emerges from the breit - rabi equation is that is rather small near compared with that originating from the extra acceleration .this second term dominates by a large factor at the centre of the curve .this ratio falls by at , to 30 at , then -6.0 at and at .results for the ` extra ' acceleration are symmetric about except for the term of eq .the conclusion is that the curved part of fig .4(a ) is the dominant feature near where newtonian gravitation is a rather small perturbation .consider the effect of this result near the centre of the fermi function at in fig .if we retain only the dominant terms in and , apart from the factor , may be interpreted as the modulus of a breit - wigner resonance with -dependent width : the energy starts at zero because there is no difference between newtonian energy and total energy at the top of fig .3(c ) , and its central energy is shifted downwards by . in ref . , the effect of using alternative forms of milgrom s function were tested .although acceleration curves change significantly , as shown by the dotted curve of fig .4(c ) , the fermi function is affected only at the ends of the range to by at most .the fermi function acts as a funnel , attracting gas and dust into the periphery of the galaxy .the shape of the breit - wigner can be alternatively expressed as an airy function with a modest form factor .it comes from the coherent interactions of gravitons with nucleons over a large volume .the form factor can arise for example from supernovae which heat considerable volumes .such effects resemble defects like those observed in superconductors .what about the missing lower branch of the breit - rabi equation ? 
on this branchboth and change sign .the change of sign requires that this branch describes an excited state rather than a condensate .( remember that energies of both gravity and are negative ) .such an excited state is likely to decay on a time scale much less than that of galaxies , so this branch has not been observed .it is interesting that phenomena appear on a log - log scale .the logarithmic dependence has a natural explanation in terms of the statistical mechanics of the interaction between gravitons and nucleons .schr " odinger shows in a delightfully simple approach that quantum mechanics requires the logarithmic dependence for both fermi - dirac statistics and bose - einstein .this is further direct evidence for quantum mechanics at work , since it is a purely quantum phenomenon .a fit using bose - einstein statistics fails completely to fit galactic data .a prediction is that in voids there will be no fermi function lowering the energy .it is observed that many large galaxies appear at the edge of the local void .this can occur by attracting gas and dust out of the neighbouring void .the converse occurs in clusters of galaxies .each galaxy in a cluster has a fermi function and this results in complex interferences between individual galaxies in the cluster and general lowering of the energy .this may account for the fact that mond falls short by a factor of 2 in predictions of accelerations in galaxy clusters . in ref . a further conjecture is made about a connection to dark energy .experiment tells us that in galaxies , the asymptotic form of the acceleration is .this leads to the question : what governs the asymptotic acceleration ?if mond successfully models the formation of galaxies and globular clusters , it raises the question of how to interpret dark energy . in a de sitter universe ,the friedmann - lema^ itre - robertson - walker model ( flrw ) smoothes out structures using a function which models the gross features .my suggestion is that galaxies create fine - structure and the flrw model should include into dark energy the sum over these structures .this sum increases as galaxies grow in the recent past .it has the potential to account for the late - time acceleration of the universe .this remains to be tested .the way in which the bogoliubov - valatin transformation works in nuclei has been reviewed recently in a paper of ring .it is intricate , but is well described by ring .it depends on the spontaneous violation of symmetries such as rotational symmetry in deformed nuclei and the gauge theory in superfluid systems .the phenomenon is called ` backbending ' . in nuclei ,more than one basis state exists and there are towers of resonances separated by two units of spin , made from each of these basis states . at large excitationsthey cross in a similar way to the two regimes in fig .7(a ) here .basically what happens is that the excited states can decay via emission of photons and this damps the excited states as a function of energy , see fig . 2 of .there is then quantum mechanical mixing amongst the excited states .99 b. famaey and s.s .mcgaugh , arxiv : 1112.3960 .tully and j.r .fisher , astron .astrophys . * 54 * 661 ( 1977 ) .faber and r.e .jackson , astroph , j * 204 * 668 ( 1976 ) .m. milgrom , astrophys .j * 270 * 371 ( 1983 ) .oort ( 1965 ) , stars and stellar systems , vol . * 5 * , galactic structures , ed .a blaauw and m. schmidt , ( university of chicago press , chicago ) , p 485 .m. milgrom , phys .e * 56 * 1148 ( 1997 ) .j. beckenstein and m. 
milgrom , astrophys .j * 286 * 7 ( 1984 ) .m. milgrom , phys .a * 190 * 17 ( 1994 ) .m. milgrom , arxiv : astro - ph/9601080 .sanders , astrophys j * 480 * 492 ( 1997 ) .begeman , a.h .broeils and r.h .sanders , mnras , * 249 * 523 ( 1991 ) .w.j.g . de blok and s.s .mcgaugh , astroph .j * 499 * 66 ( 1998 ) .mcgaugh , j.m .shombert , .d .bothun and w.j.g .de blok , astroph .j. * 533 * l99 ( 2000 ) .m. milgrom , mnras * 326 * 1261 ( 2001 ) .m. milgrom , new astron . rev . * 46 * 741 ( 2002 ) .m. milgrom and r.a .sanders , astrophys .j * 599 * , l25 ( 2003 ) .a.j . romanowsky _et al . _ ,science , * 301 * 1696 ( 2003 ) .mcgaugh , astrophys .j * 609 * 652 ( 2004 ) .mcgaugh , invited review for the 21st iap colloquium : mass profiles of cosmological structures , eds .g. mamon , f. combes , c. deffayet , and b. fort .navarro , c.s .frenk and s.d.m .white , astrophys .j * 490 * 493 ( 1997 ) .m. milgrom and r.h .sanders , astrophys .lett . * 658 * l17 ( 2007 ) .a. begum , j. chengalur and i.d .karachentsev astron astrophys .* 433 * l1 ( 2005 ) .sanders and e. noordermeer , astrophys .* 658 * , l17 ( 2007 ) .e. noordermeer and j.m .van der hulst , arxiv : astro - ph/0611494 .sanders and d.d .land , mnras * 389 * 701 ( 2008 ) .j * 665 * l105 ( 2007 ) .m. milgrom , arxiv : 0801.3133 .. stark , s.s .mcgaugh and r.a .swaters , astronomical journal * 138 * , issue 2 , 392 ( 2009 ) .m. milgrom , mnras * 405 * , 1129 ( 2010 ) .m. milgrom , phys .d * 82 * , 043523 ( 2010 ) .r. scarpa _ et al ._ , astron . astrophys .* 525 * a148 ( 2011 ) .x. hernandez and m.a .jim ' enez , astrophys .j * 750 * 9 ( 2012 ) .x. hernandez , m.a .jim ' enez and c. allen , mnras * 428 * 3196 ( 2013 ) .mcgaugh , astrophys .j * 143 * 40 ( 2012 ) .et al . _ ,mnras * 421 * 2598 ( 2012 ) . m. milgrom , phys . rev .lett . * 109 * 131101 ( 2012 ) .m. milgrom , mnras * 437 * 2531 ( 2013 ) .p. kroupa , m. pawlowski and m. milgrom , arxiv : 1301.3907 . b. famaey and s.s .mcgaugh , arxiv : 1301.0623 .klypin , a.v .kravtsov , o. valenzuela and f. prada , astrophys .j * 522 * 82 ( 1999 ) .j * 524 * l19 ( 1999 ) ._ , astrophys .j * 768 * 172 ( 2013 ) .m. milgrom , mnras * 403 * 886 ( 2010 ) .f. l " ughausen _ et al ._ arxiv : 1304.4931 . s. mcgaugh and m. milgrom , astrophys .j * 766 * 22 ( 2013 ) .s. mcgaugh and m. milgrom , astrophys .j * 775 * 139 ( 2013 ) .m. milgrom , phys .111 * 041105 ( 2013 ) .f. brimioulle , s. seitz , m. lerchster , r. bender and j. snigula , mnras * 432 * 1046 ( 2013 ) .schmittful , a. challenor , d. hanson and a. lewis , phys .d * 88 * 0639012 ( 2013 ) .sanders , arxiv : 1310.6148 .sanders , arxiv : 1311.1744 .bugg , canadian journal of physics , cjp-2013 - 0163 ( 2013 ) .g. breit and i.i .rabi , phys . rev . * 38 * 2082 ( 1931 ) .bogolubov , j. exptl .( u.s.s.r ) * 34 * 50,73 ( 1958 ) ; translation : soviet phys .jetp * 34 * 41 , 51 . j.g .valatin , nu . cim . * 7 * 843 ( 1958 ) .e. schr " odinger , _ statistical thermodynamics _ , cambridge university press , edition ( 1952 ) .peebles and a. nusser , nature * 465 * 565 ( 2010 ) .p. ring , arxiv : 1204.2681 .
|
a critical appraisal is presented of developments in mond since its introduction by milgrom in 1983 to the present day . ( mond - a review ; department of physics , queen mary , university of london , london e1 4ns , uk )
|
this paper and its companion paper ii study dynamics , i.e. , dynamics that conserves the potential energy for a system of classical particles at constant volume . dynamics is deterministic and involves only the system s configurational degrees of freedom . dynamics is characterized by the system moving along a so - called _ geodesic _ curve on the constant potential - energy hypersurface defined by mathematically , is a dimensional differentiable manifold . since it is imbedded in , has a natural euclidean metric and it is thus a so - called riemannian manifold .the differential geometry of hypersurfaces is discussed in , for instance , ref . .a geodesic curve on a riemannian manifold by definition minimizes the distance between any two of its points that are sufficiently close to each other ( the curve is characterized by realizing the `` locally shortest distance '' between points ) . more generally , a geodesic is defined by the property that for any curve variation keeping the two end points and fixed , to lowest order the curve length does not change , i.e. , here denotes the line element of the metric . from a physical point of viewit is sometimes useful to regard a geodesic as a curve along which the system moves at constant velocity with zero friction .such motion means that at any time the force is perpendicular to the surface , and because the force performs no work , the kinetic energy is conserved . in this waygeodesic motion generalizes newton s first law , the law of inertia , to curved surfaces .the concept of geodesic motion is central in general relativity , where motion in a gravitational field follows a geodesic curve in the four - dimensional curved space - time .a general motivation for studying dynamics is the following .since all relevant information about a system is encoded in the potential - energy function , it is interesting from a philosophical point of view to study and compare different dynamics relating to .the `` purest '' of these dynamics does not involve momenta and relates only to configuration space . dynamics provides such a dynamics .in contrast to brownian dynamics , which also relates exclusively to the configurational degrees of freedom , dynamics is deterministic . may be viewed as an attempt to understand the dynamic implications of the potential energy landscape s geometry along the lines of recent works by stratt and coworkers .our interest in dynamics originated in recent results concerning strongly correlating liquids and their isomorphs .a liquid is termed strongly correlating if there is more than 90% correlation between its virial and potential energy thermal equilibrium fluctuations in the ensemble .the class of strongly correlating liquids includes most or all van der waals and metallic liquids , whereas hydrogen - bonding , covalently bonded liquids , and ionic liquids are generally not strongly correlating .a liquid is strongly correlating if and only if it to a good approximation has `` isomorphs '' in its phase diagram . 
by definitiontwo state points are isomorphic if any two microconfigurations of the state points , which can be trivially scaled into one another , have identical canonical probabilities ; an isomorph is a curve in the phase diagram for which any two pairs of points are isomorphic .only inverse - power - law liquids have exact isomorphs , but simulations show that lennard - jones type liquids have isomorphs to a good approximation .this is consistent with these liquids being strongly correlating .many properties are invariant along an isomorph , for instance the excess entropy , the isochoric heat capacity , scaled radial distribution functions , dynamic properties in reduced units , etc ; the reduced - unit constant - potential - energy hypersurface is also invariant along an isomorph . given that several properties are invariant along a strongly correlating liquid s isomorphs and that is invariant as well , an obvious idea is that s invariance is the fundamental fact from which all other isomorph invariants follow .for instance , the excess entropy is the logarithm of the area of , so the excess entropy s isomorph invariance follows directly from that of . in order to understand the dynamic isomorph invariants from the perspectivea dynamics is required that refers exclusively to .one possibility is diffusive dynamics , but a mathematically even more elegant dynamics on a differentiable manifold is that of geodesics .although these considerations were our original motivation , it should be emphasized that that the concept of geodesic motion on ( or ) is general and makes sense for any classical mechanical system , strongly correlating or not .we are not the first to consider dynamics on the constant - potential - energy hypersurface . in papers dating back to 1986 cotterill andmadsen proposed a deterministic constant - potential - energy algorithm similar , but not identical , to the basic algorithm derived below .their algorithm was not discussed in relation to geodesic curves , but aimed at providing an alternative way to understanding vacancy diffusion in crystals and , in particular , to make easier the identification of energy barriers than from ordinary md simulations .the latter property is not confirmed in the present papers , however we find that dynamics in the thermodynamic limit becomes equivalent to standard dynamics ( paper ii ) .later scala _ et al . _ studied diffusive dynamics on the constant - potential - energy hypersurface , focusing on the entropic nature of barriers by regarding these as `` bottlenecks '' .this point was also made by cotterill and madsen who viewed as consisting of `` pockets '' connected by thin paths , referred to as `` tubes '' , acting as entropy barriers .reasoning along similar lines , stratt and coworkers published in 2007 and 2010 three papers , which considered paths in the so - called potential - energy - landscape ensemble .this novel ensemble is defined as including all configurations with potential energy less than or equal to some potential energy .a geodesic in the potential - energy - landscape ensemble consists of a curve that is partly geodesic on the constant - potential - energy surface , partly a straight line in the space defined by .s picture shifts perspective from finding stationary points on the potential energy landscape to finding and characterizing the accessible pathways through the landscape . 
within this perspective pathwayswould be slow , not because they have to climb over high barriers , but because they have to take a long and tortuous route to avoid such barriers .... " .thus the more `` convoluted and laborinthine '' the geodesics are , the slower is the dynamics .apart from these three sources of inspiration to the present work , we note that geodesic motion on differentiable manifolds has been studied in several other contexts outside of pure mathematics , see , e.g. , ref . .the present paper derives and documents an algorithm for geodesic dynamics . in sec .ii we derive the basic algorithm . by constructionthis algorithm is time reversible , a feature that ensures a number of important properties .section iii discusses how to implement the algorithm and tests improvements of the basic algorithm designed for ensuring stability , which is done by single - precision simulations .this section arrives at the final algorithm and demonstrates that it conserves potential energy , step length , and center - of - mass position in arbitrarily long simulations .section iv briefly investigates the sampling properties of the algorithm , showing that it gives results for the lennard - jones liquid that are equivalent to those of standard dynamics .finally , sec .v gives some concluding comments .paper ii compares simulations to results for four other dynamics , concluding that dynamics is a fully valid molecular dynamics .for simplicity of notation we consider in this paper only systems of particles of identical masses ( appendix a of paper ii generalizes the algorithm to systems of varying particle masses ) .the full set of positions in the -dimensional configuration space is collectively denoted by , i.e. , likewise , the full -dimensional force vector is denoted by . this section derives the basic algorithm for geodesic motion on the constant - potential - energy hypersurface defined in eq .( [ omega_def ] ) , an algorithm that allows one to compute the positions in step , , from and .although a mathematical geodesic on a differentiable manifold is usually parameterized by its curve length , it is useful to think of a geodesic curve on as parameterized by time , and we shall refer to the steps of the algorithm as `` time steps '' . locally , a geodesic is the shortest path between any two of its points . more precisely : 1 ) for any two points on a riemannian manifold the shortest path between them is a geodesic ; 2 ) the property of a curve being geodesic is locally defined ; 3 ) a geodesic curve has the property that for any two of its points , which are sufficiently close to each other , the curve gives the shortest path between them .a geodesic may , in fact , be the _longest _ distance between two of its points .for instance , the shortest and the longest flight between two cities on our globe both follow great circles these are both geodesics . in any case, the property of being geodesic is always equivalent to the curve length being _ stationary _ in the following sense : small curve variations , which do not move the curve s end points , to lowest order lead to no change in the curve length . for motion on the constraint of constant potential energyis taken into account by introducing lagrangian multipliers . 
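written out explicitly , the constrained variational problem set up here takes the following form ( a reconstruction , with the symbols reinstated as an assumption , anticipating the discretized condition derived next ) :

\[ \Omega \equiv \{\, \mathbf{R}\in\mathbb{R}^{3N} : U(\mathbf{R}) = U_0 \,\} , \qquad \delta\!\left( \sum_i \big|\mathbf{R}_{i+1}-\mathbf{R}_i\big| \;-\; \sum_i \lambda_i\, U(\mathbf{R}_i) \right) = 0 , \]

where the discrete points $\mathbf{R}_i$ trace out the path on $\Omega$ , the constraint $U(\mathbf{R}_i)=U_0$ is imposed at every time step , and $\lambda_i$ is the corresponding lagrangian multiplier .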
for each time step is the constraint and a corresponding lagrangian multiplier .thus the stationarity condition eq .( [ geod ] ) for the discretized curve length subject to the constraint of constant potential energy , is since and the -dimensional force is given by , putting to zero the variation of eq .( [ nvu2 ] ) with respect to ( i.e. , the partial derivative ) leads to to solve these equations we make the ansatz of constant displacement length for each time step , ) corresponds to constant velocity in the geodesic motion . with this ansatz eq .( [ nvu3 ] ) becomes if and , eq .( [ nvu4 ] ) implies , i.e. , . since eq .( [ nvu5 ] ) expresses that is parallel to , one concludes that is perpendicular to .this implies taking the dot product of each side of eq .( [ nvu5 ] ) with one gets which via eq . ( [ nvu6 ] ) implies substituting this into eq .( [ nvu5 ] ) and isolating we finally arrive at {{\bf f}}_i/{{\bf f}}_i^2\,.\ ] ] this equation determines a sequence of positions ; it will be referred to as `` the basic algorithm '' .the algorithm is initialized by choosing two nearby points in configuration space with the same potential energy within machine precision . the derivation of the basic algorithm is completed by checking its consistency with the constant step length ansatz eq .( [ nvu4 ] ) : rewriting eq .( [ nvu1 ] ) as {{\bf f}}_i/{{\bf f}}_i^2 ] . thus the solution is consistent with the ansatz .time reversibility of the basic algorithm is checked by rewriting eq .( [ nvu1 ] ) as follows {{\bf f}}_i/{{\bf f}}_i^2\,,\ ] ] which via eq .( [ nvu6 ] ) implies {{\bf f}}_i/{{\bf f}}_i^2\,.\ ] ] comparing to eq .( [ nvu1 ] ) shows that any sequence of configurations generated by eq .( [ nvu1 ] ) obeys eq .( [ nvu1 ] ) in the time - reversed version . a more physical way to show that the basic algorithm is time - reversal invariant is to note that eq .( [ nvu3 ] ) is itself manifestly invariant if the indices and are interchanged .appendix a shows that the basic algorithm is symplectic , i.e. 
, that it conserves the configuration - space volume element in the same way as nve dynamics does . we finally consider potential - energy conservation in the basic algorithm . a taylor expansion combined with eq . ( [ nvu6 ] ) ensures potential - energy conservation to a good approximation if the discretization step is sufficiently small . the `` potential energy contour tracing '' ( pect ) algorithm of cotterill and madsen takes a similar form , with a correction term directed along $\mathbf{f}_i/\mathbf{f}_i^2$ . writing the lagrangian multiplier with a small correction , one obtains an additional contribution $(1+\delta_i)\left[u_{i-1}-u\right]$ that counteracts the drift ; in summary , for simulations of indefinite length the algorithm eq . ( [ tt ] ) ensures constant step length and avoids entropic drift of the potential energy . figure [ drift_new](a ) shows the evolution of the potential energy using the basic algorithm ( red ) and the final algorithm ( black ) , and fig . [ drift_new](b ) shows the analogous step - length evolution . figure [ corr](a ) shows that the distribution of the lagrangian multiplier is only slightly affected by going from the basic ( red ) to the final ( black ) algorithm . figure [ corr](b ) shows the evolution of the corresponding quantity in the final algorithm , which as expected is close to zero . ( caption of fig . [ corr ] : ( a ) the distribution of the lagrangian multiplier times with ( black ) and without ( red ) the numerical stabilization of the final algorithm eq . ( [ tt ] ) . ( b ) evolution of the quantity defined by ; as expected this quantity is small and averages to zero . ) we remind the reader that the modifications were introduced to compensate for the effects of accumulating random numerical errors for very long runs , and that the modifications introduced in the final algorithm eq . ( [ tt ] ) vanish numerically in the mean . the price paid for stabilizing the basic algorithm is that the full algorithm is not time reversible .
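before turning to the consequences of this loss of time reversibility , the basic step derived in sec . ii can be summarised in a minimal numerical sketch ; it is reconstructed from the derivation above , the variable names are mine , and the small stabilizing corrections of the final algorithm eq . ( [ tt ] ) are only indicated by a comment since their exact form is not reproduced here :

import numpy as np

def nvu_basic_step(r_prev, r_curr, force):
    # One step of the basic geodesic (constant-potential-energy) algorithm.
    # r_prev, r_curr : flattened 3N position vectors at steps i-1 and i
    # force          : flattened 3N force vector F_i = -grad U(r_curr)
    dr = r_curr - r_prev
    # Lagrangian multiplier chosen so that (r_next - r_prev) is perpendicular
    # to F_i; this keeps the step on the hypersurface to leading order and
    # conserves the step length |r_next - r_curr| = |r_curr - r_prev|.
    lam = -2.0 * np.dot(dr, force) / np.dot(force, force)
    # The final algorithm adds small corrections to lam and removes
    # center-of-mass drift to suppress slow numerical drift in long runs.
    return 2.0 * r_curr - r_prev + lam * force

starting from two configurations with the same potential energy within machine precision , repeated calls generate the discrete geodesic ; in an actual simulation the force is of course recomputed from the pair potential at every step .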
in view of the fact that the improvements introduced to ensure stability lead to very small corrections , the ( regrettable ) fact that the corrections violate time reversibility is not important . ( caption of fig . [ sampling ] : radial distribution functions for a single - component lennard - jones system at the following state points : ( a ) and ; ( b ) and ; ( c ) the crystal at and . the black curves show results from nve simulations , the red dots from nvu simulations ( eq . ( [ tt ] ) ) . ) in order to investigate whether the nvu algorithm gives physically reasonable results we compare results from nvu and nve simulations for the average of a quantity that depends only on configurational degrees of freedom . this is done in fig . [ sampling ] , which shows the radial distribution function at three state points . the red dots give the nvu simulation results , the black curves the nve results . clearly the two algorithms give the same results . this finding is consistent with the conjecture that the nvu algorithm probes all points on the constant - potential - energy hypersurface with equal probability . note that this is not mathematically equivalent to conjecturing that the algorithm probes the configuration - space microcanonical ensemble , which has equal probability density everywhere in a thin energy shell between a pair of constant - potential - energy manifolds . the latter distribution would imply a density of points on the hypersurface proportional to the length of the gradient of the potential energy ( the force ) , but this distribution can not be the correct equilibrium distribution because the basic algorithm eq . ( [ nvu1 ] ) is invariant to local scaling of the force . in the thermodynamic limit , however , the length of the force vector becomes almost constant and the difference between the configuration - space microcanonical ensemble and the equal - measure ensemble becomes insignificant . paper ii details a comparison of nvu dynamics to four other dynamics , including two stochastic dynamics . here both simulation and theory lead to the conclusion that nvu and nve dynamics are equivalent in the thermodynamic limit . an algorithm for geodesic motion on the constant - potential - energy hypersurface has been developed ( eq . ( [ tt ] ) ) . single - precision simulations show that this algorithm , in conjunction with compensation for center - of - mass drift , is absolutely stable in the sense that potential energy , step length , and center - of - mass position are conserved for indefinitely long runs . the algorithm reproduces the radial distribution function of the lj liquid , strongly indicating that correct configuration - space averages are arrived at in nvu dynamics . although nvu dynamics has no kinetic energy providing a heat bath , it does allow for a realistic description of processes that are unlikely because they are thermally activated with energy barriers that are large compared to $k_{\rm B}T$ ( paper ii ) .
in dynamics ,whenever a molecular rearrangement requires excess energy to accumulate locally , this extra energy is provided by the surrounding configurational degrees of freedom .these provide a heat bath in much the same way as the kinetic energy provides a heat bath for standard newtonian dynamics .the companion paper ( ii ) compares the dynamics of the kob - andersen binary lennard - jones liquid simulated by the algorithm and four other algorithms ( , , diffusion on , monte carlo dynamics ) , concluding that results are equivalent for the slow degrees of freedom .paper ii further argues from simulations and non - rigorous argumens that dynamics becomes equivalent to dynamics as .useful input from nick bailey is gratefully acknowledged .the centre for viscous liquid dynamics `` glass and time '' is sponsored by the danish national research foundation ( dnrf ) .this appendix proves that the basic algorithm conserves the configuration - space volume element on the hypersurface in the same sense as the algorithm conserves the configuration - space volume element .we view the basic algorithm ( eq .( [ nvu1 ] ) ) , u. r. pedersen , n. p. bailey , t. b. schrder , and j. c. dyre , phys .lett . * 100 * , 015701 ( 2008 ) ; u. r. pedersen , t. christensen , t. b. schrder , and j. c. dyre , phys .e * 77 * , 011201 ( 2008 ) ; t. b. schrder , u. r. pedersen , n. p. bailey , s. toxvaerd , and j. c. dyre , phys . rev .e * 80 * , 041502 ( 2009 ) ; n. p. bailey , u. r. pedersen , n. gnan , t. b. schrder , and j. c. dyre , j. chem .phys . * 129 * , 184507 ( 2008 ) ; n. p. bailey , u. r. pedersen , n. gnan , t. b. schrder , and j. c. dyre , j. chem .phys . * 129 * , 184508 ( 2008 ) ; t. b. schrder , n. p. bailey , u. r. pedersen , n. gnan , and j. c. dyre , j. chem . phys . * 131 * , 234503 ( 2009 ) ; u. r. pedersen , t. b. schrder , and j. c. dyre , phys . rev . lett .* 105 * , 157801 ( 2010 ) .n. gnan , t . b. schrder , u. r. pedersen , n. p. bailey , and j. c. dyre , j. chem . phys .* 131 * , 234504 ( 2009 ) ; n. gnan , c. maggi , t .b. schrder , and j. c. dyre , phys .lett . * 104 * , 125902 ( 2010 ) ; t .b. schrder , n. gnan , u. r. pedersen , n. p. bailey , and j. c. dyre , j. chem . phys . * 134 * , 164505 ( 2011 ) . r. m. j. cotterill and j. u. madsen , phys .b * 33 * , 262 ( 1986 ) ; r. m. j. cotterill and j. u. madsen , in _ characterizing complex systems _ , ed .h. bohr ( world scientific , singapore , 1990 ) , p. 177 ; j. li , e. platt , b. waszkowycz , r. cotterill , and b. robson , biophys .chem . * 43 * , 221 ( 1992 ) ; r. m. j. cotterill and j. u. madsen , j. phys .: condens .matter * 18 * , 6507 ( 2006 ) .a. scala , l. angelani , r. di leonardo , g. ruocco , and f. sciortino , phil .b * 82 * , 151 ( 2002 ) ; l. angelani , r. di leonardo , g. ruocco , a. scala , and f. sciortino , j. chem .phys . * 116 * , 10297 ( 2002 ) .v. caselles , r. kimmel , and g. sapiro , int . j. comput22 * , 61 ( 1997 ) ; r. kimmel and j. a. sethian , proc .usa * 95 * , 8431 ( 1998 ) ; j. a. sethian , _ level set methods and fast marching methods _ ( cambridge univ . press , cambridge , 1999 ) ; l .- t .cheng , p. burchard , b. merriman , and s. osher , j. comput .phys . * 175 * , 604 ( 2002 ) ; a. rapallo , j. chem . phys . * 121 * , 4033 ( 2004 ) ; l. ying and e. j. candes , j. comput . phys . *220 * , 6 ( 2006 ) ; a. rapallo , j. comput . chem . * 27 * , 414 ( 2006 ) ; a. spira and r. kimmel , j. comput .phys . * 223 * , 235 ( 2007 ) ; h. schwetlick and j. zimmer , j. chem .phys . * 130 * , 124106 ( 2009 ) . 
j. e. marsden and m. west , acta numer .* 10 * , 357 ( 2001 ) ; r. elber , a. cardenas , a. ghosh , and h. a. stern , adv .phys . * 126 * , 93 ( 2003 ) ; a. lew , _variational time integrators in computational solid mechanics _thesis , california institute of technology ( 2003 ) ; c. g. gray , g. karl , and v. a. novikov , rep . prog . phys .* 67*,159 ( 2004 ) ; a. lew , j. e. marsden , m. ortiz , and m. west , int . j. numerengng * 60 * , 153 ( 2004 ) ; m. west , _ variational integrators _ , ph.d .thesis , california institute of technology ( 2004 ) ; t. j. bridges and s. reich , j. phys a. * 39 * , 5287 ( 2006 ) ; e. hairer , c. lubich , and g. wanner , _ geometric numerical integration - structure - preserving algorithms for ordinary differential equations _ , 2nd ed .( springer , berlin , 2006 ) ; r. i. mclachlan and g. r. w. quispel , j. phys a. * 39 * , 5251 ( 2006 ) .
|
an algorithm is derived for computer simulation of geodesics on the constant potential - energy hypersurface of a system of classical particles . first , a basic time - reversible geodesic algorithm is derived by discretizing the geodesic stationarity condition and implementing the constant potential energy constraint via standard lagrangian multipliers . the basic algorithm is tested by single - precision computer simulations of the lennard - jones liquid . excellent numerical stability is obtained if the force cutoff is smoothed and the two initial configurations have identical potential energy within machine precision . nevertheless , just as for standard constant - energy ( nve ) algorithms , stabilizers are needed for very long runs in order to compensate for the accumulation of numerical errors that eventually lead to `` entropic drift '' of the potential energy towards higher values . a modification of the basic algorithm is introduced that ensures potential - energy and step - length conservation ; center - of - mass drift is also eliminated . analytical arguments confirmed by simulations demonstrate that the modified algorithm is absolutely stable . finally , simulations show that the geodesic algorithm and the standard leap - frog nve algorithm have identical radial distribution functions for the lennard - jones liquid .
|
the model is among the simplest systems to display self - organized criticality and avalanche behavior . starting from arbitrary initial conditions ,the model spontaneously reaches a state with long range correlations in space and time that are set up by a series of avalanche events .the approach to criticality , the critical state of the model , and the properties of avalanches in the model have been extensively studied during the past decade .there are now precise values of critical exponents and well - tested scaling laws . on the other hand ,there is not yet a theory of the critical state except for the case of mean field theory and near zero dimension . in this paperwe investigate the complexity of the histories generated by the model .these histories manifest long time correlations with early random events influencing distant , later events. however , the past may determine the future in ways which are more or less complex .it is the purpose of this paper to give a precise meaning to the notion of a complex history and then to show that histories produced by the model are not complex in this sense .the framework in which we will define the complexity of a history is computational complexity theory .computational complexity theory measures the computational resources needed to solve problems . among those resources are _ parallel time _ or equivalently _ depth _ , which measures the number of steps needed by a massively parallel computer to solve a problem .if the solution to a problem can be re - organized so that many computations are carried out simultaneously and independently then a great deal of computational work can be accomplished in few parallel steps . on the other hand, it may be that the sequential nature of the problem precludes reorganizing the work into a small number of parallel steps .the theory of parallel computational complexity provides algorithms for efficiently solving some problems in parallel and provides evidence supporting the proposition that solutions to other tasks can not be substantially accelerated by parallelization .a powerful feature of this theory is that it yields essentially the same results independent of the choice of the model of computation .thus , measures of parallel computational complexity reveal the logical structure of the problem at hand rather than the limitations of the particular model of parallel computation .we will define the complexity of histories generated by a stochastic model such as the model in terms of the parallel time required to generate a sample history . 
if , through parallelization , the dynamics of the model can be reorganized so that many processors running for a few steps can generate a statistically valid history then we will say the histories generated by the model are not complex .if on the other hand , there is little benefit from parallelization so that a history must be generated in a step by step fashion , then the model generates complex histories .the model illustrates the point that scale invariant spatial and temporal correlations are not sufficient to guarantee complexity in the sense defined here .indeed , it is the strictly hierarchical nature of the avalanches in the model that permits us to generate them in a small number of highly parallel steps .the present work fits into a series of studies of the computational complexity of models in statistical physics .although previous studies focused mainly on the computational complexity of generating spatial patterns , in many cases the analysis can also be applied to reveal the complexity of the history of the growth of the pattern .for example , two dynamical processes for diffusion limited aggregation ( dla ) have been shown to be p - complete .these results suggest that these processes for generating dla clusters can not be re - organized to greatly reduce the parallel time needed for the generation of an aggregate ( however , see [ ] for a parallel algorithm that achieves a modest speed - up for dla ) .the growth of an aggregate can be viewed as a history and the same arguments can be employed to give evidence that such histories can not be significantly parallelized .similarly , predicting the behavior of abelian sandpiles for has been shown to be p - complete .by contrast , the patterns generated by a number of growth models such as the eden model , invasion percolation and the restricted solid - on - solid model were shown to require little parallel time and these results carry over immediately to fast parallel algorithms for generating histories for these models . in sec .[ sec : comp ] we give an overview of selected aspects of computational complexity theory relevant to the present work and in sec .[ sec : bsmodel ] we review the model . in sec .[ sec : parbs ] we present parallel dynamics for the model and in sec .[ sec : simpar ] we present simulation results showing that histories can be efficiently simulated using parallel dynamics . in sec.[sec : bspc ] we show that the standard dynamics for the model and a new dynamics based on conditional probabilities both yield p - complete problems .the paper concludes in sec .[ sec : conc ] .computational complexity theory determines the scaling of computational resources needed to solve problems as a function of the size of the problem .introductions to the field can be found in refs .. 
computational complexity theory may be set up in several nearly equivalent ways depending on the choice of model of computation .here we focus on parallel computation and choose the standard _ parallel random access machine _ ( pram ) as the model of computation .the main resources of interest are _ parallel time _ or _depth _ and number of processors .a pram consists of a number of simple processors ( random access machines or rams ) all connected to a global memory .although a ram is typically defined with much less computational power than a real microprocessor such as pentium , it would not change the scaling found here to think of a pram as being composed of many microprocessors all connected to the same random access memory .the processors run synchronously and each processor runs the same program .processors have an integer label so that different processors follow different computational paths .since all processor access the same memory , provision must be made for how to handle conflicts .here we choose the _ concurrent read , concurrent write _( crcw ) pram model in which many processors may attempt to write to or read from the same memory cell at the same time .if there are conflicts between what is to be written , the processor with the smallest label succeeds in writing .the pram is an idealized and fully scalable model of computation .the number of processors and memory is allowed to increase _ polynomially _( i.e. as an arbitrary power ) in the size of the problem to be solved .communication is idealized in that it is assumed that any processor can communicate with any memory cell in a single time step .obviously , this assumption runs up against speed of light or hardware density limitations .nonetheless , parallel time on a pram quantifies a fundamental aspect of computation . during each parallel step, processor must carry out independent actions .communication between processor can only occur from one step to the next via reading and writing to memory .any problem that can be solved by a pram with processors in parallel time could also be solved by a single processor machine in a time that is no greater than since the single processor could sequentially run through the tasks that were originally assigned to the processors . on the other hand, it is not obvious whether the work of a single processor can be re - organized so that it can be accomplished in a substantially smaller number of steps by many processor working independently during each step .two examples will help illustrate this point .first , suppose the task is to add finite - precision numbers .this can be done by a single processor in a time that scales linearly in . on a pram with processorthis can be done in parallel time using a binary tree ( for simplicity , suppose is a power of ) .in the first step , processor one adds the first and second numbers and puts the result in memory , processor two adds the third and fourth numbers and puts the result in memory and so on .after the first step is concluded there are numbers to add and these are again summed in a pairwise fashion by processors . 
the summation is completed after steps .this method is rather general and applies to any associative binary operation .problems of this type have efficient parallel algorithms and can be solved in time that is a power of the logarithm of the problem size , here , that is , _ polylog _ time .a second example is the problem of evaluating the output of a boolean circuit with definite inputs .a boolean circuit is composed of and , or and not gates connected by wires .suppose the gates are arranged in levels so that gates in one level take their inputs only from gates of the previous level and give their outputs only to gates of the next level so there is no feedback in the circuit .the number of levels in the circuit is referred to as the _ depth _ of the circuit . at the top level of the circuit are the true or false inputs . at the bottom level of the circuit are one or more outputs . given some concisely encoded description of the circuit and its inputs , the problem of obtaining the outputs is known as the _ circuit value problem _ ( cvp ) .clearly , we can solve cvp on a pram in parallel time that is proportional to the depth of the circuit since we can evaluate a single level of the circuit in a single parallel time step . on the other ,there is no known general way of speeding up the evaluation of a boolean circuit to a time that is , say polylog in the depth of the circuit . for the case of adding numbers , the logical structure of the problem is sufficiently simple that we can substitute hardware for time to obtain the answer in polylog parallel steps . for cvp, hardware can not be used to substantially decrease the depth of the problem . at the present time , the statement that cvp can not be solved in polylog parallel time is not mathematically proved .instead , one can show that cvp is _p - complete_. to understand the meaning of p - completeness we must first introduce the complexity classes * p * and * nc * and the notion of _ reduction_. * p * consists of the class of problems that can be solved by a pram in polynomial time with polynomially many processors . *nc * consists of the class of problems that can be solved in polylog time on a pram with polynomially many processors .a problem is reduced to a problem if a pram with an oracle for can be used to solve in polylog time with polynomially many processors .an oracle for is able to supply the pram a solution to an instance of in a single time step .intuitively , if can be reduced to then is no harder to solve in parallel than .a problem in * p * is p - complete if all other problems in * p * can be reduced to it .it can be proved that cvp is p - complete . furthermore ,if it could be shown that there is any problem in * p * that is not in * nc * then it would follow that cvp is not in * nc * . however , the conjecture that , though widely believed , is not yet proved . since reductionsare transitive , showing that another problem is p - complete can be proved by reducing cvp to . later in the paperwe will take this approach to prove that two problems representing different dynamics are p - complete .in addition to supplying a p - complete problem , boolean circuits serve as an alternate model of computation . for a given computational problem , and for a given problem, there is a boolean circuit that solves the problem . 
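to make the two worked examples above concrete, the following sketch simulates the pairwise parallel sum, counting parallel rounds, and evaluates a small layered monotone circuit level by level so that the number of rounds equals the circuit depth. the specific circuit and input values are arbitrary choices for illustration, and parallel steps are only counted here, not actually executed concurrently.

```python
import math

def parallel_sum(values):
    """pairwise (binary-tree) summation; the number of rounds is the
    parallel time, which grows like log2(n) rather than n."""
    vals, rounds = list(values), 0
    while len(vals) > 1:
        # one parallel step: disjoint pairs are added simultaneously
        vals = [vals[i] + vals[i + 1] for i in range(0, len(vals) - 1, 2)] \
               + ([vals[-1]] if len(vals) % 2 else [])
        rounds += 1
    return vals[0], rounds

total, rounds = parallel_sum(range(1, 1025))
print(total, rounds, math.ceil(math.log2(1024)))   # 524800 10 10

def evaluate_circuit(levels, inputs):
    """circuit value problem for a layered monotone circuit: each gate is
    (op, i, j) reading wires i and j of the previous level; evaluating one
    level is one parallel step, so parallel time equals circuit depth."""
    wires = list(inputs)
    for level in levels:                       # sequential over levels
        wires = [(wires[i] and wires[j]) if op == "and" else (wires[i] or wires[j])
                 for (op, i, j) in level]      # gates within a level are independent
    return wires

# an arbitrary two-level monotone circuit on four inputs
levels = [[("and", 0, 1), ("or", 2, 3)], [("or", 0, 1)]]
print(evaluate_circuit(levels, [True, False, False, True]))  # [True]
```

the contrast between the two functions is the point: the associative structure of addition lets hardware be traded for time, while nothing comparable is known for reducing the depth of a general circuit.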
in order to solve problems of any size, a family of boolean circuits is needed , one for each problem size .furthermore , to be comparable to a pram running a single program , the family of circuits should satisfy a _ uniformity _condition that bounds the complexity of the computation needed to specify each circuit in the family .the resources of circuit depth ( number of levels in the circuit ) and circuit width ( maximum number of gates in a level ) are roughly comparable to the pram resources of parallel time and number of processor , respectively .indeed the class * nc * can be defined as the set of problems that can be solved by uniform circuit families whose depth scales polylogarithmically in the problem size and whose width scales polynomially in the problem size .* p * is the class of problems solvable by circuits of polynomial width and depth .in this section we review the model and its behavior .a detailed discussion can be found in ref . .the model defines histories of configurations on a -dimensional lattice . at each lattice site and discrete time ,there is a number ] .the s on the remaining sites are unchanged .let be the extremal value at the extremal site at time .histories can be organized into _ avalanches _ or causally connected regions of activity in the -dimensional space - time .avalanches can be classified by bounding the extremal value during the avalanche .the collection of renewed sites from to is an -avalanche if and but for all with , .note that it is also possible to have an -avalanche whose duration is a single time step ( i. e. and no times ) if and .an -avalanche can be partitioned into a sequence of one or more -avalanches for . repeated use of this ideaallows us to partition avalanches into a nested sequence of subavalanches .the partitioning of avalanches into subavalanches is one of the key ideas in the parallel algorithm described below .the properties of avalanches are entirely independent of the environment in which they exist so long as the environment is large enough to contain the avalanche .indeed we can grow the _backbone _ of an avalanche without regard to the environment .the backbone of an -avalanche is the set of sites and the corresponding for which .( note that the last time step in an avalanche is not part of the backbone ) .the backbone of an -avalanche for is shown in fig .[ fig : f13 ] .the only information we need about the environment is that all environmental sites have random numbers uniformly distributed in the interval ] as is done in standard dynamics , one can declare a renewed site to be part of the environment with probability or , with probability , declare it part of the backbone .if the site is part of the backbone it is then given an explicit random number in the interval .the construction of an -avalanche backbone begins with the renewal of the origin and its neighbors and then continues until there are no remaining sites with number less than .this approach for making an avalanche backbone is called the branching process in [ ] and is an efficient method for carrying out large - scale simulations of the model .-avalanche for in the one - dimensional model .black dots represent positions where the random numbers are less than . immediately before and at the last time of the avalanche all random numbers are greater than . 
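a minimal sketch of the standard dynamics just reviewed, for the one-dimensional model with periodic boundary conditions; the lattice size, number of steps and random seed below are arbitrary illustrative choices.

```python
import numpy as np

def standard_dynamics(n_sites, n_steps, rng):
    """standard bak-sneppen dynamics in one dimension: at each time step the
    site holding the minimum value and its two nearest neighbours are renewed
    with fresh uniform random numbers on [0, 1]."""
    lam = rng.uniform(size=n_sites)            # arbitrary initial condition
    extremal_sites, extremal_values = [], []
    for _ in range(n_steps):
        k = int(np.argmin(lam))                # the extremal site
        extremal_sites.append(k)
        extremal_values.append(lam[k])
        for j in (k - 1, k, k + 1):            # renew site and neighbours
            lam[j % n_sites] = rng.uniform()   # periodic boundaries
    return lam, extremal_sites, extremal_values

rng = np.random.default_rng(0)
lam, sites, values = standard_dynamics(n_sites=200, n_steps=20000, rng=rng)
# in the self-organized critical state almost all values exceed a threshold
# close to 2/3 for the one-dimensional model
print("fraction of sites above 0.66:", np.mean(lam > 0.66))
```

the sequence of extremal values returned here is what the avalanche definitions above are phrased in terms of: an avalanche with threshold lambda is a maximal stretch of consecutive time steps whose extremal values stay below lambda, bracketed by steps where every value exceeds it.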
]the model manifests self - organized criticality : independent of the initial state , after a long time the system approaches a stationary state where the average avalanche size diverges and the distribution of avalanche sizes follows a power law . in the critical state almost all s exceed a critical value , ( for the , ) .if is the duration of an avalanche then , in the critical state , the probability of an avalanche of duration decays as ( for ) .the average duration of -avalanches diverges as with for . the -avalanche size distribution, has the scaling form with , and related by the usual exponent relation .the scaling function goes to a constant as and decays rapidly to zero for .thus the upper cut - off in the -avalanche size distribution obeys the scaling law ( for ) .the spatial extent of -avalanches also diverges in the critical state .let be the number of sites covered by an -avalanche , then .the approach to the critical state is characterized by a _ , which is a stepwise increasing function of time and is the maximum value of for . times when the gap increases mark the end of an avalanche. if the gap first has the value at time then all s in the system are greater than or equal to at time and a -avalanche begins at time . on average, the gap approaches the critical value as a power law , where is the number of sites in the system . in the parallel dynamics for the model we will first identify the successive values of the gap and then fill in the intervals between them with the appropriate avalanches .in this section we present a parallel approach for constructing histories . at the core of this approachis a procedure for hierarchically building avalanches in which small avalanches are pasted together to yield larger avalanches .the large scale structure of the history is controlled by the sequence of gap values .each value of the gap determines an avalanche that needs to be constructed .the algorithm proceeds in three stages . in the first stage , described in sec .[ sec : gap ] , the gap sequence is obtained and in the second stage , described in sec . [ sec : parf ] , the required avalanches are constructed .finally , in the third stage , all space - time points that are not yet determined are explicitly given numbers .although we have not run the parallel algorithm on a parallel computer , we have measured essential aspects of its behavior using a conventional simulation and the results are reported in sec .[ sec : simpar ] .the key finding is that we can produce histories in a parallel time that is polylogarithmic in the length of the history . in this subsectionwe show how to construct a sequence of gap values , , for a system with lattice sites ( independent of dimension ) .suppose that at some time the gap increases from to .this event occurs at the last step of a -avalanche so , before the selection of the extremal value that increases the gap , the distribution on every site of the lattice is uniform on the interval ] and is the xor operation for real numbers .since is associative we can unroll eq.([eq : gap ] ) to obtain , where is the initial value of the gap and the are identically distributed independent random variables chosen from the same distribution as described above . using a standard parallel minimum program ,each can be generated on a probabilistic pram in parallel time .a standard parallel prefix algorithm can then be used to evaluate all values of the gap in parallel time . 
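a serial sketch of the gap-sequence construction just described. it assumes the update implied by the text, namely that when the gap equals g every site is uniform on (g, 1), so the next gap value is the minimum of n such uniforms, which can be written as g + (1 - g) m, equivalently 1 - (1 - g)(1 - m), with m the minimum of n uniforms on (0, 1); this is the associative operation whose prefix scan the parallel algorithm exploits. the lattice size and number of gap values are arbitrary.

```python
import numpy as np

def gap_sequence(n_sites, n_gaps, g0, rng):
    """successive values of the gap: each increment m_k is distributed as the
    minimum of n_sites uniforms on (0, 1), and the gap is updated through the
    associative map g -> g + (1 - g) * m_k.  here the scan is done serially;
    in the parallel setting each m_k comes from a parallel minimum and the
    whole sequence from a parallel prefix computation."""
    m = rng.uniform(size=(n_gaps, n_sites)).min(axis=1)   # i.i.d. increments
    gaps = np.empty(n_gaps)
    g = g0
    for k in range(n_gaps):                               # serial stand-in for the scan
        g = g + (1.0 - g) * m[k]
        gaps[k] = g
    return gaps

rng = np.random.default_rng(2)
gaps = gap_sequence(n_sites=200, n_gaps=2000, g0=0.0, rng=rng)
print(gaps[:5])          # step-wise increasing
print(gaps[-1])          # approaches the critical value, close to 2/3 in 1d
```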
in this sectionwe show how to generate an -avalanche in parallel time that is polylog in .the idea of this algorithm is to generate in parallel a large number of independent small avalanches and then to paste groups of these together to create a smaller number of larger avalanches .these are in turn pasted together to create yet larger avalanches and so on in a hierarchical fashion .the algorithm is controlled by a pre - determined list of values , .these values specify the types of avalanches to be manufactured .the sequence of converges to as with a constant between to .this choice for implies that for large , so that the average number of -avalanches needed to construct an -avalanche is asymptotically constant ( and equal to ) .the length of the sequence and its penultimate element is chosen so that . since , -avalanches always have and consist of the origin and its nearest neighbors together with their associated values . in parallelwe can make , -avalanches in constant time using processors .we now construct -avalanches by pasting together one or more -avalanches .the origin of the first -avalanche is also the origin of the -avalanche .if the minimum of the s of the -avalanche is greater than then we are done and the -avalanche is also an -avalanche . if not , a second -avalanche is attached to the first -avalanche with its origin lined up with the extremal site of the first avalanche .again , we check whether the minimum among all the sites covered by the pair of -avalanches is greater than and , if so , we are done .the pattern for constructing -avalanches from an -avalanche should now be clear : -avalanches are pasted together until all the covered sites have . the process of pasting together -avalanches is a generalized version of the standard process .indeed , pasting together -avalanches is exactly the standard process .all that we need to know about each -avalanche is its origin and the final values of on the sites the avalanche covers . during the construction of an -avalanche ,when a new -avalanche is attached , its origin is lined up with the extremal site among all of the sites currently covered by the growing -avalanche .the values on the covered sites of the newly added -avalanche renew the sites covered by this avalanche and , if some of these sites were previously part of the environment , the total number of covered sites of the growing -avalanche is increased .if the covered sites all have , the -avalanche is finished .the pasting together of -avalanches must be carried out sequentially so that if the number of -avalanches needed to construct a single -avalanches is then it will take parallel steps to grow the avalanche .finally , once an -avalanche is constructed , it is straightforward to find its backbone by eliminating those sites for which . on average -avalanches must be pasted together to form one -avalanche , that is . however , the step in the parallel construction of an -avalanche is not complete until _ all _ the required -avalanches have been made .thus , the time required for this step will be set by the maximum value of . as approaches , and for fixed we need to make increasingly many -avalanches so that the tail of the distribution is explored . as we shall see in the next section , the expected value of the maximum of increases logarithmically in the number of -avalanches that are constructed . 
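for concreteness, a serial sketch of growing a single avalanche with threshold lambda using the generalized process described above: only covered sites are stored, every site outside the avalanche is implicitly conditioned to exceed the threshold, and renewals continue until all covered values do so as well. growing the avalanche site by site, rather than by pasting smaller avalanches, is the simplest way to see what one pasting step has to reproduce. the threshold, seed and number of repetitions below are arbitrary.

```python
import numpy as np

def grow_avalanche(lam_threshold, rng):
    """grow one lambda-avalanche of the one-dimensional model: start by
    renewing the origin and its neighbours, then repeatedly renew the
    covered site with the smallest value (and its neighbours) until every
    covered value exceeds lam_threshold.  sites never touched are treated
    as environment, i.e. implicitly above the threshold."""
    covered = {0: 0.0}                     # the origin starts as the extremal site
    duration = 0
    while True:
        site, value = min(covered.items(), key=lambda kv: kv[1])
        if value >= lam_threshold:         # no active site left: avalanche over
            return duration, covered
        duration += 1
        covered[site - 1] = rng.uniform()  # renew the neighbours ...
        covered[site + 1] = rng.uniform()
        covered[site] = rng.uniform()      # ... and the extremal site itself

rng = np.random.default_rng(3)
durations = [grow_avalanche(0.5, rng)[0] for _ in range(500)]
print("mean duration of lambda=0.5 avalanches:", np.mean(durations))
```

in the parallel construction the same object is assembled by pasting together avalanches built for a smaller threshold, each new piece aligned with the current extremal site, exactly as described above.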
as a result , the average parallel time required to generate an -avalanche is predicted to be proportional to .the discussion thus far has posited an effectively infinite environment for the avalanche to grow in , however this is not required .the same hierarchical construction can just as well be used to make avalanches in periodic boundary conditions in a system of size .additional complications arise for other boundary conditions due to the breaking of translational symmetry and this situation will not be discussed here . finally , we are ready to describe how to construct a full history in a system with sites and periodic boundary conditions starting from the initial condition , .first , following the procedure of sec .[ sec : gap ] we construct a sequence of gap values , .then , using the parallel algorithm described above , for each gap value , , we construct the backbone of a -avalanche for all .the site where is chosen randomly from among all the sites of the lattice and this site serves as the spatial origin for the -avalanche .the starting time of the -avalanche is obtained by summing over the durations of the preceding avalanches , where is the duration of the -avalanche .successive avalanche backbones are concatenated to form the backbone of the history which includes all the avalanche backbones and the intervening extremal sites that have the gap values . thus faronly the backbone of the history is specified while sites in the environment are conditioned by the extremal values but not fully specified .a definite , statistically valid , history is constructed from the backbone by fixing all space - time points not in the backbone . at the last time , , the gap has the values .the randomly chosen extremal site , is part of the backbone and takes the value .all other sites at time are chosen from the uniform distribution on ] that site is part of the environment , takes a single value .there are two possibilities .the interval may come to end because is an extremal site where the gap increases from to that is , and . in this case , throughout the interval .the second possibility is that site is renewed at time and becomes part of the -avalanche backbone at that time . in this case, is chosen uniformly from the distribution ] controls the dynamics .at each time step , the extremal site is located .the extremal site and its neighbors are renewed using the next group of values .periodic boundary conditions are assumed .figure [ fig : standyn ] shows an example of standard dynamics .the numbers along the top row of the figure are and the numbers in the three columns on the right are the groups of s .successive states of the lattice are given in successive rows in the lower left section of the figure .each row is obtained by aligning the three values of the row under the minimum value of the previous row .are to the right of the vertical line . during each time step, the extremal site and its two neighbors are renewed by the triple of numbers to the right of the vertical line . ]p - completeness for a natural decision problem associated with standard dynamics is proved by a reduction from the circuit value problem ( cvp ) .the inputs of the standard bak sneppen decision problem are the initial site values , , the renewal numbers , , a specified site , time and bound .the problem is to determine whether . 
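for reference, a literal serial replay of this prescribed-input standard dynamics; the gadget constructions that follow amount to choosing the initial values and the renewal triples so that this replay simulates a boolean circuit. the numbers below are an arbitrary illustration, not one of the gadgets.

```python
def standard_dynamics_prescribed(lam0, triples):
    """replay standard dynamics when the renewal numbers are supplied in
    advance, as in the decision problem: at step t the left neighbour, the
    extremal site and the right neighbour receive the t-th triple
    (periodic boundary conditions)."""
    lam = list(lam0)
    n = len(lam)
    for left, mid, right in triples:
        k = min(range(n), key=lambda i: lam[i])   # extremal site
        lam[(k - 1) % n] = left
        lam[k] = mid
        lam[(k + 1) % n] = right
    return lam

# an arbitrary illustration: three steps on a five-site lattice
lam0 = [0.90, 0.10, 0.80, 0.95, 0.85]
triples = [(0.70, 0.20, 0.60), (0.30, 0.75, 0.65), (0.40, 0.50, 0.55)]
print(standard_dynamics_prescribed(lam0, triples))
```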
in order to effect the reduction from cvp we need to show how to build logical gates and wires that carry truth values .we will show how to build and gates , or gates and wires in the one - dimensional model .this set of gates allows us to build monotone circuits .since monotone cvp is p - complete only for non - planar circuits , our construction constitutes a p - completeness proof only after it is extended to models .after we present the basic construction for we will discuss how to extend the construction to and the prospects for proving p - completeness for . in order to embed a circuit in standard dynamics, most of the sites of the lattice will have the value 1 .a truth value is represented by a pair of neighboring sites and such that within the pair , one site has the value one and the other has a value less than one .if the left member of the pair is a one the pair represents true , otherwise it represents false .a truth value is activated when its smaller member is the extremal site of the lattice .once the truth value is activated , it can be moved by appropriate choice of the triple of s .the triple where is a suitable chosen number , will move a truth value to the right while the triple will move the truth value to the left . andgates are realized by transporting two truth values next to one another .suppose that initially , two truth values and are next to each other as shown in the fig .[ fig : stanand ] .the numbers , , , and are ordered and , at the time the gate is activated , is less than all the other numbers in the lattice .the top panel in fig .[ fig : stanand ] shows the situation where true and true .three time steps later , represented by the middle two sites of the lattice is true .the top panel in fig .[ fig : stanand ] shows the situation where true and false in which case false as desired . the case false and false is the same as true except the states in the part of the lattice representing the gate are shifted one lattice constant to the left so that false as desired .the final case false and true is left as an exercise for the reader . and are represented by the two pairs of sites above the horizontal line .the output is represented by the middle two sites along the bottom row to the left of the vertical line .the various numbers are ordered .the top pictures shows the situation with both inputs are true yielding an output of true .the bottom picture shows the situation where the left input is true and the right input is false yielding false . ] or gates are slightly more complicated than and gates .initially , the truth values are brought together until they are separated by one site as shown in fig .[ fig : stanor ] the numbers are ordered and , at the time the gate is activated , is less than all the other numbers in the lattice .the top panel in fig .[ fig : stanor ] shows the situation where true and true .six time steps later , , represented by the second and third columns to the left of the vertical line , is true .the situation and is seen to be correct by translational invariance .the case where false and true is shown in the bottom panel of fig .[ fig : stanor ] .verifying the case true and false is left as an exercise for the reader . and are represented by the two outer pairs of sites above the horizontal line . 
the output is represented by the second and third sites to the left of the vertical line .the various numbers are ordered .the top pictures shows the situation with both inputs true .the bottom picture shows the situation where the left input is false and the right input is true , also yielding true . ] we also need a gadget for fan - out .figure [ fig : stanfan ] shows how two copies , and can be obtained from .the figure shows the case where true .the case where false is seen to be correct by translational invariance .is represented by the middle pair of sites above the horizontal line .the outputs and are represented by the two pairs of sites to left of the vertical line .the numbers are ordered . ]overall timing of the circuits is controlled by the numbers representing truth values and used in gates . since , at each stage ,the smallest number on the lattice is selected , truth values represented by small numbers are activated first .a truth value that is active , meaning it is represented by a number that is the minimum number on the lattice , can be put into storage for a pre - determined period by choosing values with larger than all the values associated with logic variables that are active until is active again . in this wayall the required and values can be calculated with a fast parallel computation , in advance knowing only the activation time of each logic gate . and and or gates , fan - out , timing and wires ( transporting truth values ) are sufficient for constructing arbitrary monotone boolean circuits .since planar monotone circuits are not p - complete , the preceding construction does not show that the standard bak sneppen problem is p - complete .it is straightforward to construct all the gadgets and arrange timings within standard dynamics by simply padding the set of values with 1 s in the directions transverse to the planes in which the action of the gadgets takes place .since truth values can be transported in any direction , non - planar monotone circuits can be embedded in systems .the conclusion is that the standard bak sneppen problem is p - complete .thus there can be no exponential speed - up of standard dynamics through parallelism ( unless it should turn out that , a highly unlikely prospect ) .although the current proof does not yield a p - completeness result for the planar standard bak sneppen problem , we do not see a general reason preventing the construction of non - monotone circuits and we can not rule out the possibility that the planar problem is also p - complete . at least for ,the p - completeness result precludes the possibility of solving the standard bak sneppen problem in polylog time .nonetheless we can achieve a sublinear parallel time solution to the problem by invoking the hierarchical construction of avalanches from parallel dynamics .the history produced by standard dynamics can be viewed as a sequence of avalanches and these avalanches can be efficiently constructed from the list of values using the parallel methods of sec .[ sec : parf ] .the road block in achieving exponential speed - up for standard dynamics does not lie in the construction of avalanches , rather it is due to the presence of explicit initial values instead of initial probabilities defined by and made explicit only at the end of construction .the presence of explicit initial values means that the environment of each avalanche is specified and the sequence of gap values can not be determined in advance of making the avalanches . 
the accelerated algorithm for the standard bak sneppen problem proceeds as follows .the first gap value , is just the minimum among the initial values .the algorithm then calls a parallel avalanche subroutine to produce a -avalanche from the initial sequence of values .this avalanche ( the full avalanche , not just the backbone ) renews the sites that it covers while the rest of the environment is unaffected . at the ending time , of the -avalanche ,the extremal site is determined and its values determines the next gap value , .generally , suppose that the gap increases at times , then and the parallel avalanche subroutine is called to produce a -avalanche using the remaining list of values .the origin of this avalanche is the extremal site and the avalanche renews all the sites it covers . as the gap approaches the critical value , , the algorithm becomes increasingly efficient since larger and larger avalanches may be made in parallel .the parallel avalanche subroutine takes as inputs a gap value and the remaining values , values .its output is the -avalanche that would have resulted if standard dynamics were used with the same sequence of values .the procedure is similar to that described in sec.[sec : parf ] and uses the same hierarchy of -avalanches but care must be taken that the values are used in the correct order .each block of values is an -avalanche . in parallelwe must now group this sequence of -avalanches into -avalanches .the procedure for grouping avalanches is illustrated using fig.[fig : fseven ] .each node on the bottom level of the tree represents blocks of values . in parallelwe now independently start from each of these nodes and carry out standard dynamics until an -avalanche is complete .for example , starting from bottom level node 2 it takes two time steps to build an -avalanche denoted by the 3 on the level of the tree but it only takes one time step starting from bottom level node 3 since this -avalanche is also an -avalanche . the time that standard dynamics must be run to obtain each of the -avalanches defining the nodes on the -level is just the maximum in - degree of this level .thus far we have made -avalanches starting from each bottom level node , now we must determine which of these avalanches to keep to reproduce the result of standard dynamics acting on the sequence .the correct -avalanche to associate with each node in the -level is the one that includes all of the children of the node .this choice insures that all values are used .for example , the -avalanche labeled 3 includes steps 2 and 3 rather than 3 alone .having assembled the correct set of -avalanches we can now group these into -avalanches and so on .the new ingredient in going from the to -avalanches when is that the generalized version of the standard dynamics is used as discussed in sec .[ sec : parf ] .as before , -avalanches can be constructed in a time that is polylog in the , the duration of the avalanche .the speed - up obtained due to the parallel avalanche subroutine can be estimated by the scaling laws for the and . up to logarithmic factors, it takes constant parallel time to produce a -avalanche .combining eqs.[eq : gamma ] and [ eq : gapt ] we have where is parallel time and is sequential time . 
integrating the differential equation yields , so that power law speed - up is obtained .the important difference between parallel and standard dynamics is that in parallel dynamics , sites that are not part of avalanches are specified probabilistically until the last stage of the computation while , for standard dynamics , all sites are explicitly determined on each time step .this suggests the idea that an efficiently parallelizeable dynamics might result from minimizing the number of explicitly determined values until the last stage of the computation .this idea lead to the conditional dynamics described and analyzed in the next two sections . in standard dynamics with the usual choice of initial conditions ,lattice sites are assigned random numbers uniformly distributed on ] . although we have not yet assigned numbers to any other sites we now know that these sites are conditioned to be greater than .the next step in the standard process is to renew and its neighbors with numbers chosen on the interval ] while and its neighbors are randomly chosen on the interval ] .we have that for and all its neighbors while for all other sites .the extremal site at is a weighted random choice among the sites with site weighted by .the extremal value is chosen from the distribution of the minimum of numbers , where the number is chosen from the uniform distributions on ] to each space - time point . from these numbers ,compute .note that is uniformly distributed on ] .for each , these final values are then the correct values backwards in time through the latest time when site is renewed .suppose site is last renewed at time that is or is a neighbor of . if then and this value holds until the next earliest time that is renewed .if site is renewed at time because it is the neighbor of the extremal site then is chosen from the uniform distribution on $ ] and this value holds until the next earliest time that was renewed . working backwards in this waya definite history is constructed from the set of extremal values , extremal sites and conditions .it should be noted that the reconstruction of a definite history from the data generated by the conditional dynamics can be carried out in parallel in polylog time . during conditional dynamics , the values of the s become spatially non - uniform however at times when the gap increases , almost all the s are equalized .if increases at time then for all , except , of course , at the extremal site and its neighbors where the s are reset to zero .p - completeness for conditional dynamics with is proved by a reduction from the monotone circuit value problem . to carry out the reductionwe need a way to implement and and or gates and non - local fan - out .we will explain how the reduction works for the one - dimensional model .since non - local fan - out can be implemented , the p - completeness proof holds even for the one - dimensional problem .the inputs of the conditional bak sneppen decision problem are the initial conditions , , the random numbers , , a specified site , time and bound .the problem is to determine whether . in the reduction from cvp ,truth values reside at specified sites and are represented by the values of . if , the site has the value true at time and if the site has the value false .the number is chosen to be small enough that the total time duration to evaluate the circuit , still yields small values , thus we set .sites with truth values are separated by background sites where . 
if site has an initial truth value this is represented by an initial value , or 1 .sites that represent outputs of gates or fan - outs have .in addition to sites representing truth values , there is a single site called with values and .a site carrying a truth value can be _ read _ at time by setting .if site is true at time it is selected since whereas , if is false at time then and is selected at time .finally , for each and gate and each fan - out , we need two neighboring auxiliary sites and with and . first , we show how to fan - out a truth value at site a at time to a new site , a at time using the gadget shown in fig.[fig : fan ] . in fig.[fig : fan ] , white squares have , black squares have and gray squares have small but non - zero values .the gray squares in column have and the gray square in column at time has .the value of depends on the truth value of a. if a is true then but if a is false then . at time , site is selected .first suppose a is true .then , since and we have that site a is selected at time .the selection of site a means that and therefore , so that site is selected at time .the selection of at time insures that a is set to true for any time later than since a is refreshed at time and . in fig .[ fig : fan ] we show a read at time but this is not necessary , it can also read at any time after .now suppose a is false . then site is selected at time and , consequently , .thus site is selected at time and site a fails to be refreshed so that and a is false . .] the non - local fan - out just described can be easily extended to produce a non - local and gate as shown in fig .[ fig : and ] . as before , .the inputs a and b may be anywhere on the lattice , and the output a appears immediately to the right of the auxiliary sites and .suppose first that both a and b are true , then and so that is selected at time and a b is refreshed at time .thus the site a is correctly set to true at time . on the other hand ,suppose that a is false and b is true. then .thus site a fails to be refreshed so that a is correctly set to false .the other two possibilities for a and b are easily seen to work in the same way . , site a is set to a and b. ] the or gate is local in space but non - local in time and is shown in fig.[fig : or ] .initially site a is set to false however , at time , a is correctly set to a or b since if either a or b are true , then a is refreshed and can be selected at any later time to be the input to a fan - out or and gate .figure [ fig : or ] shows a and b read at time and however any time separation between the reading a and b is permitted , the only constraint on the or gate is that both a and b must be read before a is read ., site a is set to a or b. 
] these gadgets are sufficient to reduce monotone cvp to the one - dimensional conditional bak sneppen problem and show that the latter is p - complete .note that the non - local character of the gadgets allows non - planar monotone cvp to be reduced to the one - dimensional problem , in contrast to the situation for standard dynamics where the p - completeness holds only for .our main result is that histories can be efficiently generated in parallel .specifically , simulations and analytic arguments suggest that a history of length can be simulated on a pram with polynomially many processors in average parallel time with the actual asymptotic behavior close to .the exponential speed - up achieved by parallelization is the result of re - organizing the history into a nested hierarchy of independent avalanches .the construction of a single -avalanche can be carried out in average parallel time which is exponentially less than the expected duration of the avalanche .how is it possible to create long range correlations in space and time very quickly in parallel ?first of all , it must be emphasized that the ground rules for parallel computation with a pram allow for non - local transmission of information in a single time step . to see how space - time correlations can be set up in a parallel time that is polylog in the correlation length or time , consider one step in the hierarchical construction of avalanches .for example , suppose two -avalanches are concatenated to yield a single -avalanche .the correlation length and time are thereby increased by constant factors in a fixed number of parallel steps .non - local transmission of information is needed to align the origin of the second -avalanche with the final extremal site of the first -avalanche and it is this alignment of the two -avalanches that increases the correlation length and time .the result is that spatio - temporal correlations grow exponentially in the number of parallel steps . 
in the parallel construction ,independent _ collection of -avalanches are concatenated to form an -avalanche .nonetheless , avalanches have a temporal structure and exhibit aging .aging is consistent with the independence of subavalanches for the following reason .the last extremal value in the construction of an -avalanche must exceed and this extremal value is more likely to come from the last -avalanche used in the construction .thus , extremal values and other properties of avalanches display aging .another example where critical correlations are set up much faster in parallel than might be expected are cluster monte carlo algorithms for critical spin systems .each cluster sweep can be accomplished in polylog parallel time on a pram with polynomially many processors using a parallel graph connectivity algorithm .the number of sweeps required to reach equilibrium scales as a small power ( typically much less than unity ) of the system size .thus long range critical correlations are set up in a parallel time that is much less than the correlation length .this is quite different than physically realistic local dynamics where critical correlations require a time that is at least quadratic in the correlation length .the intuition that histories are generated by an inherently sequentially process is not entirely wrong .in fact , standard dynamics was shown here to be associated with a p - complete decision problem for .standard dynamics takes as its inputs the initial values on the lattice and a sequence of -tuples of numbers that are used to renew the extremal site and its neighbors at each time step .the p - completeness result implies that it is almost certainly not possible to parallelize standard dynamics so that the resulting history is generated in polylog time .the key problem that prevents full parallelization of standard dynamics is that all site values are explicitly defined at every step in the construction .in contrast , parallel dynamics does not assign explicit values except in the avalanche backbones until the end of the calculation . for the model ,predicting the future starting from explicit initial conditions is harder than sampling a typical history .conditional dynamics is a third method for generating histories that has the smallest set of space - time sites explicitly defined until the end of the calculation .nonetheless , conditional dynamics is also associated with a p - complete decision problem and so can not be efficiently parallelized .parallel dynamics shares features in common with both conditional and standard dynamics and is intermediate in the degree to which it avoids explicit specification of site values until the end of the computation .the results presented here reflect characteristics of the model that are independent of the pram model of computation in which they were presented .the primary result of the paper can equivalently be stated in terms of families of boolean circuits .specifically , we can envision a hard - wired device composed of logical gates and random bit generators .the gates are wired in a feedforward direction so that , in generating a history , each gate is used only once .when the circuit is activated it produces a statistically correct history . for each system size and time need a different circuit . 
because of the equivalence of prams and circuit families, our result can be stated in terms of the logical depth of the circuit .logical depth is the longest sequence of gates between any of the random bit generators and the output representing the history .the existence of an efficient parallel algorithm implies that there is a uniform family of the circuits for generating histories whose depth scales polylogarithmically in , independent of .the number of gates in the circuit is bounded by a power of .the actual running time for any real circuit generating histories would , of course , be polynomial and not polylogarithmic in because of the need for wires connecting distant gates required for establishing the long range spatio - temporal correlations .a variety of non - equilibrium models in statistical physics have been employed to study the spontaneous emergence of complexity . besides the model ,other examples include diffusion limited aggregation ( dla ) , invasion percolation and sandpiles . besides their application to specific physical phenomena , these models have a broad appeal because they are governed by simple microscopic rules and yet they display self - organized criticality .it has been argued that they perhaps shed light on far more complex phenomena found , for example , in biology or economics .surely , however , biological and economic systems generate histories that have polynomial rather than polylogarithmic logical depth .perhaps one criterion for a model of the spontaneous emergence of complexity is that it should generate histories that require more than polylogarithmic depth to simulate .neither the model nor invasion percolation satisfy this requirement . on the other hand , both diffusion limited aggregation and the bak , tang and wiesenfeld ( btw ) sandpile model are related to p - complete problems which strongly suggests that the clusters or avalanches associated with these models can not be simulated in polylog depth .however , neither model generates histories much deeper than a power of the system size . in the case of dla, the cluster size can not exceed the system size so that the length of the history is bounded by where is the fractal dimension of the cluster .sandpile models are well - defined for arbitrarily long times .however , the btw model has an abelian property , which allows avalanches to be combined in any order .thus , it is possible to rearrange long histories so they are generated in polylog parallel time . from a given initial condition, it requires steps to relax the system .a sequence of avalanches due to randomly dropping sand on single sites can be organized into a binary tree using the same idea as adding numbers in parallel .the result is that a history of length can be simulated in parallel in time .by contrast , we find that histories can be simulated in parallel time independent of system size .this work was supported by nsf grants dmr-9978233 . we thank ray greenlaw , cris moore and stefan boettcher for useful discussions .
|
the parallel computational complexity of the bak sneppen evolution model is studied. it is shown that histories can be generated by a massively parallel computer in a time that is polylogarithmic in the length of the history. in this parallel dynamics, histories are built up via a nested hierarchy of avalanches. stated in another way, the main result is that the _ logical depth _ of producing a history is exponentially less than the length of the history. this finding is surprising because the self-organized critical state of the model has long range correlations in time and space that appear to imply that the dynamics is sequential and history dependent. the parallel dynamics for generating histories is contrasted with standard dynamics. standard dynamics and an alternate method for generating histories, conditional dynamics, are both shown to be related to p-complete natural decision problems, implying that they can not be efficiently implemented in parallel.
|
let , with being an unknown parameter , be a family of density functions .sampling under selection bias involves observations being drawn not from directly , but rather from a distribution which is a biased version of , given by the density function where the is the weight function .we observe a sample , independently taken from .in particular , when the weight function is linear ; i.e. , the samples are known as length biased .there are many situations where weighted data arise ; for example , in survival analysis ( asgharian et al . , 2002 ) ; quality control problems for estimating fiber length distributions ( cox , 1969 ) ; models with clustered or over dispersed data ( efron , 1986 ) ; visibility bias in aerial data ; sampling from queues or telephone networks . forfurther examples of length biased sampling see , for example , patil and rao ( 1978 ) and patil ( 2002 ) . in the nonparametric setting replaced by the more general , so the likelihood function for data points becomes , a classical nonparametric maximum likelihood estimator ( npmle ) for ( the disribution function corresponding to ) exists for this problem and is discrete , with atoms located at the observed data points .in particular , vardi ( 1982 ) finds an explicit form for the estimator in the presence of two independent samples , one from and the other from the length biased density .our work focuses on length biased sampling and from the bayesian nonparametric setting we work in , the aim is to obtain a density estimator for .there has been no work done on this problem in the bayesian nonparametric framework due to the issue of the intractable likelihood function , particularly when is modeled nonparametrically using , for example , the mixture of dirichlet process ( mdp ) model ; see lo ( 1984 ) and escobar and west ( 1995 ) . while some ideas do exist on how to deal with intractable normalizing constants ; see murray et al .( 2006 ) ; tokdar , ( 2007 ) ; adams et al .( 2009 ) ; and walker , ( 2011 ) , these ideas fail here for two reasons : the infinite dimensional model and the unbounded when the space of observations is the positive reals .we by - pass the intractable normalizing constant by modeling nonparametrically .we argue that modeling or nonparametrically is providing the same flexibility to either ; i.e. modeling nonparametrically and defining is essentially equivalent to modeling nonparametrically and defining .we adopt the latter style , obtain samples from the predictive density of and then convert " these samples from into samples from , which forms the basis of the density estimator of .the layout of the paper is as follows : in section 2 we provide supporting theory for the model idea which avoids the need to deal with the intractable likelihood function .section 3 describes the model and the mcmc algorithm for estimating it and section 4 describes some numerical illustrations . 
in section 5 are the concluding remarks and in section 6 asymptotic results are provided.

our aim is to avoid computing the intractable normalizing constant. the strategy for that would be to model the biased density $g$ directly and then make inference about $f$ by exploiting the fact that, in the parametric case, if the family $f(x\mid\theta)$ is known then so is $g(x\mid\theta)$, except that its normalizing constant may not be tractable. there is a reluctance to avoid the problem of the normalizing constant in the parametric case by modeling the data directly with a tractable $g$, since the incorrect model would be employed. however, in the nonparametric setting it is not regarded as relevant whether one models $f$ or $g$ directly. a clear motivation to model $g$ directly is that this is where the data are coming from. for a general weight function $w$, an essential condition to model $f$ through $g$ ($F$ and $G$ denote the corresponding distribution functions of $f$ and $g$, respectively) is the finiteness of $\int w(x)^{-1}\,g(x)\,dx$. this, through invertibility, enables us to reconstruct $F$ from $G$, and occurs when $F$ is absolutely continuous with respect to $G$, with the radon-nikodym derivative being proportional to $1/w$. for absolute continuity to hold we need that $w(x)>0$ on the support of $f$. in the length biased case examined here, $w(x)=x$ and the densities have support on the positive real line, so this condition is automatically satisfied. a case, for instance, when this does not hold and invertibility fails is a truncated model where $w(x)=\mathbf{1}(x\in A)$, $A$ is a borel set and $F$ is a distribution which could be positive outside of $A$. a bayesian model is thus constructed by assigning an appropriate nonparametric prior distribution to $g$, provided that this in turn specifies a prior for $f$. the question that now arises is how the posterior structures obtained after modelling $g$ directly can be converted to posterior structures for $f$. the first step in this process would be to devise a method to convert a biased sample from a density $g$ to one from its debiased version $f$. this algorithm is then incorporated into our model building process so that posterior inference becomes possible.

specifically, assume that a sample $y_1,\ldots,y_n$ comes from a biased density $g$. this can be converted into a sample from $f$ using a metropolis-hastings algorithm. if we denote the current sample from $f$ as $x_{j-1}$, then $x_j = y_j$ with probability $\alpha(x_{j-1},y_j)=\min\{1, w(x_{j-1})/w(y_j)\}$, otherwise $x_j = x_{j-1}$. here the transition density for this process is $p(x_j\mid x_{j-1}) = g(x_j)\,\alpha(x_{j-1},x_j) + \delta_{x_{j-1}}(x_j)\int g(y)\,\{1-\alpha(x_{j-1},y)\}\,dy$. this transition density satisfies detailed balance with respect to $f$, since $f(x)\,g(y)\,\alpha(x,y) = f(y)\,g(x)\,\alpha(y,x)$, and thus the transition density has stationary density given by $f$. this algorithm was first tested on a toy example, i.e. $f$ is a gamma density so that $g$ is again a gamma density with shape parameter increased by one. a sample of the $y$s was taken independently from $g$ and the metropolis algorithm was run to generate the $x$s, starting from an initial value $x_0$. summaries of the resulting sequence of $x$s were compatible with a sample coming from $f$.
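a minimal sketch of this debiasing step in python, using a gamma toy example of our own choosing: shape 2 for $f$, hence shape 3 for the length biased $g$; these particular parameter values, the sample size and the seed are illustrative assumptions and not the settings of the original experiment.

```python
import numpy as np

rng = np.random.default_rng(0)

# assumed toy setup: f = Gamma(shape=2, scale=1), so the length-biased
# density g(x) proportional to x*f(x) is Gamma(shape=3, scale=1)
shape_f = 2.0
y = rng.gamma(shape_f + 1.0, 1.0, size=5000)   # biased sample from g

# metropolis step: propose the next biased draw y_j, accept with
# probability min(1, w(x_curr)/w(y_j)) where w(x) = x for length bias
x = np.empty_like(y)
x_curr = y[0]                                  # arbitrary starting value
for j, y_j in enumerate(y):
    if rng.uniform() < min(1.0, x_curr / y_j):
        x_curr = y_j                           # accept the proposal
    x[j] = x_curr                              # otherwise keep the current state

# the x-chain should behave like a (dependent) sample from f = Gamma(2, 1)
print("biased sample mean  (theory 3.0):", y.mean())
print("debiased chain mean (theory 2.0):", x.mean())
```

note that the acceptance ratio involves only the weight function, so no normalizing constant is ever computed.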
applying this idea to our model would amount to turning a sample from the biased posterior predictive density into an unbiased one using a mh step. an outline of the inferential methodology is now described.

1. once data $y_1,\ldots,y_n$ from a biased distribution become available, a model for $g$ is assumed and a nonparametric prior is assigned.

2. using mcmc methods, after a sensible burn-in period, at each iteration posterior values of the random measure and the relevant parameters are obtained. subsequently, conditionally on those values, a sequence from the posterior predictive density of $g$ is generated.

3. these predictive draws will then form a sequence of proposal values of a metropolis-hastings chain with stationary density the debiased version of the posterior predictive. specifically, at the $j$-th iteration of the algorithm, applying the rejection criterion of the previous section, a value $x_j$ is generated such that $x_j$ equals the proposed predictive draw with probability $\min\{1, x_{j-1}/\text{(proposal)}\}$, otherwise $x_j = x_{j-1}$. these values form a sample from the posterior predictive of $f$.

we want the model for $g$ to have large support, and the standard bayesian nonparametric idea for achieving this is based on infinite mixture models (lo, 1984) of the type $g(x)=\int k(x\mid\theta)\,dP(\theta)$, where $P$ is a discrete probability measure and $k(\cdot\mid\theta)$ is a density on $(0,\infty)$ for all $\theta$. since we require $g$ to be such that $\int x^{-1}g(x)\,dx<\infty$ or, equivalently, for a kernel, $\int x^{-1}k(x\mid\theta)\,dx<\infty$, we find it most appropriate to take the kernel to be a log normal distribution. so, assuming a constant precision parameter $\tau$ for each component, we have $g(x)=\int \mathrm{LN}(x\mid\mu,\tau)\,dP(\mu)$, where $P$ is a discrete random probability measure and $P\sim\mathrm{DP}(c,P_0)$, where $\mathrm{DP}$ denotes the dirichlet process (ferguson, 1973) with precision parameter $c$ and base measure $P_0$. interpreting the parameters, we have that $\mathrm{E}\{P(A)\}=P_0(A)$ and $\mathrm{Var}\{P(A)\}=P_0(A)\{1-P_0(A)\}/(c+1)$ for appropriate sets $A$. this dirichlet process mixture model implies the hierarchical model for the data: $y_i\mid\mu_i,\tau\sim\mathrm{LN}(\mu_i,\tau)$, $\mu_i\mid P\sim P$, $P\sim\mathrm{DP}(c,P_0)$. to complete the model we choose a $\mathrm{Ga}$ prior for $\tau$ and, for the base measure, $P_0$ is taken to be normal.

a useful representation of the dirichlet process, introduced by sethuraman and tiwari (1982) and sethuraman (1994), is the stick breaking constructive representation given by $P=\sum_{j=1}^{\infty}w_j\,\delta_{\mu_j}$, where the $\mu_j$ are i.i.d. from the base measure, i.e. $\mu_j\sim P_0$. the $w_j$ are constructed via a stick breaking process; so $w_1=v_1$ and, for $j>1$, $w_j=v_j\prod_{l<j}(1-v_l)$, where the $v_j$ are i.i.d. from the beta distribution $\mathrm{Beta}(1,c)$, for some $c>0$, and $\sum_j w_j=1$ almost surely. let $\mathbf{w}=(w_1,w_2,\ldots)$ and $\boldsymbol{\mu}=(\mu_1,\mu_2,\ldots)$; we can then write $g(x)=\sum_{j=1}^{\infty}w_j\,\mathrm{LN}(x\mid\mu_j,\tau)$. this is a standard bayesian nonparametric model. the mcmc algorithm is implemented using latent variable techniques, despite the infinite dimensional model; the basis of this sampler is in walker (2007) and kalli et al. (2011). for each observation we introduce a latent variable $u_i$ which makes the sum finite. the augmented density then becomes $g(x,u)=\sum_{j=1}^{\infty}\mathbf{1}(u<w_j)\,\mathrm{LN}(x\mid\mu_j,\tau)$; this has a finite representation since the slice set $\{j: w_j>u\}$ is almost surely finite. now we introduce latent variables $d_i$ which allocate the component that the $y_i$ are sampled from; conditionally on the weights these are sampled independently with $\mathrm{pr}(d_i=j)\propto\mathbf{1}(u_i<w_j)\,\mathrm{LN}(y_i\mid\mu_j,\tau)$. hence, we consider the augmented random density $g(x,u,d)=\mathbf{1}(u<w_d)\,\mathrm{LN}(x\mid\mu_d,\tau)$ and, therefore, the complete data likelihood based on a sample of size $n$ is seen to be $\prod_{i=1}^{n}\mathbf{1}(u_i<w_{d_i})\,\mathrm{LN}(y_i\mid\mu_{d_i},\tau)$. this will form the basis of our gibbs sampler. at each iteration we sample from the associated full conditional densities of the variables $\{(v_j,\mu_j),\,j=1,\ldots,N;\ \tau;\ (u_i,d_i),\,i=1,\ldots,n\}$, where $N$ is a random variable such that $\sum_{j=1}^{N}w_j>1-\min_i u_i$, and $N$ is finite almost surely. these distributions are, by now, quite standard, so we proceed directly to the last two steps of the algorithm. the upshot is that, after a sensible burn-in period, given the current selection of parameters at each iteration we can sample values from the posterior predictive density of $g$ and subsequently, using a metropolis step, draw a value from its debiased version.

* once stationarity is reached, at each iteration we have points generated by the posterior measure of the variables; these points are represented by $\{(w_j,\mu_j),\,j=1,\ldots,N;\ \tau\}$. given these, a component $j$ is generated: this is done by sampling a $u$ uniformly in the unit interval and then taking $j=1$ if $u<w_1$, or $j=l$ if $\sum_{s<l}w_s<u\le\sum_{s\le l}w_s$; the appropriate component is thus assigned with probability $w_j$. even though we have not sampled all the weights, if we "run out" of weights, in essence the indices $\{1,\ldots,N\}$, we merely draw a new $\mu$ from $P_0$. finally, the predictive value comes from $\mathrm{LN}(\cdot\mid\mu_j,\tau)$.

* the metropolis step for the posterior predictive of $f$: let $x_{j-1}$ be the state of the chain from the previous gibbs iteration.
accept the sample , from the -predictive , as coming from the -predictive , that is , with probability ; otherwise the chain remains in its current state i.e. .we illustrate the model with two simulated data sets and a real data example . in each of the assumed models , for a given realisation , we report on the results and compare them with the following density estimators : * the classical kernel density estimate given by * the kernel density estimate for indirect data , see jones ( 1991 ) , is given by where is the harmonic mean of .here is the bandwidth and in all cases an estimate of it has been calculated as the average of the plug in and solve the equation versions of it , ( sheather and jones ) .the gibbs sampler iterates times with a burn in period of . herewe use non informative prior specifications : the value of the concentration parameter has been set to . * example 1 . * the length biased distribution is and we simulate of size .the following results are presented figure 1 : * 1(a ) : ( i ) a histogram of the simulated length biased data set , ii ) the true biased density ga ( the solid line ) and iii ) the kernel density estimate ( the dashed line ) . *1(b ) : ( i ) a histogram of a sample from the posterior predictive density , ( ii ) the true biased density ga ( the solid line ) and iii ) the kernel density estimate ( the dashed line ) . *1(c ) : ( i ) a histogram of the debiased data associated with the application of the metropolis step , ii ) the true debiased density ( the solid line ) and iii ) jones kernel density estimate ( the dashed line ) . for both estimators and the bandwidth parameter is set at .the average number of clusters is . as it can be seen from the graphs we are hitting the right distributions with the metropolis step .* example 2 . * here the length biased distribution is the mixture we simulate a sample of size . similar results , as in the first example , are shown in figure , ( a)(c ) . for both estimators and the bandwidth parameter has been calculated to . for the average number of clusters, we estimate . it is noted that the metropolis sampler produces samples that are very close to the debiased mixture depicted with a solid line in (c ) .the data can be found in muttlak and mcdonald ( 1990 ) and consist of , , measurements representing the widths of shrubs obtained by line transect sampling . in this sampling methodthe probability of inclusion in the sample is proportional to the width of the shrub making it a case of length biased sampling .a noninformative estimation is shown in figure ( a)-(c ) with the same specifications as in ( [ noninformative ] ) while in (d ) , 3(e ) we perform a highly informative estimation with .the following results are presented in figures and : * (a ) , (b ) : histograms of the length biased data set and of a sample from the posterior predictive , respectively . in both subfiguresthe associated classical estimator is depicted with a dashed line , for .* (c ) : a histogram of the debiased data associated with the metropolis chain estimator .jones estimator is shown in dashed line , for the same bandwidth value .* (d ) , (e ) : histograms of the posterior predictive and the metropolis sample , respectively , under the highly informative prior , with superimposed classical density estimators . 
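for reference, a small sketch of the two benchmark estimators defined at the start of this section: the classical kernel estimate of the biased density and the jones (1991) estimate of the debiased density, in which the kernel terms are weighted by $1/y_i$ and normalized by the harmonic-mean factor. the gaussian kernel, the fixed bandwidth and the synthetic gamma data below are illustrative assumptions; the bandwidth selection actually used in the paper (an average of plug-in and solve-the-equation choices) is not reproduced here.

```python
import numpy as np

def gauss_kernel(t):
    return np.exp(-0.5 * t**2) / np.sqrt(2.0 * np.pi)

def classical_kde(x, data, h):
    """standard kernel estimate of the biased density g from y_1..y_n."""
    t = (x[:, None] - data[None, :]) / h
    return gauss_kernel(t).mean(axis=1) / h

def jones_kde(x, data, h):
    """jones (1991) estimate of the debiased density f: kernel terms are
    weighted by 1/y_i and normalised by the sum of those weights, which is
    equivalent to multiplying by the harmonic mean of the data."""
    t = (x[:, None] - data[None, :]) / h
    weights = 1.0 / data
    return (gauss_kernel(t) / h) @ weights / weights.sum()

# illustrative use on synthetic length-biased data (arbitrary bandwidth)
rng = np.random.default_rng(1)
y = rng.gamma(3.0, 1.0, size=500)        # biased sample, g = Gamma(3, 1)
grid = np.linspace(0.01, 12.0, 200)
g_hat = classical_kde(grid, y, h=0.5)
f_hat = jones_kde(grid, y, h=0.5)        # should resemble Gamma(2, 1)
```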
* (a ) : the running acceptance rate of the metropolis with jump distribution the posterior predictive values from with an estimated value of about .* (b ) , (c ) : running averages of the predictive and metropolis samples respectively .finally , in figure 5 we provide the autocorrelation function as a function of lag , among the values of the posterior predictive sample for the synthetic and real data sets , after a reasonable burn - in period .* estimation for the simulated data is nearly perfect and we get the best results for . as it is evident from subfigure (c ) , for the , the estimator does not properly capture the distributional features near the origin .the same holds true for the debiased mixture density , subfigure (c ) .* for the real data set the prior gives again the best results .such a prior gives the largest average number of clusters among all noninformative specifications that were examined .the debiased density is close to though not exactly the same .the difference comes from a small area where the biased data have the group of observations that causes to produce an intense second mode . excluding these data pointsjones estimator becomes identical with ours .* the highly informative specification increases the average number of clusters from ( noninformative estimation ) to about , thus the appearance of a second mode between and , in (d ) . from our numerical experimentsit seems that is `` data hunting '' in the sense that it overestimates data sets and produces spurious modes .our method performs better as it does not tend to overestimate , and at the same time has better properties near the origin .* when informative prior specifications are used they increase the average number of realized clusters and the nonparametric estimates tend to look more like jones type estimates .for example choices of priors like ga with increase considerably the average number of clusters and our real data estimates in subfigures (d ) and 3(e ) become nearly identical to .in this paper we have described a novel approach to the bayesian nonparametric modeling of a length bias sampling model .we directly tackle the length bias sampling distribution , from where the data arise , and this technique avoids the impossible situation of the normalizing constant if one decides to model the density of interest directly .this is legitimate modeling since only mild assumptions are made on both densities , so we are free to model directly and choose an appropriate kernel with the only condition that . in a parametric set - up since is known up to a parameter modeling directly is not recommended , since to avoid a normalizing constant problem a model for would not result from the correct family for .we have also as part of the solution presented a metropolis step to turn " the samples from into samples from .a rejection sampler here would not work as the is unbounded .the method we have proposed here should also be applicable to an arbitrary weight function , whereby samples are obtained from and yet interest focuses on the density function , where the connection is provided by our estimator , besides being the first bayesian kernel density estimator for length biased data , it was demonstrated that it performs at least as well and in some cases even better than its frequentist counterpart .in this section we assume that the posterior predictive sequence is consistent in the sense that a.s . 
as , where is the true density function generating the data and denotes the distance .this would be a standard result in bayesian nonparametric consistency involving mixture of dirichlet process models : see , for example , lijoi et al .( 2005 ) , where sufficient conditions for the consistency are given .the following theorem establishes a similar consistency result for the debiased density .* theorem . *let and denote the sequence of posterior predictive estimates for the debiased density and the true debiased density , respectively .then , a.s .* let where is the posterior expectation of , and for some it is that the assumption of consistency also implies that converges weakly to with probability one .this means for any continuous and bounded function of we have the a.s .weak consistency of implies and note that we now aim to show that these results imply the a.s . convergence of to .to this end , if we construct the prior so that for some constants and it is that and , assuming puts all the mass on \times ( \underline{\sigma}^2,\overline{\sigma}^{2})$ ] , then from the definition of weak convergence we have that , with probability one , also , with the conditions on , we have is a bounded and continuous function of for all .hence pointwise for all .consequently , by scheff s theorem , we have now and so as required .adams , r. p. , murray , i. and mackay , d.j.c . the gaussian process density sampler . _ advances in neural information processing systems ( nips ) _ * 21*(2009 ) .lijoi , a. , pruenster , i. and walker , s.g .( 2005 ) . on consistency of nonparametric normal mixtures for bayesian density estimation ._ journal of the american statistical association _ * 100 * , 12921296(2005 ) .
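as a numerical companion to the model and sampling scheme described earlier, the sketch below draws from a truncated stick-breaking approximation of the dirichlet process mixture and then runs the independence metropolis step that turns draws from the biased predictive density f into draws from the debiased density g. the kernel parameters, the truncation level and the helper names are illustrative assumptions and not the authors' code; in the actual algorithm the weights and atoms come from the slice-sampled gibbs chain rather than from a prior draw.

```python
import numpy as np


def stick_breaking(alpha, n_atoms, rng):
    """truncated stick-breaking draw of dirichlet process weights (the truncation is an assumption)."""
    v = rng.beta(1.0, alpha, size=n_atoms)                      # v_j ~ beta(1, alpha)
    w = v * np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))   # w_j = v_j * prod_{l<j}(1 - v_l)
    return w / w.sum()                                          # renormalise the truncated weights


def sample_predictive(n, rng, alpha=1.0, n_atoms=50, mu0=0.0, s0=1.0, sigma=0.3):
    """draw n values from a log-normal dirichlet process mixture; a prior draw stands in here
    for the posterior predictive that the gibbs sampler would produce."""
    w = stick_breaking(alpha, n_atoms, rng)
    mu = rng.normal(mu0, s0, size=n_atoms)            # atoms from an assumed normal base measure
    comp = rng.choice(n_atoms, size=n, p=w)           # allocation variables
    return rng.lognormal(mean=mu[comp], sigma=sigma)  # log-normal kernel, so every draw is positive


def debias_metropolis(y_pred, rng, y_start=1.0):
    """independence metropolis chain with stationary density proportional to f(y)/y,
    using the predictive draws y_pred ~ f as proposals."""
    out = np.empty_like(y_pred)
    y_curr = y_start
    for t, y_prop in enumerate(y_pred):
        if rng.uniform() < min(1.0, y_curr / y_prop):  # acceptance ratio reduces to y_curr / y_prop
            y_curr = y_prop
        out[t] = y_curr
    return out


rng = np.random.default_rng(0)
y_biased = sample_predictive(5000, rng)      # proposals from the (length-biased) predictive
y_debiased = debias_metropolis(y_biased, rng)
print("mean of biased draws   :", y_biased.mean())
print("mean of debiased draws :", y_debiased.mean())  # smaller, as the 1/y reweighting suggests
```

the acceptance probability collapses to y_curr / y_prop because the target differs from the proposal density only by the factor 1/y, which is also why a plain rejection sampler would fail when that factor is unbounded near the origin.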
|
a density estimation method in a bayesian nonparametric framework is presented when recorded data are not coming directly from the distribution of interest , but from a length biased version . from a bayesian perspective , efforts to computationally evaluate posterior quantities conditionally on length biased data were hindered by the inability to circumvent the problem of a normalizing constant . in this paper we present a novel bayesian nonparametric approach to the length bias sampling problem which circumvents the issue of the normalizing constant . numerical illustrations as well as a real data example are presented and the estimator is compared against its frequentist counterpart , the kernel density estimator for indirect data of jones ( 1991 ) . _ keywords : _ bayesian nonparametric inference ; length biased sampling ; metropolis algorithm . * bayesian nonparametric density estimation under length bias * + + department of mathematics , university of the aegean , karlovassi , samos , gr-832 00 , greece . department of economics , national and kapodistrian university of athens , athens , gr-105 59 , greece . of mathematics , university of texas at austin , austin , texas 7812 , usa .
|
wind energy conversion systems is the fastest - growing source of new electric generation in the world and it is expected to remain so for some time . in order to be more reliable and competitive than classical power generation systems and due to geographical location of wind turbines , it is important to prevent failure and to reduce maintenance cost .hence , it becomes important task to categorize the the failed turbines and take necessary actions to prevent further problems in the due process .traditionally various analysis techniques such as fft , wavelet transformation , time - scale decompositions and am / fm demodulation , have been used to classify the failed turbines from the healthy ones . however , most of the mentioned analysis techniques uses predefined parameters for the analysis and in some cases fail to detect major failures . yet , recently developed adaptive data analysis techniques such as hht and permutation entropy shows a promising future in the field of advanced fault detection .conceptually simple and easily calculated measure of permutation entropy proposed by bendt et al. can be effectively used to detect qualitative and quantitative dynamical changes . in other wordsit serves as a complexity measure for time series data , considering the local topological structure of the time series , it assigns a complexity value to each data point . due to its simplicity and robustnessit has gained a remarkable popularity in the field of biomedical sciences .however the recent advances in its applicability in fault detection in rotary machines , bearing fault diagnosis has prompted curiosity in further application of this methodology in advanced fault detection mechanisms . in this articlewe make a simple demonstration of this powerful technique for characterising the wind turbines based on their complexity value of their current waveforms measurement .sample data of length 4 sec consisting of waveforms of , , phase currents with respect to the ground was collected from two wind turbines named as turbine and turbine .table[table ] represents the nomenclature of the current phases for the given data . in the first step of our analysisthe fft for each current waveforms was calculated , compared and analysed to detect if the failure could be due to some problems related to the bearings . in the next step ,the data set was analysed using permutation entropy values of the currents to compare the complexity based on the local topological variation of both turbines ..defining , and [ cols="<,<,<,<,<",options="header " , ] fast fourier transform for phase currents , and was calculated and corresponding six major amplitude and frequency spectrum were noted down . the amplitude and frequency value for each phase was compared for both turbines and .furthermore the basic purpose of the analysis was to detect any major low frequency component which are usually the result of bearing failure and whether there is a measurement difference in both turbines frequency spectra of current waveforms .this difference , if exists could be an indicator of possible failures .the permutation entropy value of , and over time with the parameter values : sequence length ( )=400,time delay()=1 and embedding dimension()=3 was calculated for each phase of the given turbines and . 
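to make the two analysis steps concrete, the following sketch computes the amplitude spectrum of a current waveform together with its largest spectral peaks, and the sliding-window permutation entropy with the parameter values quoted above (window length 400, time delay 1, embedding dimension 3). the synthetic 42 hz test signal, the assumed sampling rate and the function names are placeholders and not part of the measured turbine data.

```python
import numpy as np
from itertools import permutations
from math import factorial


def spectral_peaks(x, fs, n_peaks=6):
    """return the n_peaks largest (frequency, amplitude) pairs of the one-sided fft spectrum."""
    spec = np.abs(np.fft.rfft(x)) / len(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    order = np.argsort(spec)[::-1][:n_peaks]
    return sorted(zip(freqs[order].tolist(), spec[order].tolist()))


def permutation_entropy(x, m=3, tau=1, normalise=True):
    """bandt-pompe permutation entropy of a 1-d series, embedding dimension m, time delay tau."""
    counts = {p: 0 for p in permutations(range(m))}
    for i in range(len(x) - (m - 1) * tau):
        window = x[i:i + m * tau:tau]                     # m samples spaced tau apart
        counts[tuple(np.argsort(window).tolist())] += 1   # ordinal pattern of the window
    probs = np.array([c for c in counts.values() if c > 0], dtype=float)
    probs /= probs.sum()
    h = -np.sum(probs * np.log2(probs))
    return h / np.log2(factorial(m)) if normalise else h


def sliding_pe(x, win=400, m=3, tau=1):
    """permutation entropy over consecutive windows of length win (the 'over time' curve)."""
    return np.array([permutation_entropy(x[s:s + win], m, tau)
                     for s in range(0, len(x) - win + 1, win)])


fs = 5000.0                          # assumed sampling rate, not the measurement rate
t = np.arange(0, 4.0, 1.0 / fs)      # a 4 s record, matching the length of the data set
rng = np.random.default_rng(1)
current = np.sin(2 * np.pi * 42 * t) + 0.05 * rng.standard_normal(t.size)
print("dominant spectral peaks :", spectral_peaks(current, fs))
print("mean permutation entropy:", sliding_pe(current).mean())
```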
for more details about the permutation entropy calculation and parameter selection refer to the internal report submitted at department of engineering cybernetics by balchen scholarchip grantee sumit kumar ram .as it can be observed from fig.[fft_all ] , all the phases shows normal behavior in terms of their frequency and amplitude spectra of the currents .the phases do not show any low frequency component other than 42hz .due to low amplitude value higher frequencies are very difficult to be detected from the fft analysis which may have caused due to malfunction in the working condition of the turbine as we concluded but fft does not provide us useful information regarding this . next step of of the analysis was carried out using permutation entropy analysis to investigate the minute changes in the time series data which can be compared to conclude the working condition of the turbine .the given data set is of very short duration , so fft is unable to detect the low frequency component even if it has a higher amplitude .hence fft can not detect bearing fault from the measured current waveforms at the current point , even if it is present and has a very low frequency value . , , current waveforms of turbine 14 and 3 ] the permutation entropy value for each current waveforms was calculated for both turbines and was compared for analysis .it was found that for most of the current waveforms of both turbines the permutation entropy values are comparable . * but , there exists some waveforms for which the permutation entropy value does not match and the waveforms for one turbine has a comparatively higher permutation entropy value with respect to the corresponding waveforms of the other turbine . *[ l1_both ] and fig .[ l3_both ] illustrates the above statement taking into account the permutation entropy values of both turbines and for and waveforms .it can be seen that for the same waveforms ( ) of both turbines and , the permutation entropy value can be comparable , meaning , they have almost same average and standard deviation through out the time frame . where as the permutation entropy value for waveforms of both turbines in fig .[ l3_both ] has different values for each turbine and are not comparable with each other . fig .[ diff_both ] shows the difference of permutation entropy values for both and waveforms from both the turbines .it is clear from the figure that the difference of permutation entropy values for the waveforms has comparatively lower value than waveforms which can be infered as the indication of working condition of the turbines being different .min waveforms of both turbine 3 and 14 . ]min waveforms of both turbine 3 and 14 . ] and waveforms for turbines 3 and 14 . 
permutation entropy serves as a parameter to classify the turbines based on the complexity values of their current waveforms . it can act as a classifier and can be coupled with machine learning methodology , along with other statistical analyses , to develop an algorithm that detects the working condition of a turbine and gives information about potential hidden failures . the entropy measurement is robust and is computationally affordable for application in real time . the authors would like to acknowledge the companies kongsberg maritime and sintef for providing the raw data , as well as the department of engineering cybernetics , ntnu , for the financial support through the jens balchen scholarship , without which this research work would not have been possible . amirat , yassine , vincent choqueuse , and mohamed benbouzid . `` eemd - based wind turbine bearing failure detection using the generator stator current homopolar component . '' mechanical systems and signal processing 41.1 ( 2013 ) : 667 - 678 . huang , norden e. , et al . `` the empirical mode decomposition and the hilbert spectrum for nonlinear and non - stationary time series analysis . '' proceedings of the royal society of london a : mathematical , physical and engineering sciences , vol . 454 , no . 1971 , the royal society , 1998 . ram , sumit k. , kulia , geir , molinas , marta . `` analysis of healthy and failed wind turbine electrical data : a permutation entropy approach . '' department of engineering cybernetics , internal publications , ntnu , jan 2016 . nicolaou , nicoletta , and julius georgiou . `` detection of epileptic electroencephalogram based on permutation entropy and support vector machines . '' expert systems with applications 39.1 ( 2012 ) : 202 - 209 . yan , ruqiang , yongbin liu , and robert x. gao . `` permutation entropy : a nonlinear statistical measure for status characterization of rotary machines . '' mechanical systems and signal processing 29 ( 2012 ) : 474 - 484 .
|
this article presents the applicability of a permutation entropy based complexity measure of a time series for the detection of faults in wind turbines . a set of electrical data from one faulty and one healthy wind turbine was analysed using traditional fast fourier analysis in addition to permutation entropy analysis , in order to compare the complexity index of the phase currents of the two turbines over time . the 4 second data set did not reveal any low frequency component in the current spectra , nor did it show any meaningful spectral differences between the two turbine currents . permutation entropy analysis of the current waveforms of the same phases of the two turbines , however , yields different complexity values over time , one of them being clearly higher than the other . the work of yan et al . found that higher entropy values are related to the presence of failure in rotary machines . following this track , further efforts will be put into relating the entropy difference found in our study to the possible presence of failure in one of the wind energy conversion systems .
|
nowadays , global positioning system ( gps ) chips are ubiquitous , and continue to be embedded in a variety of devices .a gps device allows to determine its location with about meters accuracy , by measuring the propagation delay of signals transmitted by the set of gps satellites in the field of view ( fov ) of any receiver located on the surface of the earth , which typically requires measurements from at least four satellites .conventionally , the signal that arrives at the receiver is downconverted , match - filtered and oversampled at a fast rate .subsequently , the receiver acquires enough ( at least four ) strong signals by exploiting the orthogonality of the distinct _ coarse / acquisition _ ( c / a ) codes used in gps signaling at each satellite .however , due to the unknown propagation delays , the samples obtained are misaligned in time and frequency and therefore , it is vital to pinpoint the code - phase in order to decode the navigation data correctly and use the time - delay information for pseudo - range computation. furthermore , each of the satellites contributes a component of the received gps signal that is characterized by a distinct doppler offset , due to the unequal relative velocity of satellite and receiver , as well as the offset of the different local oscillators at the gps receivers . in general , time - frequency synchronization as well as signal detection is tackled in gps receivers during the _ acquisition / detection _ stage via a parallel search over the binned delay - doppler space across all the satellite c / a codes . in many practical scenarios ,signals might arrive at the receiver with multipath components instead of the line of sight ( los ) component .constructive and destructive superposition of randomly delayed and faded replicas , leads to distorted correlation peaks .this is usually tackled in the _ tracking stage _ that follows the _ acquisition / detection _ stage , by using an _ early - late _ receiver .such a receiver compares the energy of a symbol period in the first half from the early gate to the energy in the last half from the late gate so that the receiver can synchronize the signals accordingly .furthermore , many approaches , in addition to the early - late structure , have been proposed to better mitigate the effects brought by multipath , including ( but not limited to ) the narrow correlator , multipath eliminating technique ( met ) , and multipath estimating delay lock loop ( medll ) .these methods differ in their capabilities to remove multipath errors , specifically at low signal - to - noise ratio ( snr ) and/or in the presence of interference . 
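since the c/a spreading codes and their correlation behaviour are central to everything that follows, the sketch below generates gps c/a gold codes with the standard pair of 10-stage shift registers and checks their circular auto- and cross-correlation numerically. the g2 phase-tap values listed for the first few prns are the commonly tabulated ones and should be read as assumptions here; the complete assignment is given in the gps interface specification.

```python
import numpy as np


def ca_code(prn_taps, length=1023):
    """generate a gps c/a gold code of 1023 chips from the two 10-stage lfsrs
    g1 (feedback taps 3,10) and g2 (feedback taps 2,3,6,8,9,10); prn_taps picks the g2 phase taps."""
    g1 = np.ones(10, dtype=int)
    g2 = np.ones(10, dtype=int)
    t1, t2 = prn_taps
    code = np.empty(length, dtype=int)
    for i in range(length):
        g2_out = g2[t1 - 1] ^ g2[t2 - 1]   # delayed g2 via the phase-selector taps
        code[i] = g1[9] ^ g2_out           # chip = g1 output xor delayed g2 output
        g1 = np.concatenate(([g1[2] ^ g1[9]], g1[:-1]))
        g2 = np.concatenate(([g2[1] ^ g2[2] ^ g2[5] ^ g2[7] ^ g2[8] ^ g2[9]], g2[:-1]))
    return 1 - 2 * code                    # map {0,1} chips to {+1,-1}


# g2 phase taps for the first few prns (commonly tabulated values; treat as assumptions here)
PHASE_TAPS = {1: (2, 6), 2: (3, 7), 3: (4, 8), 4: (5, 9)}

c1 = ca_code(PHASE_TAPS[1])
c2 = ca_code(PHASE_TAPS[2])


def circ_corr(a, b):
    """circular cross-correlation r[u] = sum_m a[m+u] b[m], computed via the fft."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.conj(np.fft.fft(b))))


auto = circ_corr(c1, c1)
cross = circ_corr(c1, c2)
print("autocorrelation peak           :", auto[0])                 # 1023 at zero lag
print("max off-peak autocorrelation   :", np.abs(auto[1:]).max())  # small, by the gold-code property
print("max cross-correlation magnitude:", np.abs(cross).max())     # also small
```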
in this work, we consider the general signal model that considers multipath effects and propose an acquisition scheme that coarsely captures significant paths for each active satellite , with its corresponding code - phase and doppler .the tracking stage that further resolves the estimates of delay - doppler pairs as well as the multipath components is beyond the scope of this paper .as described above , the acquisition and detection of gps signals is usually performed sequentially .first , the strongest signals coming from the satellites are detected by searching a binned delay - doppler space via exhaustive correlations that pinpoint the correct coarse timing information and frequency offsets .after acquisition and detection , the signal is locked and the device enters the tracking stage that tackles fine synchronization and multipath error mitigation in order to despread , demodulate and decode the navigation data correctly in real - time .however , this acquisition / detection scheme can be computationally intensive due to the large number of correlations , and especially the exhaustive search for peaks over the binned delay - doppler space across all the satellite signals with distinct c / a codes .for example , the maximum doppler shift in a gps signal is typically within ] is the navigation data sent by the satellite with a symbol period of , containing its time stamp , orbit location and relevant information entailed for positioning the receiver .more specifically , the waveform is determined by the satellite s c / a code \} ] is a pseudo - random binary sequence of length that contains maximum length sequence ( mls ) or gold sequence of length transmitted with a chip period , which implies .in fact , by the gps transmission standards we have for the gps -c / a signal , i.e. . the correlation properties of the spreading code are vital in the recovery of spread spectrum signals .denote the cross - correlation between different c / a code as \triangleq \frac{1}{m}\sum_{m=0}^{m-1}s_{i'}[m - u]s_i^\ast[m].\end{aligned}\ ] ] when is large , the gold sequences or mls sequences are orthogonal between different satellites and approximately orthogonal between different shifts .this is indicated by the flat and -periodic _ cross spectral density _^{-\mathrm{i } u \omega t_c } = \delta[i'-i]+\epsilon_{i',i}(\omega),\end{aligned}\ ] ] where the error function is also -periodic with due to the periodicity . ]this flat property plays an essential role in simplifying the design presented later in this paper .after downconversion , the signal at the receiver can be modeled as where are the multipath channel taps with delays and doppler shifts from the satellite to the receiver , and is the additive white gaussian noise ( awgn ) with variance .combined with the signal model ( [ sig_model ] ) , the signal is represented as \phi_i(t - nt-\tau_{i , r})e^{\mathrm{i } \omega_{i , r}t}+v(t),\nonumber\end{aligned}\ ] ] where \triangleq h_{i , r}d_i[n] ] , among which the strongest set of satellites ( ) are picked for the purpose of triangulation - .note that the sequence \}_{n\in\mathbb{z}} ] sparse due to the wide difference in signal strength .conventionally , the acquisition and detection of strong satellite signals is achieved by correlating the incoming signal with a bank of match - filters s that are separately modulated by carriers and shifted in time . in this way, the paths corresponding to peaks in the magnitude of ] . 
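a minimal sketch of the conventional matched-filter acquisition just described: for every candidate doppler bin the incoming block is counter-rotated and circularly correlated with the local replica via the fft, and the delay-doppler bin with the largest magnitude is retained. the sampling rate, the doppler grid, and the use of a random spreading sequence in place of a true c/a code are illustrative assumptions rather than receiver parameters.

```python
import numpy as np

rng = np.random.default_rng(2)

chips = 1023
fs = 1.023e6                                  # one sample per chip, for simplicity (an assumption)
code = rng.choice([-1.0, 1.0], size=chips)    # stand-in for a local c/a code replica
t = np.arange(chips) / fs

# received block: a delayed, doppler-shifted replica buried in complex noise
true_delay, true_doppler = 417, 2500.0        # samples and hz (arbitrary test values)
rx = np.roll(code, true_delay) * np.exp(2j * np.pi * true_doppler * t)
rx += (rng.standard_normal(chips) + 1j * rng.standard_normal(chips)) / np.sqrt(2) * 2.0


def acquire(rx, code, fs, doppler_bins):
    """parallel code-phase search: one fft-based circular correlation per doppler bin."""
    code_fft = np.conj(np.fft.fft(code))
    best = (0.0, None, None)
    for fd in doppler_bins:
        wiped = rx * np.exp(-2j * np.pi * fd * np.arange(len(rx)) / fs)  # doppler wipe-off
        corr = np.abs(np.fft.ifft(np.fft.fft(wiped) * code_fft))         # all code phases at once
        k = int(np.argmax(corr))
        if corr[k] > best[0]:
            best = (corr[k], k, fd)
    return best


doppler_bins = np.arange(-10e3, 10e3 + 1, 500.0)   # +/- 10 khz in 500 hz steps (assumed grid)
peak, delay_hat, doppler_hat = acquire(rx, code, fs, doppler_bins)
print("estimated delay (samples):", delay_hat, " true:", true_delay)
print("estimated doppler (hz)   :", doppler_hat, " true:", true_doppler)
```

the fft turns the search over all code phases into a single circular correlation per doppler bin, which is what keeps the exhaustive delay-doppler search affordable in conventional receivers, but the number of such correlations still scales with the product of the doppler grid size and the number of satellite codes.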
from ( [ discretized_model ] ), we can express as where we defined ^{-\mathrm{i}n(\omega - k_{i , r}\delta\omega)t} ] .the derivation is identical to the development in and is therefore omitted .here ^h ] and is an matrix with ] is the filtered noise by matched filters ( generators ) and therefore has a cross - spectral density matrix , where is the gram matrix of all the generators defined by {(i',k',q'),(i , k , q ) } & = \frac{1}{t}\sum_{\ell=0}^{lm-1}\phi_{i',k',q'}^\ast\left(\omega-\frac{2\pi \ell}{t}\right)\phi_{i , k , q}\left(\omega-\frac{2\pi \ell}{t}\right).\end{aligned}\ ] ] exploiting the specific choice of sampling kernels and the structure of and , we can further analyze the output samples as stated below .[ theorem1 ] suppose that the following conditions hold , * the pulse shaping filter has a spectrum \mathrm{rect}_{2\pi l / t_c}(\omega) ] with error ; * the frequency search step size is chosen as and . if the error functions satisfy and for any , then the gram matrix of all the generators satisfies where is bounded perturbation matrix satisfying {(i',k',q'),(i , k , q)}\|=\mathcal{o}(1)\ll lm ] .the output samples =\left[\cdots , z_{i , k , q}[n],\cdots\right]^t ] is the time - domain filtered noise sample and is some bounded perturbation error with being the processing gain on the signal - to - noise ratio .see appendix a. note that the frequency step size corroborates the fact that for standard commercial gps systems , the step size is usually rads / s which fits the analysis here by choosing . also , we can see that the output ] and corrupted by noise .assuming large enough processing gain and small enough noise , the delay - doppler pairs and can be found by the location of the peaks / dominant entries in ] so that a subset of the satellite signals are locked and passed onto the tracking stage for finer extraction .if we ignore the noise for a moment , then ] , it can be seen that only few of the dominant entries are useful .our goal is to exploit the underlying sparsity in the signal model to design an acquisition scheme that requires far fewer correlators . instead of tackling the problem from a match - filtering viewpoint as in standard gps , we look at the problem from an analog cs perspective , which is one of the main contributions of this paper .the analog cs design outlined in requires a small number of samplers ( only twice the sparsity in a noiseless setting ) , and hence gives rise to substantial practical savings as analyzed later in section 7 .however , the solution is given in the frequency domain and in general does not admit a tractable form in time domain , which makes it hard to implement in practice .another contribution of this work lies in further exploiting the structure of gps signals so that the sampling kernels are easy to implement .the outputs from the compressive samplers can then be used to solve the sparse recovery problem of locating the dominant / peak values reflected in the vector ] obtained from a large number of correlators . in order to reduce the number of correlators while retaining the ability to correctly identify the peaks of ] at the samplers outputs and recover that sparse structure instead , by employing analog cs techniques . the signal model in ( [ discretized_model ] ) does not reflect any sparse structure , since it is expressed by a set of deterministic generators s defined by unknown parameters and . 
the sparsity we exploit is the sparsity of delay - doppler pairs pinpointed by the peaks / dominant entries in ] is identical to that of ] that correspond to the original coefficients \}_{r=1,\cdots , r} ] be , then the support contains the code - phase and doppler information for acquisition , with a sparsity of .the aim of analog cs is to exploit this sparsity in acquiring using fewer correlators . as the scheme of uses a set of compressive samplers , to obtain minimal measurements ,from which the sparse vector ] , if s are chosen properly . for noisy scenarios , the necessary number of channels is larger than the minimum , and evaluated numerically ; in any case , it is much smaller than that required by the standard scheme , as we will demonstrate in section 6 .this reduction is obtained by appropriately choosing a set of randomized correlators ^t ] is a length- vector containing the fourier transforms of the generators . with this choice of , it can be shown that . since is independent of frequency , transforming into the time domain ,the samples can be written as = \mathbf{b}{\boldsymbol{y}}[n]+\mathbf{w}[n],\quad n\in\mathbb{z}.\end{aligned}\ ] ] the vectors \} ], we can convert to a finite mmv problem using the continuous - to - finite ( ctf ) technique developed in .specifically , we first find a basis for the range space of \} ] , can be obtained by solving , where is the sparsest matrix satisfying the measurement equation .this problem can be treated using various mmv sparse recovery techniques . in our simulations, we use the rembo algorithm developed in .finally the support of ] is recovered , the acquisition of correct delay - doppler pair is automatically achieved by locating the dominants / peaks in the vector ] , while the standard gps scheme can either employ information for a single measurement ] . for the proposed compressive acquisition scheme ,if a single vector measurement is used to recover the sparse vector ] , the number of acquisition channels as well as the accuracy of the acquisition in comparison with the standard gps scheme .the method proposed in depends on the ability of physically implementing the sampling kernels in ( [ cs_kernel ] ) .therefore , we explore the structure of the matrix to provide practical insights on the design of such filters .suppose that the conditions - and the requirement on the error functions in theorem 1 hold . then the sampling kernels can then be chosen as the randomized correlators from we have the general solution of the compressive samplers according to the result in theorem 1 , using taylor expansion on the matrix inverse and ignoring high order terms scaled by , we can approximate the inverse by where the last approximation comes from the fact that contains negligible elements . therefore , the compressive samplers can be chosen directly as , which leads to the time - domain expression in the corrolary .the the filter responses of can be precomputed , and these channel outputs are sampled every to produce the test statistics that are going to be used in lieu of the coefficients ] is a sensing matrix that satisfies certain coherence properties such as rip . 
+( 2 ) apply the set of compressive sampling kernels and arrive at measurements & = \mathbf{b}{\boldsymbol{y}}[n ] + \mathbf{w}[n ] , \quad n\in\mathbb{z}.\end{aligned}\ ] ] ( 3 ) solve the jointly sparse recovery problem as in to recover the support of ] is available , the delay - doppler pairs are determined by the support and as in ( [ sparse_model ] ) .+ * remark * : note that although the samples are taken at , the physical implementation of the compressive multichannel filtering operation is likely to require digital processing at the chip rate .nevertheless , it is possible that a wise choice of the coefficients of the matrix can further help reduce computations while maintaining the identifiability of the parameters .analysis of this approach goes beyond our current scope .what we can certainly claim is that the number of computations is now controlled by the parameter , rather than by the number of possible generators that span all possible delays and dopplers .in fact , the sampling kernels are precomputed and used online .this is likely to reduce cost of computation , access to memory and storage . the performance of the compressive multichannel sensing structure degrades gracefully as decreases , giving designers degrees of freedom to choose a desirable operating pointin this section , we run numerical simulations to demonstrate the proposed cs acquisition scheme in gps receivers . in the simulation , out of satellites asynchronously transmit c / a signals that are received by the gps devices , where the codes are length- gold sequences with and .a total of navigation data bits are sent at the rate of ( i.e. , ).the transmit filter is modeled by a finite length pulse shaping filter when , and otherwise .the length is sufficiently large so that the response of the pulse in the frequency domain remains approximately flat , i.e. . to reduce the simulation overhead without incurring a loss of generality , we assume that our statistical model for the channel consists of uniformly distributed delays , that are bounded by a maximum delay spread of ; and of doppler shifts , that are uniformly distributed , over a frequency range delimited by .the channel gains are , with a multi - path propagation having paths per satellite . in order to identify fractional delays with a half - chip accuracy , the functions are chosen with a half - chip spacing such that the resolution of is achieved , and with a frequency resolution of that corresponds to steps around when .it follows that and . for simulation purpose , the sensing matrix generated as a random binary matrix ( while in practice it can be chosen as a deterministic binary matrix to simplify the implementation of correlators ) .using multiple measurements ,\mathbf{c}[2],\ldots,\mathbf{c}[50]\} ] for cs receiver v.s . the mf receiver for ( above ) , and for ( below),width=240,height=288 ] ) and , compared against the the mf receiver also processing measures , width=240,height=288 ] ) and , compared against the the mf receiver also processing measures , width=240,height=288 ] in all simulations , the attenuated components with distinct delays from each of the satellites are acquired by a number of channels , in contrast to the traditional .the performance is illustrated in terms of success rate and average root mean square error ( rmse ) , respectively , in fig .[ fig1:sm_offtg_uid ] and fig . 
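the three-step procedure above can be exercised end to end on synthetic data. the sketch below builds a jointly sparse coefficient matrix on an assumed delay-doppler grid, compresses it with a random binary sensing matrix as in steps (1) and (2), and recovers the common support from the multiple measurement vectors. a simultaneous-omp routine is used in place of the rembo algorithm purely to keep the example short, and all grid sizes, channel counts and noise levels are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

N = 4 * 41 * 10   # assumed grid: 4 satellites x 41 doppler bins x 10 code-phase bins
m = 24            # number of compressive branches, far fewer than N
T = 50            # number of vector measurements, as in the multiple-measurement mode
S = 8             # significant delay-doppler pairs (e.g. 2 paths for each of 4 satellites)

support = rng.choice(N, size=S, replace=False)           # common support across all snapshots
Y = np.zeros((N, T))
Y[support, :] = rng.normal(1.0, 0.2, size=(S, T)) * rng.choice([-1, 1], size=(S, 1))

B = rng.choice([-1.0, 1.0], size=(m, N)) / np.sqrt(m)    # random binary sensing matrix
Z = B @ Y + 0.05 * rng.standard_normal((m, T))           # z[n] = B y[n] + w[n]


def simultaneous_omp(Z, B, k):
    """greedy joint-sparse recovery; used here instead of the rembo algorithm only for brevity."""
    residual = Z.copy()
    picked = []
    for _ in range(k):
        scores = np.abs(B.T @ residual).sum(axis=1)      # correlation aggregated over snapshots
        for j in picked:
            scores[j] = -np.inf                          # never pick the same atom twice
        picked.append(int(np.argmax(scores)))
        Bs = B[:, picked]
        coef, *_ = np.linalg.lstsq(Bs, Z, rcond=None)    # joint least-squares refit
        residual = Z - Bs @ coef
    return sorted(picked)


estimated = simultaneous_omp(Z, B, S)
print("true support     :", sorted(support.tolist()))
print("recovered support:", estimated)
```

with a single snapshot the aggregation over columns disappears and recovery degrades, which mirrors the single-measurement loss discussed in the simulation results that follow.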
[ fig2:mm_offtg_uid ] .the success rate of acquisition is the probability of the proposed scheme to determine the strongest signals , which is shown in the figure against the number of channels and the snr .the conditional rmse is an average error between the true delay - frequency parameters and those associated to the strongest paths of the correctly identified satellites where with and |^2 = \operatorname*{arg\,max}_{q \in \mathcal{q},k \in \mathcal{k } } |y_{i , k , q}[n]|^2\ ] ] are the delay - frequency index pairs of strongest path associated to the satellite .similarly , the average rmse for the doppler is although the compressive acquisition scheme suffers from a compression loss , both fig.[fig1:sm_offtg_uid ] and fig.[fig2:mm_offtg_uid ] highlight its ability to perform closely as the traditional mf . when and db the active satellites can be identified satisfactorily which leads to great savings ( less than % of the original ) .the figures above illustrate acquisition performances using a single set of measurements ] . using a single measurement suffers from a performance loss ( db for at the rate of approximately ) .in fact , by reducing , the accuracy of ] are obtained by post - processing of the digitally sampled versions of at the chip rate and processed using a greedy algorithm orthogonal matching pursuit ( omp ) .note that using analog implementation in the acquisition can further bring down the complexity in terms of processing .we introduce a vector ] , to digitally capture and compress one instance of the signal according to \triangleq \langle x[m ] , \psi_{p}[m - nm ] \rangle.\ ] ] for the mf receiver , instead , we assume an oversampling ratio of to achieve half chip accuracy , i.e. , , and downsize the filterbank array . the sequence ] , of samples each , whose element is \}_{m } \triangleq x(nt + mt_c + ( i-1)t_c/2) ] , necessary to detect the vector elements are recorded and listed in table [ tab1:compx ] .the table outlines both single and multiple ( mmv ) modes for both the mf and cs schemes , and a breakdown of the omp recovery algorithm adopted by the cs receiver .this popular algorithm seeks the ( with ) non - zero elements of the sparse vector ] . at every iterationthe current estimate is subtracted from the observation vector ( * omp.1 * ) and the residual projected onto the dictionary elements ( * omp.2 * ) .then , the dictionary element linked to the largest coefficient ( * omp.3 * ) is retained and removed from the dictionary .the updated set of coefficients is obtained by projecting ] & & eq . + covariance ( optional ) & & mmv mode + svd of ( optional ) & & , mmv mode + residual update & & + inner products & & + maximum projection & & + least - squares projection & & + stopping criterion & & + * mf receiver * & * complexity * & * remarks * + correlations ] is sufficiently sparse , i.e. for gps applications , the number of operations needed to digitally compress ] and to project the residual of each omp iteration onto the dictionary ( * omp.2 * ) dominate the overall complexity of the cs receiver , leading to an order of . on the other hand ,the number of operations for the mf are mainly determined by the number of additions to compute the correlations , leading to . 
for hardware implementation this is attractive since the filterbank processing does not require complex multiplications .however , a similar saving can be added to the cs receiver by appropriately designing such that \} ] for the compressive scheme , with ( above ) and ( below ) as a function of , and compared against the mf receiver .each curve was run separately on a 64-bit i7 920 cpu running at 2.67 ghz.,width=288,height=240 ] when the ratio remains unchanged since all the additional steps ( table [ tab1:compx ] ) for the rembo technique require marginal increase of operations . when compared to , the mf spends more cpu time to accumulate the correlation outputs whereas the cs receiver experiences a reverse trend .the additional effort\} ] together with the spectrum \mathrm{rect}_{2\pi l / t_c}(\omega)]th entry of the matrix of over ] is a pseudo - correlation between the sequence \} ] being perturbed by phase - shifts determined by the mismatch of the doppler shift . based on the results in and and taking into account that |=1 ] to decay rapidly over such that dominant values only appear when there is a frequency component . by choosing , , we have ,\end{aligned}\ ] ] which results in a simplified expression of the pseudo - correlation as follows & = r_{i'i}[u]\delta[k - k_{i , r } ] + \mathcal{o}(1/m),\end{aligned}\ ] ] where ] and ignoring higher order perturbations , the non - zero entries of the matrix are explicitly written as {(i',k_{i , r},q),(i , r ) } & = \frac{m}{t}e^{\mathrm{i}\omega(q - q_{i , r})t_c}e^{-\mathrm{i}k_{i , r}(q - q_{i , r})\delta\omega t_c}\sum_{\ell=0}^{lm-1 } e^{-\mathrm{i}\frac{2\pi\ell}{t}(q - q_{i , r})t_c } \\ & ~~~+ \mathcal{o}\left(\epsilon_g(\omega)\right)+ \mathcal{o}\left(\epsilon_{i',i}(\omega)\right ) + \mathcal{o}(1).\end{aligned}\ ] ] with , we use the property \end{aligned}\ ] ] to further express the non - zero entries of at , and {(i , k_{i , r},q_{i , r}),(i , r ) } = lm + \mathcal{o}(\epsilon_g(\omega ) ) + \mathcal{o}\left(\epsilon_{i , i}(\omega)\right ) + \mathcal{o}(1),\quad \omega\in[-\pi / t,\pi / t].\end{aligned}\ ] ] on the other hand , the matrix can be expressed element - wise as {(i',k',q'),(i , k , q ) } & = \frac{1}{t}\sum_{\ell=0}^{lm-1 } \phi_{i'}^\ast\left(\omega - k'\delta\omega-\frac{2\pi \ell}{t}\right)e^{\mathrm{i}\left(\omega - k'\delta\omega-\frac{2\pi \ell}{t}\right)q't_c}\\ & ~~~~~~~\times \phi_i\left(\omega - k\delta\omega-\frac{2\pi \ell}{t}\right ) e^{-\mathrm{i}\left(\omega - k\delta\omega-\frac{2\pi \ell}{t}\right)qt_c}.\end{aligned}\ ] ] similarly , the expression is significant only when {(i',k , q'),(i , k , q ) } & = \frac{1}{t}e^{\mathrm{i } \omega(q'-q)t_c}e^{\mathrm{i}k(q - q')\delta\omega t_c } \sum_{\ell=0}^{lm-1}e^{\mathrm{i } \frac{2\pi \ell}{t } ( q - q')t_c } + \mathcal{o}(\epsilon_g(\omega ) ) + \mathcal{o}(\epsilon_{i',i}(\omega ) ) + \mathcal{o}(1).\end{aligned}\ ] ] when , according to , we have {(i',k , q),(i , k , q ) } & = lm + \mathcal{o}(\epsilon_g(\omega ) ) + \mathcal{o}(\epsilon_{i',i}(\omega ) ) + \mathcal{o}(1).\end{aligned}\ ] ] since the error functions satisfy and , the results in theorem 1 follow .s. f. cotter , b. d. rao , k. engan and k. kreutz - delgado,sparse solutions to linear inverse problems with multiple measurement vectors , " _ ieee trans ._ , vol . 53 , no . 7 , pp . 2477 - 2488 , jul . 2005 .j. chen and x. huo , theoretical results on sparse representations of multiple - measurement vectors , " _ ieee trans .sig . process .12 , pp . 4634 - 4643 , dec . 2006 .r. 
calderbank , s. howard and s. jafarpour , `` construction of a large class of deterministic sensing matrices that satisfy a statistical isometry property , '' to appear in _ ieee j. of select . topics in sig . process . _ i. jovanovic and b. beferull - lozano , `` oversampled a / d conversion and error - rate dependence of nonbandlimited signals with finite rate of innovation , '' _ ieee trans . on sig . process . _ , no . 6 , pp . 2140 - 2154 , jun . y. c. pati , r. rezaiifar and p. s. krishnaprasad , `` orthogonal matching pursuit : recursive function approximation with applications to wavelet decomposition , '' in _ proc . asilomar conference on signals , systems and computers _ , pp . 40 - 44 , 1993 .
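the omp steps itemised in the complexity table above (residual update, inner products, maximum projection, least-squares projection, stopping criterion) map directly onto the short implementation below. the dictionary, its dimensions, the sparsity level and the tolerance are placeholder choices and not the receiver's actual delay-doppler dictionary.

```python
import numpy as np

rng = np.random.default_rng(4)

n_atoms, m, sparsity = 400, 60, 5                 # placeholder problem sizes
D = rng.standard_normal((m, n_atoms))
D /= np.linalg.norm(D, axis=0)                    # unit-norm dictionary columns
x_true = np.zeros(n_atoms)
x_true[rng.choice(n_atoms, sparsity, replace=False)] = rng.normal(0, 1, sparsity)
c = D @ x_true + 0.01 * rng.standard_normal(m)    # observation vector


def omp(c, D, k, tol=1e-6):
    """orthogonal matching pursuit, following the step labels used in the table above."""
    residual = c.copy()                           # (omp.1) residual starts at the observation
    chosen, x_hat = [], np.zeros(D.shape[1])
    for _ in range(k):
        proj = D.T @ residual                     # (omp.2) project the residual on the dictionary
        for j in chosen:
            proj[j] = 0.0                         # exclude atoms already retained
        j_new = int(np.argmax(np.abs(proj)))      # (omp.3) keep the largest coefficient
        chosen.append(j_new)
        coef, *_ = np.linalg.lstsq(D[:, chosen], c, rcond=None)  # least-squares projection
        residual = c - D[:, chosen] @ coef        # (omp.1) updated residual for the next pass
        if np.linalg.norm(residual) < tol:        # stopping criterion
            break
    x_hat[chosen] = coef
    return x_hat


x_hat = omp(c, D, sparsity)
print("true support     :", np.flatnonzero(x_true).tolist())
print("recovered support:", np.flatnonzero(x_hat).tolist())
```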
|
in this paper , we propose an efficient acquisition scheme for gps receivers . it is shown that gps signals can be effectively sampled and detected using a bank of randomized correlators with much fewer chip - matched filters than those used in existing gps signal acquisition algorithms . the latter use correlations with all possible shifted replicas of the satellite - specific c / a code and an exhaustive search for peaking signals over the delay - doppler space . our scheme is based on the recently proposed analog compressed sensing framework , and consists of a multichannel sampling structure with far fewer correlators . the compressive multichannel sampler outputs are linear combinations of a vector whose support tends to be sparse ; by detecting its support one can identify the strongest satellite signals in the field of view and pinpoint the correct code - phase and doppler shifts for finer resolution during tracking . the analysis in this paper demonstrates that gps signals can be detected and acquired via the proposed structure at a lower cost in terms of number of correlations that need to be computed in the coarse acquisition phase , which in current gps technology scales like the product of the number of all possible delays and doppler shifts . in contrast , the required number of correlators in our compressive multichannel scheme scales as the number of satellites in the field of view of the device times the logarithm of number of delay - doppler bins explored , as is typical for compressed sensing methods . gps , compressive sensing , spread spectrum , analog compressed sensing
|
one of the most fundamental features of quantum mechanics is the fact that it is impossible to prepare states that have sufficiently precise simultaneous values of incompatible observables .the most well - known form of this statement is the position - momentum uncertainty relation , which places a lower bound on the product of standard deviations of the position and momentum observables , for a particle in any possible quantum state .this relation was first formalised by kennard during the formative years of quantum mechanics following heisenberg s discussion of his `` uncertainty principle '' .this `` uncertainty relation '' was quickly generalised by robertson to arbitrary pairs of incompatible ( i.e. , non - commuting ) observables and into what is now the textbook uncertainty relation .let and be two observables and =ab - ba ] , then robertson s uncertainty relation can be expressed as }\right|.\ ] ] these uncertainty relations express a quantitative statement about the measurement statistics for and when they are _ measured many times , separately , on identically prepared quantum systems_. such relations are hence sometimes called _ preparation uncertainty relations _, since they propose fundamental limits on the measurement statistics for any state preparation .this is in contrast to heisenberg s original discussion of his uncertainty principle which he expressed as the inability to _ simultaneously measure _ incompatible observables with arbitrary accuracy . as such , quantum uncertainty relations have a long history of being misinterpreted as statements about joint measurements .it is only much more recently that progress has been made in formalising _measurement uncertainty relations _ that quantify measurement disturbance in this way , although there continues to be some debate as to the appropriate measure of measurement ( in)accuracy and of disturbance .the recent interest in measurement uncertainty relations has highlighted an oft - overlooked aspect of robertson s inequality : its _ state dependence_. indeed , the right - hand side of eq .depends on the expectation value }\right| ] and ={\mathbf{b}}\cdot { \mathbf{r}} ] and are the commutator and anticommutator , is the kronecker delta , and are antisymmetric and symmetric structure constants of , respectively , and where the summation over repeated indices is implicit . for qutritsthe operators to are the gell - mann matrices , for example . as for the two - dimensional case , taking we can write any traceless observable as and thus represent it by its generalised bloch vector .an arbitrary state can similarly be written in terms of its generalised bloch vector as where now ( with equality for pure states ) . however , for it is _ not _ the case that _ any _ vector with represents a valid quantum state : the set of valid quantum states ( i.e. , the bloch vector space ) is a strict subset of the unit sphere in dimensions .the expectation value of in the state can still be expressed as , so that lemma [ lemma : generalform ] , with and defined as before , remains valid for higher - dimensional systems .this allows one to derive state - independent relations for the expectation values , as we did for qubits in section [ sec : uncertainties ] .note that these relations may not be tight , as the vectors saturating may not correspond to valid quantum states . 
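to make the bloch-vector bookkeeping above concrete, the following sketch builds the eight gell-mann matrices, extracts the generalised bloch vector of a random pure qutrit state, and checks numerically that the expectation value of a traceless observable reduces to an inner product of bloch vectors while its variance does not. the normalisation tr(g_j g_k) = 2 delta_jk with rho = i/3 + (1/2) r . g is an assumption adopted for the check and may differ from the paper's convention by constant factors.

```python
import numpy as np

# the eight gell-mann matrices, normalised so that tr(g_j g_k) = 2 delta_jk
g = np.zeros((8, 3, 3), dtype=complex)
g[0][0, 1] = g[0][1, 0] = 1
g[1][0, 1], g[1][1, 0] = -1j, 1j
g[2][0, 0], g[2][1, 1] = 1, -1
g[3][0, 2] = g[3][2, 0] = 1
g[4][0, 2], g[4][2, 0] = -1j, 1j
g[5][1, 2] = g[5][2, 1] = 1
g[6][1, 2], g[6][2, 1] = -1j, 1j
g[7] = np.diag([1, 1, -2]) / np.sqrt(3)

rng = np.random.default_rng(5)
psi = rng.standard_normal(3) + 1j * rng.standard_normal(3)
psi /= np.linalg.norm(psi)
rho = np.outer(psi, psi.conj())                     # a random pure qutrit state

# generalised bloch vector r_j = tr(rho g_j), under the convention rho = i/3 + (1/2) r . g
r = np.array([np.trace(rho @ gj).real for gj in g])
print("|r|^2 =", r @ r, " (pure-state value 2(d-1)/d =", 2 * (3 - 1) / 3, ")")

# a traceless observable a = a . g has expectation value <a> = a . r
a = rng.standard_normal(8)
A = np.tensordot(a, g, axes=1)
print("tr(rho a) =", np.trace(rho @ A).real)
print("a . r     =", a @ r)

# its variance, however, needs more than <a>: delta^2 a = tr(rho a^2) - (a . r)^2
print("variance  =", np.trace(rho @ A @ A).real - (a @ r) ** 2)
```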
for _ binary - valued _ measurements ,is state - dependent for , so it also fails to give the type of state - independent uncertainty relation we would like in this scenario . ]contrary to the case of qubits however , for one can not directly translate these relations to express them solely in terms of standard deviations or entropies ; indeed , because of the larger number of eigenvalues and of the geometry of the generalised bloch sphere , the expectation value does not contain all the information about the uncertainty of .for instance , the relation between , and is not simply given by eq .( where ) , but by where is a -dimensional vector with components .thus , the uncertainty of for a state no longer depends only on the angle between the bloch vectors and .furthermore , in contrast to qubit operators , for three - or - more - level systems there are pairs of non - commuting observables that have a common eigenstate and hence can simultaneously have zero variance .in general , there is no simple analytical description of the bloch space for qudits , so it seems implausible to give generalised forms of tight , state - independent uncertainty relations for such systems . for certain choices of and a complete analysis of the set of obtainable values for and is nevertheless tractable ( at least for qutrits ) , and it is possible to give tight state - independent uncertainty relations .similarly , for well chosen and a more general higher - dimensional analysis is possible if one is prepared to settle for relations that are not tight . in ref . , for example , such behaviour is analysed for angular momentum observables in orthogonal directions . such an approach , however , lacks the generality of the approach possible for qubits , and is necessarily , at least in part ,by exploiting the relationship between the expectation values of pauli observables and standard measures of uncertainty ( such as the standard deviation ) , we have derived tight state - independent uncertainty relations for pauli measurements on qubits .these uncertainty relations completely characterise the allowed values of uncertainties for such observables . 
furthermore ,we give the bounds on all these relations in terms of the norm of the bloch vector representing the state which is directly linked to the purity of the state so that , if a bound on this is known , tighter ( partially state - dependent ) uncertainty relations can be obtained ; for pure states and the most general form is recovered .the approach we take is general and , although we explicitly give tight uncertainty relations for arbitrary pairs and triples of pauli observables , it can be used to give tight uncertainty relations for sets of arbitrarily many observables .while we have focused on giving these uncertainty relations in terms of the standard deviations and variances of the observables , we showed how these can easily be rewritten in terms of shannon entropies to give tight entropic uncertainty relations , and did so explicitly for pairs of observables .these relations can furthermore be translated into uncertainty relations for any measure of uncertainty that depends only on the probability distribution , , of an observable for a state , such as rnyi entropies .indeed , one may reasonably argue that the product is the _ only _ parameter that an uncertainty measure for can depend on , and thus that our approach covers all kinds of preparation uncertainty relations for qubits .although we have given explicit uncertainty relations only for pauli observables , it is simple to extend them to arbitrary qubit measurements .to do so , note that one can write any observable in a two - dimensional hilbert space as , with and .assuming ( as otherwise is simply proportional to the identity operator and one trivially has for all states ) , the observable with is a pauli observable , and we have and .one can thus give an uncertainty relation involving by writing the corresponding relation that we derived for , and then replacing by and or by the appropriate expression given above ; one can proceed similarly for the other observables in question .finally we note that , although we have not done so here , it is also possible to go beyond projective measurements and give similar relations for positive - operator valued measures ( povms ) for qubits with binary outcomes .the two elements of any such povm can be written in the form 12 & 12#1212_12%12[1][0] * * , ( ) link:\doibase 10.1007/bf01397280 [ * * , ( ) ] , link:\doibase 10.1103/physrev.34.163 [ * * , ( ) ]link:\doibase 10.1103/physreva.67.042105 [ * * , ( ) ] link:\doibase 10.1103/physreva.69.052113 [ * * ( ) , 10.1103/physreva.69.052113 ] link:\doibase 10.1073/pnas.1219331110 [ * * , ( ) ] link:\doibase 10.1103/physrevlett.111.160405 [ * * , ( ) ] ( ) link:\doibase 10.1103/physreva.89.022106 [ * * , ( ) ] link:\doibase 10.1103/revmodphys.86.1261 [ * * , ( ) ] * * , ( ) link:\doibase 10.1103/physreva.90.012332 [ * * , ( ) ] link:\doibase 10.1103/physrevlett.113.260401 [ * * , ( ) ] link:\doibase 10.2307/2372390 [ * * , ( ) ] * * , ( ) \doibase http://dx.doi.org/10.1103/physrevlett.60.1103 [ * * , ( ) ] ( ) link:\doibase 10.1016/0375 - 9601(90)90460 - 6 [ * * , ( ) ] * * , ( ) * * , ( ) link:\doibase 10.1103/physreva.77.042110 [ * * , ( ) ] ( ) link:\doibase 10.1103/physreva.86.024101 [ * * , ( ) ] link:\doibase 10.1038/srep12708 [ * * , ( ) ] link:\doibase 10.1103/physreva.89.012129 [ * * , ( ) ] link:\doibase 10.1088/1367 - 2630/17/9/093046 [ * * , ( ) ] _ _ ( , , ) `` , '' ( ) , link:\doibase 10.1103/physreva.89.022124 [ * * , ( ) ] link:\doibase 10.1103/physreva.68.032103 [ * * , ( ) ] link:\doibase 10.1088/1751 - 
8113/41/23/235303 [ * * , ( ) ] link:\doibase 10.1016/s0375 - 9601(03)00941 - 1 [ * * , ( ) ] and , eds ., _ _ ( , , )
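to illustrate, in the simplest qubit case, the kind of complete characterisation discussed above, the sketch below samples states from the bloch ball and checks an ellipse-type constraint on the pair of expectation values of two pauli observables (equivalently on their variances, since delta^2 a = 1 - <a>^2 for a pauli observable a = a . sigma). the constraint used here, <a>^2 + <b>^2 - 2 (a . b) <a> <b> <= 1 - (a . b)^2, is derived directly from the bloch-ball picture and is offered as an illustration rather than as a quotation of the paper's numbered relations; the angle between the observables and the sample sizes are arbitrary.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
pauli = np.stack([sx, sy, sz])


def rho_from_bloch(r):
    return 0.5 * (np.eye(2) + np.tensordot(r, pauli, axes=1))


def expectation(rho, n):
    """expectation value of the pauli observable n . sigma for a unit vector n."""
    return np.trace(rho @ np.tensordot(n, pauli, axes=1)).real


rng = np.random.default_rng(6)
a = np.array([1.0, 0.0, 0.0])
b = np.array([np.cos(1.1), np.sin(1.1), 0.0])      # two pauli observables at an assumed angle
cos_t = a @ b

worst_slack = np.inf
for _ in range(20000):
    r = rng.standard_normal(3)
    r *= rng.uniform() ** (1 / 3) / np.linalg.norm(r)       # uniform draw in the bloch ball
    ea, eb = expectation(rho_from_bloch(r), a), expectation(rho_from_bloch(r), b)
    lhs = ea ** 2 + eb ** 2 - 2 * cos_t * ea * eb
    worst_slack = min(worst_slack, 1 - cos_t ** 2 - lhs)    # should stay >= 0 for every state

print("minimum slack over random states:", worst_slack)      # non-negative up to rounding

# a pure state whose bloch vector lies in the plane of a and b saturates the bound
r_sat = (a + b) / np.linalg.norm(a + b)
ea, eb = expectation(rho_from_bloch(r_sat), a), expectation(rho_from_bloch(r_sat), b)
print("saturation check:", ea ** 2 + eb ** 2 - 2 * cos_t * ea * eb, "vs", 1 - cos_t ** 2)
print("corresponding variances:", 1 - ea ** 2, 1 - eb ** 2)
```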
|
the well - known robertson schrdinger uncertainty relations have state - dependent lower bounds which are trivial for certain states . we present a general approach to deriving tight state - independent uncertainty relations for qubit measurements that completely characterise the obtainable uncertainty values . this approach can give such relations for any number of observables , and we do so explicitly for arbitrary pairs and triples of qubit measurements . we show how these relations can be transformed into equivalent tight entropic uncertainty relations . more generally , they can be expressed in terms of any measure of uncertainty that can be written as a function of the expectation value of the observable for a given state .
|
the performance of wireless communication systems is significantly affected by the interference since resources such as time , frequency , and space are often shared .the two - user interference channel , which is the smallest multiple source - destination pair communication channel with interference , has received a great deal of attention . however , single - letter capacity expressions for discrete memoryless and gaussian interference channels are still unknown in general and known only for some limited cases . for strong and very strong interference channels , the capacity region is known in general .recently , the sum - rate capacity for the very weak ic has been discovered based on genie - aided outer bounding . motivated by the sum - rate capacity for the very weak ic , the capacity region for general gaussian interference channel is characterized to within one bit in .recently , many people have focused on the interference relay channel , which is an interference channel with a relay helping the communication . for the interference relay channelwhere the relay s transmit signal can be heard from only one source , the capacity is known under a certain condition using an interference forwarding scheme .rate splitting and decode - and - forward relaying approach is considered in , showing that transmitting the common message only at the relay achieves the maximum achievable sum rate for symmetric gaussian interference relay channels . for a gaussian interference channel with a cognitive relay having access to the messages transmitted by both sources , proposed a new achievable rate region . in network information theory , it is typically assumed that the relay s transmit symbol depends only on its past received symbols. however , sometimes it makes more sense to assume that the relay s current transmit symbol depends also on its current received symbol , which is a better model for studying af type relaying if the overall delay spread including the path through the relay is much smaller than the inverse of the bandwidth .this channel has recently received attention .relay - without - delay channel is a single source - destination pair communication system with a relay helping the communication such that it encodes its transmit symbol not only based on its past received symbols but also on the current received symbol . for the gaussian relay - without - delay channel , instantaneous amplify - and - forward relaying achieves the capacity if the relay s transmit power is greater than a certain threshold . in this paper, we consider interference relay - without - delay channel where two source nodes want to transmit messages to their respective destination nodes and a relay without delay helps their communication .we study both discrete memoryless and gaussian memoryless interference relay - without - delay channels .we present an outer bound using genie - aided bounding for the discrete memoryless interference relay - without - delay . for the gaussian interference relay - without - delay channel ,we define strong and very strong gaussian interference relay channels motivated by the gaussian strong and very interference channels . 
for strong and very strong gaussian interference relay - without - delay channels ,we show new outer bounds using genie - aided outer bounding such that both receivers know the received sequence at the relay .we also propose an achievable scheme using gaussian codebooks , simultaneous non - unique decoding , and instantaneous amplify - and - forward relaying .the complexity of the proposed relaying is very low compared to other more complicated relaying schemes including decode - and - forward and compress - and - forward since it only has symbolwise operations . despite its simplicity , we show that it can achieve the capacity exactly when the relay s transmit power is greater than a certain threshold for the very strong case . for the strong case ,the same is true if some additional conditions are satisfied .this is surprising since it means that such a simple symbolwise relaying can be optimal in a complicated communication scenario where there are two source - destination pairs interfering with each other .the proposed scheme would be practically useful since it can be optimal and at the same time it is very simple .this paper is organized as follows : in section 2 , we describe the discrete memoryless interference relay - without - delay channel model and derive an outer bound . in section 3 , we define strong and very strong gaussian interference relay - without - delay channel and propose outer bounds for each case .we propose an achievable scheme based on instantaneous amplify - and - forward relaying and show that this achievable scheme achieves the capacity for strong and very strong gaussian interference relay - without - delay channels under certain conditions . to show the existence of strong and very strong gaussian interference relay - without - delay channels , we provide some examples .the discrete memoryless interference relay - without - delay channel consists of two senders , two receivers , and a set of conditional probability mass functions on , one for each .it is denoted by .then , the interference relay channel can be depicted as in figure [ figure 3.1 ] .a code for the discrete memoryless interference relay - without - delay channel consists of two message sets \triangleq \{1,2,\ldots,2^{nr_1}\} ] , two encoders that assign codewords and to each messages ] , respectively .the encoding function of a relay is defined as , for . for decoding , decoderi assigns a message or an error to each received sequence , for .the average probability of error is defined as .a rate pair is said to be achievable if there exists a sequence of codes such that .the capacity region of the discrete memoryless interference relay - without - delay channel is defined as the closure of the set of achievable rate pairs . for the discrete memoryless interference relay - without - delay channel ,we get the following outer bound on the capacity region . for the discrete memoryless interference relay - without - delay channel ,the capacity region is contained in the set of rate pairs such that for some .the joint probability mass function ( pmf ) induced by the encoders , relay , and decoders can be written in the following form : from fano s inequality , we get where tends to zero as .an upper bound on can be expressed as follows : where ( a ) follows from fano s inequality , ( b ) and ( c ) follow since is a function of for , respectively and , and ( d ) holds since and . 
in the above, is a time sharing random variable such that ] , } ] and .for this gaussian interference relay - without - delay channel , very strong gaussian interference relay - without - delay channel condition is satisfied since for this channel , ( [ eq : pr ] ) can be expressed as follows : }h_{22}h_{r1}-h_{1r}^{[1]}h_{11}h_{r2})^2+(h_{2r}^{[2]}h_{22}h_{r1}-h_{1r}^{[2]}h_{11}h_{r2})^2}{(h_{1r}^{[2]}h_{2r}^{[1]}-h_{1r}^{[1]}h_{2r}^{[2]})^2 h_{11}^2 h_{22}^2 } \left((h_{r1}^2+h_{r2}^2)p+1\right)=\frac{2}{3 } \leq p_r\end{aligned}\ ] ] for , the capacity is given as follows : a discrete memoryless interference relay - without - delay channel is said to be strong interference relay - without - delay channel if for all .for the gaussian interference relay - without - delay channel , the strong interference relay - without - delay channel condition is equivalent to and . can be expressed as follows : where ( a ) holds since and ( b ) holds since is independent of .similarly , , , and can be expressed as follows : if and , then the gaussian bc s to ] are both degraded , and hence are more capable .thus , we get and for all . thus , we get and for all . to prove the other direction , assume that and . assuming and , we get and . for the gaussian interference relay - without - delay channel with strong interference , for all and for all for all .for , lemma 2 holds since where ( a ) holds since and ( b ) holds from the strong interference condition .now assume that lemma 2 holds for , i.e. , for all and for all .let and where , .then , the above condition is equivalent to the following : for all since .this implies that for all .then can be expressed as follows : where ( a ) holds since and translation property of differential entropy , ( b ) holds since , , , and , ( c ) follows from the assumption since , , and the memoryless channel property , and ( d ) follows from the strong interference condition since and .similarly , we can prove that .using lemma 2 , we get the following outer bound on the capacity region for the strong gaussian interference relay - without - delay channel . for the strong gaussian interference relay - without - delay channel ,the capacity region is contained in the set of rate pairs such that from fano s inequality , we get where tends to zero as .an upper bound on can be expressed as follows : where ( a ) follows from fano s inequality , ( b ) follows from lemma 1 , ( c ) holds since , and ( d ) holds since and . in the above, is a time sharing random variable such that ] and .for this gaussian interference relay - without - delay channel , the strong interference gaussian relay - without - delay channel condition is satisfied since furthermore , we have for this channel , ( [ eq : pr2 ] ) can be expressed as follows : }h_{22}h_{r1}-h_{1r}^{[1]}h_{11}h_{r2})^2+(h_{2r}^{[2]}h_{22}h_{r1}-h_{1r}^{[2]}h_{11}h_{r2})^2}{(h_{1r}^{[2]}h_{2r}^{[1]}-h_{1r}^{[1]}h_{2r}^{[2]})^2 h_{11}^2 h_{22}^2 } \left((h_{r1}^2+h_{r2}^2)p+1\right)=\frac{2}{3 } \leq p_r\end{aligned}\ ] ] for , the capacity is given as follows : this paper , we studied the interference relay - without - delay channel .the main difference between the interference relay - without - delay channel and the conventional interference relay channel is in that the relay s current transmit symbol depends on its current received symbol as well as on the past received sequences . 
for this channel, we defined very strong and strong gaussian interference relay - without - delay channel motivated by the strong and very strong gaussian interference channels .we presented new outer bounds using genie - aided outer bounding such that both receivers know the received sequences of the relay . using the characteristics of the strong interference relay - without - delay channel , we suggested a tighter outer bound for the strong gaussian interference relay - without - delay channel .we also proposed an achievable scheme based on instantaneous amplify - and - forward relaying .surprisingly , the proposed instantaneous amplify - and - forward relaying scheme actually achieves the capacity exactly under certain conditions for both very strong and strong gaussian interference relay - without - delay channels despite its simplicity .the proposed scheme would be practically useful since it can be optimal and at the same time it is very simple .x. shang , g. kramer , and b. chen , `` a new outer bound and the noisy - interference sum - rate capacity for gaussian interference channels , '' _ ieee trans .inf . theory _ ,2 , pp . 689 - 699 , feb .2009 .s. sridharan , s. vishwanath , s. a. jafar , and s. shamai(shitz ) , `` on the capacity of the cognitive relay assisted gaussian interference channel , '' in _ proc .ieee international symposium on information theory _ , 2008 .n. lee and s. a. jafar `` aligned interference neutralization and the degrees of freedom of the 2 user interference channel with instantaneous relay , '' _ preprint available on arxiv arxiv:1102.3833 _ , feb . 2011 .
|
in this paper , we study the interference relay - without - delay channel which is an interference channel with a relay helping the communication . we assume the relay's transmit symbol depends not only on its past received symbols but also on its current received symbol , which is an appropriate model for studying amplify - and - forward type relaying when the overall delay spread is much smaller than the inverse of the bandwidth . for the discrete memoryless interference relay - without - delay channel , we show an outer bound using genie - aided outer bounding . for the gaussian interference relay - without - delay channel , we define strong and very strong interference relay - without - delay channels and propose an achievable scheme based on instantaneous amplify - and - forward ( af ) relaying . we also propose two outer bounds for the strong and very strong cases . using the proposed achievable scheme and outer bounds , we show that our scheme can achieve the capacity exactly when the relay's transmit power is greater than a certain threshold . this is surprising since the conventional af relaying is usually only asymptotically optimal , not exactly optimal . the proposed scheme can be useful in many practical scenarios due to its optimality as well as its simplicity . interference channel , interference relay channel , interference relay - without - delay channel , and amplify - and - forward relaying
|
while feedback can not increase the capacity of a point - to - point memoryless channel , it can decrease the probability of error as well as the complexity of the encoder and decoder . for an awgn channel without feedback , it is known that the decay in the probability of error as a function of the blocklength is at most exponential in the absence of feedback ( i.e. the lowest achievable probability of error has the general form ) . , is equivalent to , and is equivalent to .] however , when a noiseless delay - less infinite capacity feedback link is available , a simple sequential linear scheme ( the schalkwijk - kailath scheme ) can achieve the capacity of this channel with a doubly exponential decay in the probability of error as a function of the blocklength ( i.e. it has the general form ) . this shows the significant role of feedback in reducing the probability of error .the schalkwijk - kailath scheme requires a noiseless feedback link with infinite capacity .in fact , the schalkwijk - kailath scheme does not provide the best possible error decay rate given such an ideal feedback link .in particular , it is shown in that in the presence of an ideal noise - free delay - less feedback link , the capacity of the awgn channel can be achieved with a probability of error that decreases with an exponential order which is linearly increasing with blocklength ( i.e. it has the general form ) . is used to denote function composition . ] however , once the feedback channel is corrupted with some noise , the benefits of feedback in terms of the error probability decay rate can drop .in fact , when this corruption corresponds to an additive white gaussian noise on the feedback channel , the schalkwijk - kailath communication scheme ( or any other linear scheme ) fails to achieve any nonzero rate with vanishing error probability .furthermore , in this case , the achievable error decay for any coding scheme can be no better than exponential in blocklength , similar to the case without feedback . in this work ,we consider a case where the feedback link is noiseless and delay - less but rate - limited .the advantages of rate - limited feedback in reducing the coding complexity are investigated in . in this paper, we study the benefits of rate limited feedback in terms of decreasing the error probability . assuming a positive and feasible ( below capacity ) rate is to be transmitted on the forward channel , we characterize the achievable error decay rates in two cases : the case where the feedback rate , , is lower than , and the case where . for the first scenario , we show that the best achievable error probability decreases exponentially in the code blocklength ( i.e. ) and provide an upper bound for the error exponent . for the second scenario , we propose an iterative coding scheme which achieves a doubly exponential error decay ( i.e. 
) .since a feedback rate equal to the data rate is sufficient for achieving a doubly exponential error decay , one might suspect that further increasing the feedback rate may not lead to a significant gain .we dispel this suspicion by generalizing our proposed iterative scheme to show that if , an order exponential decay is achievable .the latter result is consistent with , in which the achievable error probabilities are characterized in terms of the number of times the ( infinite capacity ) feedback link is used .interestingly , our results show that the error exponent as a function of the feedback rate has a strong discontinuity at the point ; it is finite for and infinite for ( due to the achievability of a doubly exponential error decay ) .although only can lead to a super - exponential error decay , even for smaller feedback rates , we expect to have a strictly higher error decay rate as compared to the case with no feedback .in particular we show that for , the error exponent is at least higher than the error exponent in the absence of feedback .the problem of communication over the awgn channel with limited feedback has been previously considered assuming different types of corruption on the feedback channel . in particular , the corruption on the feedback channel has been modeled as additive gaussian noise in and and as quantization noise in .another type of feedback corruption has been considered in where only a subsequence of the channel outputs can be sent back noiselessly to the transmitter .a fundamental distinction between our model and the ones considered above is that in our model the receiver has `` full control '' over what is transmitted and received on the feedback link .this is due to the fact that under the rate - limited feedback scenario , the feedback link is assumed to be both noiseless and active in the sense that at each time , the feedback message is allowed to be an encoded function of all the information available at the receiver at that time .communication with imperfect feedback has also been investigated in , and for variable - length coding strategies .our model on the other hand captures a scenario where the blocklength and therefore the decoding delay is fixed .the rest of this paper is organized as follows : in section ii we present the system model and the problem formulation . in section iiiwe consider the case where the feedback rate is higher than the forward rate .specifically , using a simple iterative coding scheme we show the achievability of an order exponential error decay when . in sectioniv we consider the case where and show that in this case the decay in probability of error is at most exponential ( finite first order error exponent ) . although a feedback rate less than can not provide super - exponential error decay , we will show in section v that it increases the error exponent by at least .section vi shows that the necessary and sufficient conditions for super - exponential error decay remain the same even if we express the feedback limitation as a constraint on the per channel use feedback rate instead of the average feedback rate .finally , section vii concludes the paper .+ * notation . * throughout this paper we represent the norm operator by and the expectation operator by $ ] .the notation `` '' is used for the natural logarithm , and rates are expressed in nats .the complement of a set is denoted by .we denote the indicator function of the event by . given a function , is equivalent to . 
given a function and a positive integer , the iterate of the function , i.e. , is denoted by .we consider communication over a block of length through an awgn channel with rate - limited noiseless feedback .the channel output at time is given by where is a white gaussian noise process with and is the channel input at time .the finite - alphabet feedback signal at time is denoted by and is assumed to be decoded at the transmitter ( of the forward channel ) without any error or delay .we will denote the feedback sequence alphabet by .the message to be transmitted ( on the forward link ) is assumed to be drawn uniformly from the set .an encoding strategy is comprised of a sequence of functions where determines the input as a function of the message and the feedback signals received before time , the feedback strategy consists of a sequence of functions where determines the feedback signal as a function of the channel outputs up to time , the decoding function gives the reconstruction of the message after receiving all the channel outputs the probability of error for message is denoted by , where the average probability of error is defined as given the above setup , a communication scheme with forward rate , feedback rate and power level is comprised of a selection for the feedback sequence alphabet , the encoding strategy , the feedback strategy and the decoding function , such that & \leq & np,\end{aligned}\ ] ] where the expectation is with respect to the messages and the noise .over all such communication schemes , we represent the one with minimum average probability of error with the tuple and denote the corresponding minimum error probability by . in the case where the feedback rate is zero ,we simply drop the feedback rate term and use and to represent the optimal non - feedback code and the corresponding error probability , respectively .the capacity of the awgn channel is denoted by , where for the communication system described above , the first order error exponent or simply the error exponent is defined as where a positive value of the error exponent implies that the error decay rate is at least exponential .we also define higher order error exponents .in particular , given , the order error exponent is defined as given the above definitions , a communication system with strictly positive order error exponent has an order exponential error decay ( i.e. ) .when the feedback rate is higher than the forward rate , we can achieve a super - exponential ( in blocklength ) error decay . this result is presented in the following theorem .[ thm : r_rfb_1 ] for any which satisfies and , a strictly positive second order error exponent is achievable : see appendix. the above result can be further generalized as follows .[ thm : r_rfb_l ] given an integer , for any which satisfies and , a strictly positive order error exponent is achievable : see appendix .we use a class of simple iterative coding schemes to prove the above achievability results .in particular , to achieve a doubly exponential error decay we propose a multi - phase coding scheme as follows : in the first phase , called the initial transmission , the message is sent using a non - feedback code that occupies a big portion of the transmission block ( out of ) . in the second phase , called the intermediate decoding / feedback phase , the receiver decodes the message based on the received signals and feeds back the decoded message to the transmitter , using nats of the available feedback . 
depending on the validity of the decoded message the transmitter decides to stay silent or perform boosted retransmission . in the casethe message is decoded correctly , the transmitter stays silent during the rest of the transmission time .otherwise , it sends a sign of failure in the next ( ) transmission and uses the remaining portion of the transmission block ( ) to send the message with an exponentially ( in block length ) high power .while retransmission with such a large power guarantees a doubly exponential error decay , it does not violate the power constraint since the probability of incorrect decoding in the second phase is exponentially ( in block length ) low . to guarantee an exponential decay whenthe available feedback rate is , for some integer , the above scheme can be modified to include rounds of intermediate decoding / feedback and boosted retransmission , where retransmission at each round , if needed , is done with exponentially higher power than the previous retransmission .note that in comparison with the schalkwijk - kailath ( sk ) scheme presented in , the above iterative technique needs less feedback ( nats instead of the infinite rate required by the sk scheme ) and provides better error decay rate .in the previous section we have shown that by utilizing a feedback link with a rate higher than the forward rate , we can reduce the error probability significantly as compared to the case with no feedback .the high reliability of the iterative scheme presented in the last section is due to the fact that the initial decoding error at the receiver ( which is a rare event ) is perfectly detectable at the transmitter .therefore it can be corrected by retransmitting the message with high power without violating the average power constraint .the perfect error detection at the transmitter is obtained from the feedback of the initial decoded message at the receiver . however ,when the feedback rate is lower than the forward rate , the receiver has to use a source code to compress its decoded message before feeding it back .the transmitter must then reconstruct the uncompressed decoded message to detect any error .since this reconstruction involves some first order exponential ( in blocklength ) error decay ( corresponding to the source coding error exponent ) , the error detection is erroneous with the same decay rate .therefore , the mis - detection of the receiver error due to the compression on the feedback link dominates the error probability . while the above intuitive explanation justifies the failure of the block retransmission schemes in achieving a super - exponential error decay , one might still hope that such a decay rate can be achieved using other schemes .for example one alternative is to look at the problem from a stochastic control point of view and use a rate - limited variant of the recursive feedback schemes presented in and . 
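before turning to the formal converse , the rough numerical comparison below contrasts the three regimes . it assumes a phase - 1 code whose error probability decays as exp(-n e_nofb(r)) , models the undetected - error event for r_fb < r through the hashing argument used later in section v ( collision probability about exp(-n r_fb) ) , and models the boosted retransmission for r_fb >= r as a single antipodal symbol of power p divided by the phase - 1 error probability . the exponent value , rates and power are illustrative assumptions only , not quantities taken from this text .

```python
import numpy as np
from scipy.stats import norm

def log10_q(x):
    # log10 of the gaussian tail probability Q(x)
    return norm.logsf(x) / np.log(10.0)

# illustrative assumptions (not values from the paper)
p      = 1.0    # power per channel use, unit-variance forward noise
e_nofb = 0.3    # assumed error exponent of the phase-1 (non-feedback) code
r_fb   = 0.25   # feedback rate in nats/use for the r_fb < r case

print("   n     no feedback      r_fb < r       r_fb >= r   (log10 of error prob.)")
for n in (20, 40, 80):
    l_e1 = -n * e_nofb / np.log(10.0)        # phase-1 / no-feedback error
    # r_fb < r: the transmitter only sees a hash of the decoded message, so a
    # phase-1 error goes undetected with prob ~ exp(-n*r_fb): still exponential
    l_lo = l_e1 - n * r_fb / np.log(10.0)
    # r_fb >= r: the error is perfectly detected and the message is re-sent in a
    # single channel use with boosted power p/p_e1 (keeps the average power ~ p)
    l_hi = l_e1 + log10_q(np.sqrt(p * np.exp(n * e_nofb)))
    print(f"{n:4d}   {l_e1:13.1f} {l_lo:13.1f} {l_hi:15.1f}")
```

the output shows the gap widening with n : without feedback and with r_fb < r the exponent grows only linearly in n ( the latter larger by roughly r_fb ) , whereas for r_fb >= r the log - error blows up doubly exponentially , which is the behaviour established in section iii .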
in this section ,we show that no matter what communication scheme is used , one can not achieve infinite first order error exponent .[ thm : r_fb_r ] given , the first order error exponent is upper bounded by where and is the solution to see appendix .the proof , which is rather lengthy , can be explained using the following observation .it is shown in that given a peak power constraint , the best achievable error decay is exponential .therefore , in order to achieve a super - exponential error decay , the transmitter should be able to boost the power under certain circumstances .however , given the expected power constraint , the power can be boosted only under rare occasions where the receiver would decode wrongly otherwise . therefore , there should be enough feedback bits to communicate the occurrence of those rare occasions to the sender .it turns out that this requirement is met only if the number of possible feedback messages ( ) is at least as large as the number of forward messages ( ) .note that the error exponent upper bound provided in the above theorem stays bounded as approaches from below . on the other hand , we showed in the previous section that for any feedback rate higher than , the error exponent is infinite ( doubly exponential decay ) .these two facts lead to an interesting conclusion : the error exponent as a function of the feedback rate has a sharp discontinuity at the point .the above theorem provides an upper bound on the first order error exponent for feedback rates below .we conjecture that a similar result may be obtained on the boundedness of the order error exponent for feedback rates below .we have shown in the previous section that the probability of error when can not decay faster than exponential as a function of the blocklength .although the feedback in this case does not provide an infinite error exponent , we still expect that the error exponent should be improved in the presence of feedback as compared to the non - feedback scenario . in this sectionwe will show that the error exponent with feedback is at least above the non - feedback error exponent .the main result of this section is the following theorem .[ thm : r_fb_r_lb ] for all rates , such that , the error exponent is lower bounded as follows where is the error exponent for the awgn channel in the absence of feedback .see appendix .the achievability scheme for the above result is constructed using the multi - phase scheme proposed in the proof of theorem [ thm : r_rfb_1 ] , in conjunction with a compression technique to reduce the rate of feedback in the intermediate decoding / feedback phase from to .using such a scheme , the error probability is dominated by the probability of error mis - detection .this error term is the product of the probability of error in the initial transmission phase ( ) and the probability ( ) that the compression loss hides this event from the transmitter .in the previous sections we focused on a scenario where the _ average _ rate over the whole transmission block was constrained to be lower than . under that constraint , the receiver can use the available feedback ( nats ) any time during the transmission . in particular , using the coding scheme proposed in section iii , the receiver collects all the feedback bits and uses them in one feedback transmission at the end of the first phase . in this section we consider a _ per channel use _ feedback rate constraint . 
under this constraint, the receiver can not feed back more than nats after each channel use .this translates to the following constraint on the size of the feedback signal alphabet at each time : given the per channel use feedback constraint , if and , a strictly positive order error exponent is achievable : see appendix .the above result is proved using a combination of the scheme presented in section iii and a block markov coding scheme which is described in the appendix .figure [ fig : blockmarkov ] illustrates an example of this iterative coding scheme for the case where .we considered the impact of rate - limited noiseless feedback on the error probability in awgn channels .we first showed that if the feedback rate that exceeds the rate of the data transmitted on the forward channel , one can achieve a super - exponential decay in probability of error as a function of the code blocklength .our achievability result is based on a multi - phase scheme in which an initial transmission of the message , if decoded incorrectly , is followed by the retransmission of the message with boosted power .a key requirement in this scheme is for the transmitter to perfectly detect the error in the initial transmission every time it happens . the minimum feedback rate required to perfectly communicate the initial decoded message is and therefore our scheme fails to achieve a super - exponential error decay for .we showed that this is true for any scheme .that is , is also a necessary condition for achieving a super - exponential error decay . while we provided an upper bound for the error exponent when , we also showed that even in this case , the use of feedback increases the error exponent by at least . for the casein which , for some positive integer , we generalized our multi - phase iterative scheme to prove the achievability of an fold exponential ( in blocklength ) error decay .the above results are illustrated in figure [ fig : cartoon ] .it can be seen that the error exponent as a function of the feedback rate has a sharp discontinuity at .we showed that the above necessary and sufficient condition for achieving a super - exponential error decay holds whether the feedback limitation is expressed as a constraint on the _ average _ feedback rate or on the _ per channel use _ feedback rate . ] .[ fig : cartoon ] note that our results address the asymptotic behavior of the probability of error in terms of the blocklength and therefore may provide limited insight for codes with small blocklength . in particular , for small values of , one might expect the per channel feedback rate constraint to lead to a higher error probability than a scenario with average feedback rate constraint . on the other hand ,the former is a more practical scenario as it implicitly captures the delay associated with sending data on the feedback link . in this paperwe showed the advantages of feedback in terms of improving the decay rate of the error probability .a subject for future research is to explore the other advantages of interactive communication in terms of reducing the coding complexity and energy consumption .one interesting problem to be addressed is how to use rate - limited feedback to construct sk like schemes which do not need complex block encoding decoding .fix such that .define and , where is chosen such that holds for large enough . 
choose the feedback signal domains as follows we construct two non - feedback codes and , where for , pick the corresponding codeword from and send it in the first channel uses . based on the received signals andusing the optimal non - feedback decoding function for code , the transmitter decodes the message and sends back its decision to the transmitter if , then otherwise , the next input will be and then the codeword corresponding to is picked from the codebook and is transmitted in the remaining transmissions . on the other side , the receiver compares with the threshold . if , then the remaining received signals are ignored and the decoded message in the first try is announced as the final decision if , the receiver decodes the message based on the last received signals and using the optimal non - feedback decoding function for code .the resulting message is then announced as the final decision using the above scheme , the average power used in the forward link will be therefore our scheme satisfies the power constraint .also the average feedback rate is which meets the constraint on the feedback link .there are three cases in which an error can happen .the first case is when the first decoding is correct but the receiver receives a failure signal from the transmitter due to the noise on the transmission .the probability of this event is upper bounded by where is the tail probability of the standard normal distribution .the second case is when the first decoding is wrong but the failure signal is not decoded correctly at the receiver .the probability of this event is upper bounded by the third case is when the first decoding fails and the failure signal is decoded correctly , but the second decoding also fails .the probability of this event satisfies using the exponential upper bound for the , we have where is some constant . by positivity of the error exponent for rates less than the capacity and since , we know that for any , there exists a fixed such that for large enough values of . combining ( [ eq : firsttwo ] ) and ( [ eq : spupperbound ] ) , we obtain which shows the probability of the first two types of errors decays doubly exponentially in the blocklength .it remains to show that the third type of error is also upper bounded by a doubly exponential term . to show that , note that on the right hand side of ( [ eq : thirderror ] ) , the rate is at most times the capacity achieved by snr .however , the snr is exponentially ( in ) higher than for large values of and therefore given ( [ eq : dexp0 ] ) and the above inequality , the proof will be complete if we show that decays doubly exponentially as a function of . to show this, we can use the fact that for communication rates ( in nats / channel use ) less than the following upper bound on error probability holds in the absence of feedback : for any and for large enough values of , where take sufficiently large such that i.e. then using ( [ eq : low ] ) leads to let s partition the whole transmission block into sub - blocks , the first of which has length .we choose the remaining sub - blocks to have equal lengths . 
in the first sub - block, the transmitter sends the message using the non - feedback gaussian codebook with rate and power .after transmission in the sub - block , the receiver feeds back the message it has decoded within that sub - block .if the decoded message matches the transmitted one , the transmitter stays silent for the rest of the time .otherwise , it sends a failure alarm and retransmits the message in the sub - block using a non - feedback gaussian codebook with rate .the power of the alarm signal and the power of codebook are chosen to be inversely proportional to the probability of decoding error in the first sub - blocks .that is , where is the total probability of error in the first sub - blocks .the -fold exponential error decay can be shown inductively .given that the probability is -fold exponential in terms of the blocklength ( the case of was shown in the previous theorem ) , the power at the sub - block ( if transmission is needed ) is -fold exponential in blocklength .this in turn leads to an -fold exponential error decay at the end of the sub - block .note that both the transmission power and the feedback rate in the above scheme satisfy the problem constraints .let us first introduce some key definitions which will be used in our proof .we define the decoding region for message as also for each feedback signal sequence , let s define the feedback decision region a key quantity in our proof is the joint distribution of the feedback signal sequence and the output sequence given the transmitted message .for simplicity , we drop the subscript and use to denote the density of the output sequence and the feedback sequence conditional on the transmission of the message . defining , we can write where . in this derivation , ( )is a consequence of the probability chain rule .equation ( ) is derived using the fact that for any two random variables and any deterministic mapping , is a markov chain .finally , ( ) is a direct result of the markov chain relationship and also the equation .another quantity of interest will be the probability of using a feedback signal sequence conditional on the transmission of a message , with the above definitions we can now proceed with the proof .suppose the theorem does not hold .that is , let s assume there exists such that the following inequality can hold for arbitrarily large : given such s , the above inequality implies that for at least half of the messages , we have removing the messages which do not satisfy the above , we obtain a codebook with the rate of at least which , for arbitrarily large , is arbitrarily close to .therefore , ( ) implies the existence of a code with rate for which the _ per message error probability _ can be less than its right hand side for arbitrarily large and for some .let us define . to prove the theorem , we will show that there exists such that for any , the inequality can not hold for all messages . let us fix , to be determined later , and assume that for some , there exists a communication scheme for which holds for all .given such a communication scheme , for each , we construct an initial bin including a subset of feedback signal sequences as follows where is a fixed constant , to be determined later . defining as ,we can write in the following algorithm we update the content of each bin sequentially . 1 .start with .2 . pick two distinct messages , such that there exists a feedback sequence where both and include .3 . assuming ( without loss of generality ) , remove from .4 . 
increase by and set , for all .5 . set .if , go to step , otherwise stop .note that step is feasible since whenever this step is executed the number of non - empty bins are greater than the cardinality of which is .therefore , there should exist at least one feedback sequence which appears in two bins . also note that for any and any integer assume are the messages picked in step 2 and is the sequence removed from the bin in step and at iteration of the above algorithm .given such a -tuple , a major part of the rest of the proof is devoted to obtaining a lower bound for .first for any , let s use the triangle inequality to write similarly , we have combining ( ) , ( ) and the assumption in step 3 of our algorithm that , we have using this inequality and the derivation in ( ) , we have denoting the complement of a set by , we can write where ( ) is due to the fact that and are disjoint sets and the last inequality is a consequence of ( ) . using the assumption and rearranging the above inequality , we can write to complete our lower bound for , in the following , we find a lower bound for the integral in ( ) .first note that since , we can write where ( ) follows from the assumption that ( ) holds for all the messages and the fact that picked in step and at the iteration of the algorithm is in bin and therefore is a member of . also inequality ( ) is secured by the appropriate choice of .now let s define the sphere as where will be determined later .partitioning the set into and and using , we can write where we have used the chernoff bound in the last step . in that inequality defined as where is the semi - invariant moment - generating function of the chi - square distribution corresponding to : =\frac{1}{2}\log(\frac{1}{1 - 2s}).\end{aligned}\ ] ] replacing in and optimizing that equation we obtain which is positive and increasing for all and tends to infinity as .choose such that for some , to be determined later .using and we can write where we guarantee the validity of the last step by the appropriate choice of . now let s derive the lower bound for the integral in as follows the inequality ( ) along with lead to substituting in the above inequality , we obtain by choosing in ( ) small enough such that , we conclude that for any feedback sequence which is dropped in any iteration of our algorithm : the above inequality is sufficient for us to prove the theorem .noting that the cardinality of the set at the end of our algorithm is , we can write \\&=&\sum_{m\in\mathcal{m}}\frac{1}{|\mathcal{m}|}\sum_{u^n\in \mathcal{u}}p(u^n|m ) ||f^{(n)}(m , u^{n})||^2\\ & \geq & \sum_{m\in\mathcal{m}\backslash j}\frac{1}{|\mathcal{m}|}\sum_{u^n\in f_0(m)}p(u^n|m ) ||f^{(n)}(m , u^{n})||^2 \\ \label{eq : powerinset } & \geq & \frac{1}{|\mathcal{m}| } \sum_{m\in\mathcal{m}\backslash j}\sum_{u^n\in f_0(m)}p(u^n|m ) n(p+\frac{\gamma}{8 } ) \\ & = & \frac{n(p+\frac{\gamma}{8})}{|\mathcal{m}| } \sum_{m\in\mathcal{m}\backslash j}{{\rm pr}}\{f_0(m)|m\}. \\ \label{eq : fmprob } & \geq&\frac{n(p+\frac{\gamma}{8})}{|\mathcal{m}| } \sum_{m\in\mathcal{m}\backslash j}(1-\delta ) \\ \label{eq : choosedelta}&\geq & n(p+\frac{\gamma}{16})(1-e^{-n(r - r_{\scriptscriptstyle fb } ) } ) \\& > & np.\end{aligned}\ ] ] in the above derivation , ( ) is obtained using ( ) and the fact that for all , all the s in are removed at the end of the algorithm . 
also , is a consequence of and is satisfied by choosing .the last inequality is secured by the appropriate choice of .the above inequality shows the conflict of the power constraint and the assumption that can hold for some , where is chosen such that for any given the assumption of , it is clear that there exists such that all the above three inequalities hold and this completes the proof .[ proof of theorem [ thm : r_fb_r_lb ] ] we prove the achievability of the above error exponent using an iterative scheme similar to the one used in the proof of theorem [ thm : r_rfb_1 ] .we use the exact same structure and notation as in the previous iterative scheme and just express the distinctions of this scheme .the main distinction is that here , instead of feeding back the decoded message ( i.e. ) , the receiver sends back a function of its decoded message where is the feedback decision function . after receiving , the transmitter compares the received feedback with the feedback corresponding to the original message and stays silent if otherwise , it sends the failure alarm and retransmits the message with high power exactly similar to what was described in the proof of theorem [ thm : r_rfb_1 ] .considering the range of the feedback function , it is clear that this scheme meets the feedback constraint .also it is easy to show that the power constraint is also met . in particular , note that the probability of retransmission in our scenario is which is less than or equal to and therefore the expected power used here is less than the case considered in theorem [ thm : r_rfb_1 ] .also note that the types of errors seen here include the three types of errors in the earlier case ( false negative , false positive and wrong decoding at the receiver ) plus the error due to the fact that a subset of the decoding errors in the first block are not recognized by the transmitter .that is , the error corresponding to the event which we call an _ error mis - detection event _ , must also be considered as a possible error event .we showed earlier that the algorithm in theorem [ thm : r_rfb_1 ] achieves a doubly exponential error decay , where the error is associated with the first three types of errors .therefore , the probability of error for the current scenario can be upper bounded by the sum of two terms : the probability associated with an error mis - detection event and the probability associated with the other three types of errors : for some . given that the feedback rate is less than the feedforward rate , we expect the error mis - detection event to dominate the total error probability .hence , the proof will be complete if we show that there exists a sequence of feedback encoding functions such that we show the existence of such a feedback encoder sequence using a random coding argument . given and a feedback function , let s define the set for each as we can observe that , in fact , determining the function is equivalent to partitioning into the sets . nowlet s consider all the possible feedback functions for which for all .that is , let s consider all the equal partitionings of the set . 
from this set of functions ,let s pick the function uniformly randomly and use it as the feedback encoder function .we denote the partitioning associated with by .now let s compute ,\end{aligned}\ ] ] where the expectation is with respect to the randomness in picking the feedback function .we have &=\\ \nonumber & e[\sum_{m=1}^{e^{nr}}{{\rm pr}}\{m { \rm\ is \sent}\ } \sum_{i\in \mathcal{m } , i\neq m } { { \rm pr}}\{\hat{m}_1=i|m { \rm\ is \sent}\ } \mathbf{1}_{\{g^*(i)= g^*(m)\}}]&=\\ \label{eq : randomcoding}&\sum_{m=1}^{e^{nr } } { { \rm pr}}\{m { \rm\ is \sent}\}\sum_{i\in \mathcal{m } , i\neq m } { { \rm pr}}\{\hat{m}_1=i|m { \rm\ is \sent}\ } e[\mathbf{1}_{\{g^*(i)= g^*(m)\}}]&.\end{aligned}\ ] ] for each pair , we can write & = & { { \rm pr}}\{g^*(i)= g^*(m)\}\\ \nonumber & = & \sum_{k=1}^{e^{nr_{\scriptscriptstyle fb } } } { { \rm pr}}\{g^*(i)=k|g^*(m)=k\ } { { \rm pr}}\{g^*(m)=k\}\\ \label{eq : prgigm } & = & \sum_{k=1}^{e^{nr_{\scriptscriptstyle fb } } } { { \rm pr}}\{i\in \mathcal{v}^*(k)|m\in \mathcal{v}^*(k)\ } { { \rm pr}}\{m\in \mathcal{v}^*(k)\}.\end{aligned}\ ] ] since is uniformly randomly chosen from all equal partitionings of , we can write for and for any substituting the above equality in ( ) we get =\frac{e^{n(r - r_{\scriptscriptstyle fb})}-1}{e^{nr}-1}.\end{aligned}\ ] ] we can now combine ( ) and ( ) and conclude &= & \frac{e^{n(r - r_{\scriptscriptstyle fb})}-1}{e^{nr}-1}\sum_{m=1}^{e^{nr } } { { \rm pr}}\{m { \rm\ is \sent}\}\sum_{i\in \mathcal{m } , i\neq m } { { \rm pr}}\{\hat{m}_1=i|m { \rm\ is \sent}\ } \\\nonumber & = & e^{-n(r_{\scriptscriptstyle fb}+o(1 ) ) } { { \rm pr}}\{{\rm decoding\ error\ in\ first \ block } \}\\ \nonumber & \leq & e^{-n(r_{\scriptscriptstyle fb}+o(1 ) ) } p_e(n , r , p)\\ \nonumber & \leq & e^{-n(r_{\scriptscriptstyle fb}+e_{\scriptscriptstyle nofb}(r)+o(1))}.\end{aligned}\ ] ] the above inequality implies that the expected ( with respect to encoder selection ) probability of error mis - detection event is less than the right hand side of ( ) .therefore , we can conclude that there exists at least one feedback encoding function among the ones from which we randomly selected that satisfies ( ) .this completes the proof . for each , there exists such that .let s fix and consider the integer which satisfies we divide the whole transmission block into sub - blocks each with length .we then partition each sub - block into three parts of lengths , and exactly the same as the partitioning in the scheme proposed in section iii . in the first portion of sub - block , message which contains nats of new informationis transmitted on the forward channel using a non - feedback gaussian codebook similar to the first phase of the algorithm described in section iii .after the transmission , this message is decoded and the decoded message is transmitted back on the feedback channel during the first portion of the sub - block and with the rate nats per channel use . by the end of the feedback transmission ( end of the first portion of sub - block ) , the transmitter can detect the decoding error .if , the failure alarm is sent in the second portion of the sub - block and the message is retransmitted with high power in the third portion of the block . 
in fact , for each sub - block we apply the 3-phase iterative scheme of section iii with the distinction that the error detection and retransmission for each sub - block occurs one sub - block after the original transmission .the forward rate per channel use in each sub - block is defining , the rate per channel use will be less than . using the results of section iii , we can conclude that there exists such that the error probability for the message is upper bounded by where the last inequality is a consequence of ( ) . using the union bound , the total error probability will be bounded as follows where the last inequality is again a consequence of ( ) . taking , the above inequality completes the proof .j. p. m. schalkwijk and t. kailath , `` a coding scheme for additive noise channels with feedback part i : no bandwidth constraint , '' _ ieee trans .info . theory _ ,it-12 , pp .172 - 182 , 1966 .r. g. gallager and b. nakibolu , `` variations on a theme by schalkwijk and kailath , '' arxiv:0812.2709v2 , 16 august 2009 .kim , a. lapidoth , and t. weissman , `` the gaussian channel with noisy feedback '' in _ proc . of the international symposium on information theory ( isit-07 )_ , ( nice , france ) , pp . 1416 - 1420 , june 2007 .young - han kim , amos lapidoth and tsachy weissman , `` error exponents for the gaussian channel with active noisy feedback , '' _ ieee transactions on information theory _1223 - 1236 , march 2011 .james m. ooi , `` a framework for low - complexity communication over channels with feedback , '' ph.d .dissertation , mit , 1997 .nuno c. martins and tsachy weissman , `` coding for additive white noise channels with feedback corrupted by quantization or bounded noise , '' _ ieee transactions on information theory _ ,volume 54 , issue 9 , sept . 2008 page(s ) : 4274 - 4282 m. agarwal , d. guo and m. honig , `` error exponent for gaussian channels with partial sequential feedback '' , _ ieee international symposium on information theory ( isit ) _ , nice , france , june 2007 . s. c. draper , k. ramchadran , b. rimoldi , a. sahai , and d. n. c. tse , `` attaining maximal reliability with minimal feedback via joint channel - code and hash - function design , '' in _ proc .allerton conf .communication , control and computing _ , monticello , il , sep .
|
we investigate the achievable error probability in communication over an awgn discrete time memoryless channel with noiseless delay - less rate - limited feedback . for the case where the feedback rate is lower than the data rate transmitted over the forward channel , we show that the decay of the probability of error is at most exponential in blocklength , and obtain an upper bound for increase in the error exponent due to feedback . furthermore , we show that the use of feedback in this case results in an error exponent that is at least higher than the error exponent in the absence of feedback . for the case where the feedback rate exceeds the forward rate ( ) , we propose a simple iterative scheme that achieves a probability of error that decays doubly exponentially with the codeword blocklength . more generally , for some positive integer , we show that a order exponential error decay is achievable if . we prove that the above results hold whether the feedback constraint is expressed in terms of the average feedback rate or per channel use feedback rate . our results show that the error exponent as a function of has a strong discontinuity at , where it jumps from a finite value to infinity .
|
by deploying tens to hundreds of antennas at the base station ( bs ) and simultaneously serving multiple users in the same time - frequency resource block , massive multiple - input - multiple - output ( mimo ) achieves unprecedented gain in both spectral efficiency and radiated energy efficiency , accommodating the stringent requirements of future 5 g systems - .the performance gains , however , come at the expense of a linear increase in hardware cost as well as circuitry power consumption , and therefore massive mimo will be more attractive if low - cost , energy - efficient solutions are available . basically , if each bs antenna is configured with an unabridged radio frequency ( rf ) chain , then the only way to alleviate hardware cost and circuitry power consumption is to use economical low - power components when building the rf chains .these components , however , generally have to tolerate severe impairments , such as quantization noise , nonlinearity of power amplifier , phase noise of oscillator , and i / q imbalance . by modeling the aggregate effect of the impairments ( including quantization noise ) as an additional gaussian noise independent of the desired signal , the authors of investigated the impact of hardware impairments on the system spectral efficiency and radiated energy efficiency , and concluded that massive mimo exhibits some degree of resilience against hardware impairments .further , employing a similar model the authors of derived a scaling law that reveals the tradeoff among hardware cost , circuitry power consumption , and the level of impairments .although the adopted stochastic impairment models are not rigorous theoretically ( for example , the quantization noise inherently depends on the desired signal ) , the analytical results in - closely match those obtained by a more accurate hardware - specific deterministic model , as demonstrated by . among all the components in a rf chain , high - resolution adc ( typically with a bit - width exceeding 10 ) is particularly power - hungry , especially for wideband systems , since the power consumption of an adc scales roughly exponentially with the bit - width and linearly with the baseband bandwidth . lowering the bit - width of the adopted adc will therefore bring in considerable savings on cost and energy .this fact actually has motivated extensive research on low - cost , energy - efficient design of wireless communication systems through employing low - resolution or even one - bit adcs to build the rf chain ; see , e.g. , for additive white gaussian noise ( awgn ) channels , for ultra - wideband channels , and - for mimo channels .regarding massive mimo , the impact of coarse quantization has been investigated only recently . 
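before reviewing that line of work , the bit - width / bandwidth scaling just mentioned can be made concrete with a common figure - of - merit model , roughly p_adc ≈ fom · f_s · 2^b ; the small sketch below evaluates it for a few bit - widths to show why trading high - resolution adcs for one - bit adcs saves so much power . the fom value , sampling rate and antenna count are illustrative assumptions , not figures taken from this text .

```python
# minimal numeric sketch of the often-used adc power model
#   p_adc ~= fom * f_s * 2**b     (fom: energy per conversion step)
# all numbers below are illustrative assumptions
fom = 500e-15          # 500 fJ / conversion-step (assumed)
f_s = 1e9              # 1 gsample/s baseband sampling rate (assumed)
n_antennas = 200       # bs antennas, two adcs (i and q) per antenna

for bits in (1, 4, 8, 12):
    p_one = fom * f_s * 2**bits                 # watts per adc
    p_total = 2 * n_antennas * p_one            # all i/q adcs at the bs
    print(f"{bits:2d}-bit adc: {p_one*1e3:9.3f} mw each, {p_total:8.2f} w total")
```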
in , the authors evaluated the achievable rates of an uplink one - bit massive mimo system adopting qpsk constellation , least - squares ( ls ) channel estimation , and maximum ratio combiner ( mrc ) or zero - forcing combiner ( zfc ) .the authors of - further revealed that enhancement of achievable rates can be attained by high - order modulation such as 16-qam .the underlying reason is that , even for one - bit massive mimo , the amplitude of the transmit signal can still be recovered provided that the number of bs antennas is sufficiently large and that the signal - to - noise ratio ( snr ) is not too high .optimizations of pilot length and adc bit - width were performed in and respectively , both adopting mrc at the receiver .recently , the authors of analyzed the achievable rates of one - bit massive mimo in frequency - selective channels , employing linear minimum mean squared error ( mmse ) channel estimator and linear combiners such as mrc and zfc .beyond that , various channel estimation and data detection algorithms have been proposed for massive mimo under coarse quantization .for example , near maximum likelihood ( nml ) detector and channel estimator were proposed in for one - bit massive mimo in frequency - flat fading channels . in ,channel estimation and data detection algorithms were developed for quantized massive mimo in frequency - selective fading channels .particularly , tradeoffs between error rate performance and computational complexity were investigated therein based on mismatched quantization models .techniques based on message passing algorithm ( and its variants ) were also applied to quantized massive mimo systems , such as - for frequency - flat fading channels and - for frequency - selective fading channels . in general , - conclude that massive mimo is somewhat robust to coarse quantization , validating the potential of building massive mimo by low - resolution adcs . except , all the aforementioned works have assumed a homogeneous - adc architecture ; that is , all the antennas at the bs are equipped with low - resolution adcs of the same bit - width .although such an architecture seems feasible in terms of achievable rate or bit error rate ( ber ) , it has several practical issues , including data rate loss in the high snr regime - , error floor for linear multi - user detection with 1 - 3 bit quantized outputs - , overhead and challenge of channel estimation - , and of time - frequency synchronization from quantized outputs . from this perspective, high - resolution adcs can still be useful for effective design of massive mimo receivers .motivated by such consideration , in early works - we have proposed a mixed - adc architecture for massive mimo , where a small proportion of the high - resolution adcs are reserved while the others are replaced by one - bit adcs . 
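as a concrete picture of this front end , the sketch below applies the mixed - adc model to one vector of antenna samples : antennas flagged by the adc switch vector keep their ( idealized , distortion - free ) high - resolution samples , while the remaining antennas retain only the signs of the real and imaginary parts . the unit - power normalization of the one - bit output is an assumption made here for illustration .

```python
import numpy as np

rng = np.random.default_rng(1)

def mixed_adc(y, delta):
    """apply the mixed-adc front end to one vector of antenna samples.

    y     : complex array, one sample per bs antenna
    delta : 0/1 array; 1 -> pair of high-resolution adcs (kept undistorted here),
            0 -> pair of one-bit adcs (only the signs of re/im survive)
    """
    one_bit = (np.sign(y.real) + 1j * np.sign(y.imag)) / np.sqrt(2.0)
    return np.where(delta.astype(bool), y, one_bit)

n_ant, kappa = 8, 2                           # 8 antennas, 2 pairs of high-res adcs
delta = np.zeros(n_ant)
delta[:kappa] = 1                             # switch high-res adcs to the first antennas

y = (rng.normal(size=n_ant) + 1j * rng.normal(size=n_ant)) / np.sqrt(2.0)
print(mixed_adc(y, delta))
```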
for frequency - flat channels, shows that the mixed - adc architecture is able to achieve an attractive tradeoff between spectral efficiency and energy efficiency .moreover , compared with the homogeneous - adc architecture , the mixed - adc architecture is inherently immune to most of the aforementioned concerns .for example , channel estimation and time - frequency synchronization in the mixed - adc architecture are more tractable than those in the homogeneous - adc architecture , benefiting from the reserved high - resolution adcs .it is perhaps also worth noting that the mixed - adc architecture is much more flexible to the time - varying property of the users demand for mobile data traffic . to be specific ,when the users sum rate requirement is low , part of the bs antennas can be deactivated .then high - resolution adcs may be adopted in the channel training phase while one - bit adcs may be employed in the data transmission phase .compared with the homogeneous - adc architecture , the mixed - adc architecture in this situation incurs much lower channel estimation overhead and will therefore achieve higher energy efficiency . in this paper , we leverage the information - theoretical tool of generalized mutual information ( gmi ) to quantify the achievable rates of the mixed - adc architecture in frequency - selective channels .the main contributions of this paper are summarized as follows : * we modify the mixed - adc architecture to make it suitable for frequency - selective channels , adopting ofdm to handle inter - symbol interference ( isi ) and a linear frequency - domain equalizer to mitigate inter - carrier interference ( ici ) . * for static simo channels , we derive an explicit expression of the gmi , and based on which further optimize the linear frequency - domain equalizer .the analytical results are then extended to ergodic time - varying simo channels , where tight lower and upper bounds of the gmi are derived .the impact of frequency diversity and imperfect csi on the system performance is investigated as well .* we then extend the analytical framework to the multi - user scenario .ber performance is also examined for a practical convolutional codec .* we develop a reduced - complexity algorithm , by which the computational complexity of the linear frequency - domain equalizer is reduced from to , where is the number of bs antennas and is the number of subcarriers .extensive numerical studies under various setups reveal that , with only a small proportion of high - resolution adcs , the mixed - adc architecture attains a large portion of the achievable rate of ideal conventional architecture , and significantly outperforms antenna selection with the same number of high - resolution adcs .in addition , the mixed - adc architecture in the multi - user scenario remarkably lowers the error floor encountered by one - bit massive mimo .these observations validate the merits of the mixed - adc architecture for effective design of massive mimo receivers . throughout this paper , vectors and matricesare given in bold typeface , e.g. , and , respectively , while scalars are given in regular typeface , e.g. 
, .we let , and denote the conjugate , transpose and conjugate transpose of , respectively .the -th element of vector is symbolized as , and in the meantime , the -th element of matrix is symbolized as .notation denotes a diagonal matrix , with the diagonal elements numerated in the bracket .for a positive integer , we use to represent the set of positive integers no larger than , i.e. , . for a positive real number , we use to denote the minimum integer that satisfies .notation stands for the distribution of a circularly symmetric complex gaussian random vector with mean vector and covariance matrix .subscripts and are used to indicate the real and imaginary parts of a complex number , respectively , e.g. , , with being the imaginary unit .we further use ] , and accordingly, its circulant matrix form by . note that can be decomposed as , with the diagonal matrix given by the channel outputs at the -th bs antenna over channel uses ( with cyclic prefix removed ) can be collectively written as where collects the independent and identically distributed ( i.i.d . ) complex gaussian noise . to fulfill signal processing in the digital domain, needs to be quantized by a pair of adcs , one for each of the real and imaginary parts . for the mixed - adc architecture , there are only pairs of high - resolution adcs available at the bs and all the other pairs of adcs are with only one - bit resolution .thus the quantized output can be expressed as here , , , and .particularly , means that is quantized by a pair of high - resolution adcs , whereas indicates that is quantized by a pair of one - bit adcs .we further define an adc switch vector ^t ] , and it should be optimized according to and to maximize the user s achievable rate . for analytical convenience , we let the decoder adopt a generalized nearest - neighbor decoding rule ; that is , upon observing \}_{l=1}^{l} ] denotes the codeword for message in the frequency domain , and is the codeword length measured in ofdm symbol .we restrict the codebook to be drawn from a gaussian ensemble ; that is , each codeword is a sequence of i.i.d . random vectors , and all the codewords are mutually independent .such a choice of the codebook ensemble satisfies an average power constraint of , and therefore we define snr as , hereafter letting for concision .for the mixed - adc architecture , since there is no closed - form expression of the channel capacity , in this paper , we leverage gmi to evaluate its achievable rates .the gmi is a lower bound of the channel capacity , and more precisely , it characterizes the maximum achievable rate of specific i.i.d . random codebook ensemble ( gaussian ensemble here ) and specific decoding rule ( generalized nearest - neighbor decoding here ) such that the average decoding error probability ( averaged over the codebook ensemble ) is guaranteed to vanish asymptotically as the codeword length grows without bound . as a performance metric, it has proven convenient and useful in several important scenarios , such as fading channel with imperfect csi at the receiver and channels with transceiver distortion - , - .we exploit the theoretical framework in and to derive the gmi of the mixed - adc architecture .following essentially the same steps as ( * ? ? 
?c ) , we obtain an explicit expression of the gmi as follows .[ prop : prop_3 ] assuming gaussian codebook ensemble and generalized nearest - neighbor decoding , the gmi for given and is where the performance indicator follows from |^2 } { q\mathcal{e}_s\mathbb{e}[\hat{\tilde{\mathbf{x}}}^{\dag}\hat{\tilde{\mathbf{x}}}]}. \label{equ : equ_17}\ ] ] the corresponding optimal scaling parameter is given by .\ ] ] we note that here the expectation operation ] , of which the -th segment is given by in the meantime , is a block matrix in which each block is a -dimensional diagonal matrix defined as here ] is taken with respect to and . for the readers better understanding , here we outline a sketch of the proof . for a complete version of this proof , please refer to appendix - a .first , to maximize we need to derive the closed - form expressions of ] . through tedious manipulations ,we obtain =\mathbf{w}^{\dag}\mathbf{g},\ \text{and}\ \mathbb{e}[\hat{\tilde{\mathbf{x}}}^{\dag}\hat{\tilde{\mathbf{x}}}]=\mathbf{w}^{\dag}\mathbf{d}\mathbf{w}.\ ] ] then , in yields a closed - form expression as we notice that it is actually a generalized rayleigh quotient of , and as a result , we conveniently obtain the optimal linear frequency - domain equalizer as given by . in the previous subsection, we derived the optimal linear frequency - domain equalizer .thus we are now ready to examine the performance of the proposed mixed - adc architecture . particularly in this subsection, we focus on two special case studies . for the special case of ,the next corollary gives a comparison between the gmi and the channel capacity .[ cor : cor_1 ] when , yields a simplified expression as on the other hand , the channel capacity ( achieved by mrc over the antennas at each subcarrier ) is given by the proof is given in appendix - b . since is a convex function of positive real number , holds even when .we note that this rate loss is due to the identical choice of the scaling parameter over all the subcarriers in the decoding metric , and that benefiting from the channel hardening effect of massive mimo , such performance loss is expected to be marginal , as will be verified by the numerical study in section [ sect : numerical ] .the next corollary draws some connection between the frequency - flat channel scenario addressed in - and the frequency - selective channel scenario investigated in this paper .[ cor : cor_2 ] when , the analytical results in proposition 2 reduce to those for frequency - flat channels obtained in - .see appendix - c for its proof .the impact of frequency diversity on the system performance will be revealed by numerical studies in section [ sect : numerical ] . in the following ,we first examine the asymptotic behavior of in the low snr regime , and then for the special case of , explore the limit of in the high snr regime .[ cor : cor_3 ] as , for a given adc switch vector we have on the other hand , the channel capacity for approaches the proof is given in appendix - d .comparing and , we can make two observations .first , due to one - bit quantization , part of the achievable data rate is degraded by a factor .second , to achieve the maximum data rate , high - resolution adcs should be switched to the antennas with the maximum or , equivalently . for a general , in the high snr regime is still too complicated to yield any simplification . 
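In such cases the GMI and the optimal equalizer have to be evaluated numerically from the expressions above. The short sketch below illustrates the structural point used in the proof of proposition [ prop : prop_3 ]: the performance indicator is a generalized Rayleigh quotient in the stacked equalizer, so its maximizer is obtained in closed form by solving one linear system rather than by any iterative search. The vector g, the positive-definite matrix d, their dimension, and the final normalization of the indicator are illustrative stand-ins (random data), not the quantities defined in the paper, which depend on the channel, the ADC switch vector and the quantizer statistics.

import numpy as np

rng = np.random.default_rng(0)

def random_psd(n):
    # Random Hermitian positive-definite matrix, standing in for the matrix D in the text.
    a = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return a @ a.conj().T + n * np.eye(n)

def optimal_equalizer(g, d):
    """Maximize the generalized Rayleigh quotient |w^H g|^2 / (w^H D w) over w.

    The maximizer is w = D^{-1} g up to an irrelevant complex scale, and the
    maximum value equals g^H D^{-1} g; this mirrors the closed-form step in the
    proof sketch of the GMI proposition.
    """
    w = np.linalg.solve(d, g)
    return w, float(np.real(g.conj() @ w))

dim = 8                                   # illustrative stacked dimension (antennas x subcarriers)
g = rng.standard_normal(dim) + 1j * rng.standard_normal(dim)
d = random_psd(dim)
w_opt, quot_max = optimal_equalizer(g, d)

# Sanity check: no randomly chosen equalizer beats the closed-form maximizer.
for _ in range(1000):
    w = rng.standard_normal(dim) + 1j * rng.standard_normal(dim)
    quot = abs(w.conj() @ g) ** 2 / np.real(w.conj() @ d @ w)
    assert quot <= quot_max + 1e-9

# Map the indicator into a GMI-style rate; the true normalization in the paper
# involves Q, E_s and the quantizer statistics and is not reproduced here.
delta = quot_max / (quot_max + 1.0)       # assumed normalization, for illustration only
rate = np.log2(1.0 + delta / (1.0 - delta))
print("maximal generalized Rayleigh quotient:", quot_max)
print("GMI-style rate (bits per channel use):", rate)

The closed form removes any search over equalizers, but the resulting GMI expression still does not simplify analytically for a general ADC switch vector.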
in light of this, we derive the limit of in the high snr regime for the special case of .the subsequent corollary gives our result .[ cor : cor_4 ] for the special case of , in the high snr regime is given by the following limit where ^t ] , and replace with ] and further its circulant matrix form by .then , can be decomposed as , where the eigenvalue matrix is given by note that we allow to be different for different users since they may be located in various environments .besides , it is perhaps worth noting that typically , to keep the overhead of cyclic prefix relatively low .we make the ideal assumption that all the users are perfectly synchronized .then , the received ofdm symbol at the -th bs antenna , with user considered , is where summarizes the co - channel interference and gaussian noise experienced by user . noticing that is actually identical for any ,here the superscript is simply for distinction with the single - user scenario .the snr is defined as , and without loss of generality , hereafter we let .then , is quantized by either a pair of high - resolution adcs or a pair of one - bit adcs , and the quantized output is we assume that there are still pairs of high - resolution adcs available at the bs .the quantized output is then transformed to the frequency domain and later processed by user - specific frequency - domain equalization , leading to at the decoder , a generalized nearest - neighbour decoding rule is adopted to decode the -th user s transmit signal .that is , upon observing \}_{l=1}^{l^u} ] is the corresponding codeword for message , and is the codeword length of user .note that in can be different for different users . in the multi - user scenario ,if we take the co - channel interference ( before quantization ) as an additional colored gaussian noise , then there is no essential difference with the single - user scenario , and the resulting equalizer can effectively handle this noise term by exploiting its statistical characteristics .the subsequent proposition summarizes our main results .[ prop : prop_4 ] for given and channel realizations , , the gmi of user is given by the performance indicator and the optimal scaling parameter are and are achieved by the optimal linear frequency - domain equalizer in the above , ^t\in\mathbb{c}^{nq\times 1} ] is the correlation matrix of and , and is also identical for all the users .as aforementioned , there is no fundamental difference between the single - user and the multi - user scenarios .therefore , we only outline the sketch of the proof . for user , the gmi also follows from , except that we need to replace by |^2 } { q\mathcal{e}_s\mathbb{e}[(\hat{\tilde{\mathbf{x}}}^{u})^{\dag}\hat{\tilde{\mathbf{x}}}^u]}.\ ] ] the calculation of ] follows essentially the same line as that in the single - user scenario , except that we replace in with then following the same technical route as appendix - a , we formulate as a generalized rayleigh quotient of , and based on which obtain the optimal linear frequency - domain equalizer as well as the corresponding . in the multi - user scenario , there is no generally convincing adc switch scheme .moreover , the high - dimensional property of the problem calls for efficient adc switch scheme , and time - consuming schemes such as exhaustive search or greedy algorithm are practically infeasible .therefore , we consider the following heuristic schemes .* random adc switch : high - resolution adcs are switched to randomly chosen antennas . 
* norm - based adc switch : switch the high - resolution adcs to the antennas with the maximum or , equivalently .the norm - based adc switch scheme is asymptotically optimal in terms of sum rate in the low snr regime , and achieves slightly better performance than the random adc switch scheme . as the snr or the number of users increases , the performance gain tends to decrease .numerical results will be presented in section [ sect : numerical ] to examine the performance of the norm - based adc switch scheme . meanwhile ,numerical study for the random adc switch scheme is omitted due to space limitation .analogously we extend the analytical results to ergodic time - varying channels .round - robin channel training is performed across the users and the bs antennas .similarly we assume i.i.d .rayleigh fading , i.e. , , for any , , and .the bs also adopts an mmse estimator , and we decompose into where is the estimated channel vector and is the independent error vector . under the rayleigh fading assumption , it is easy to verify that both and are complex gaussian distributed .moreover , if we define the normalized mse as , then and , for any , , and .the lower and upper bounds of the gmi are given by the following proposition , and numerical study will be conducted in section [ sect : numerical ] to verify their tightness . for ergodic time - varying channels with estimated csi , lower and upper bounds of the gmi of user are given by }{1\!-\!\mathbb{e}_{\hat{\mathbf{h}}}[\delta(\mathbf{w}_{\mathrm{opt}}^{u,\mathrm{im}},\bm{\delta})]}\right),\\ i_{\mathrm{gmi}}^{u,\mathrm{u}}&\!\!\!=\!\!\!&\rho\mathbb{e}_{\hat{\mathbf{h}}}\left[\log\left(1\!+\!\frac{\delta(\mathbf{w}_{\mathrm{opt}}^{u,\mathrm{im}},\bm{\delta})}{1\!-\!\delta(\mathbf{w}_{\mathrm{opt}}^{u,\mathrm{im}},\bm{\delta})}\right)\right].\end{aligned}\ ] ] in the above , accounts for the overhead of channel training .moreover , and also come from and , but we need to replace with ] . here , and denote , , , and , , respectively . in practice, the fast fading can be approximated to be piecewise - constant over successive subcarriers since typically , to keep the overhead of cyclic prefix relatively low . within each piecewise - constant frequency interval ,only one pilot symbol is required for each user .therefore we can reduce the channel training overhead by letting different users pilot symbols occupy non - overlapped subcarriers . for more details, please refer to ( * ? ? ?ii - c ) . in this manner, the channel training overhead turns out to be and accordingly . , , , and , the proposed channel training method consumes four ofdm symbol intervals . as a comparison, the channel training in the ideal conventional architecture only needs one ofdm symbol interval and the channel training in lasts for ofdm symbol intervals . ]since the computational complexity of matrix inversion grows cubically with the dimension of the matrix , directly inverting the matrix may impose great computational burden on the receiver . in the following ,we exploit the specific structure of , and propose a reduced - complexity algorithm to efficiently compute . with the proposed algorithm , the computational complexity of can be reduced from to , significantly alleviating the computational burden of the receiver .+ [ prop : complexity ] the block matrix consists of blocks , of which each block is a -dimensional diagonal matrix . 
exploiting this structure , the evaluation of can be simplified by applying , where is a specific permutation matrix that makes a block diagonal matrix .has exactly one element of 1 in each row and each column and 0s elsewhere , the computational complexity of is virtually negligible . ]observing the structure of , we notice that can be transformed into a block diagonal matrix by some row - permutation matrix and some column - permutation matrix .moreover , it is straightforward that , due to the symmetry of .figure [ fig : permutation ] gives an example for , where the zero elements of are left blank and the nonzero entries belonging to the same subcarrier are marked by the same color .since the permutation matrix solely depends on the system parameter , it can be saved offline thus incuring no additional computational burden on the receiver .moreover , the permutation matrix can be found by a simple training program . to this end , we create a training matrix that shares the same structure as ] , and is a -dimensional diagonal matrix for any .particularly , the value of each nonzero element of indicates its column index after performing the column permutation . exploiting these inherent marks we are able to transform into by no more than times of pair - wise column permutations and obtain during the training process . before inverting , evaluating also incurs a computational complexity of . beyond that, the computational burden of and is negligible .therefore , with proposition [ prop : complexity ] , the computational complexity of can be reduced from to , significantly alleviating the computational burden of the receiver .now we corroborate the analytical results through extensive numerical studies .all the figures in this section are for ergodic time - varying channels , and the channel coefficients are drawn from i.i.d .rayleigh fading , i.e. , , for any , , and .the norm - based adc switch scheme is adopted in both single - user and multi - user scenarios .`` ca '' and `` as '' represent ideal conventional architecture and antenna selection , respectively .+ + assuming perfect csi at the bs , figure [ fig : fig_2 ] displays the gmi of the mixed - adc architecture for different numbers of high - resolution adc pairs .several observations are in order .first we notice that the gmi lower and upper bounds are very tight , and thus we will only use the gmi lower bound in the subsequent evaluation . in addition , for the special case of , there is a barely visible gap between the gmi and the capacity , as predicted by corollary [ cor : cor_1 ] .moreover , the mixed - adc architecture with a small proportion of high - resolution adcs does achieve a dominant portion of the capacity of ideal conventional architecture , and significantly outperforms antenna selection with the same number of high - resolution adcs .+ + + + the impact of frequency diversity on the system performance is addressed by figure [ fig : fig_3 ] , where three different choices of are made . 
for each given snr , we notice that there is an intersection between two of the curves .particularly , if lies at the right side of the intersection , a larger would lead to a lower gmi .this may be attributed to the limitation of the linear frequency - domain equalizer in mitigating ici .if lies at the left side of the intersection , on the other hand , a larger would achieve a higher gmi .because in this situation , there are few high - resolution adcs and , hence , frequency diversity becomes crucial for signal recovery at the receiver . by letting and varying the snr , figure [ fig : fig_4 ] gives a closer look at the impact of frequency diversity .the dashed lines correspond to the limits of the gmi in the high snr regime , as given by corollary [ cor : cor_4 ] .first we notice that , for each given , the gmi will increase first and then turn down as the snr grows large .such a phenomenon has been observed in frequency - flat one - bit massive simo systems , e.g. , - and . as explained in the aforementioned works , for one - bit massive simo, the amplitude information of the transmit signal tends to be totally lost as the snr approaches infinity ; see also .that is , in this situation a moderate amount of noise is actually beneficial for signal recovery at the receiver .further we notice that , over the entire snr regime , a larger will always achieve a higher gmi . because for one - bit massive simo , frequency diversity is a key factor that enables signal recovery at the receiver .figure [ fig : fig_5 ] examines the impact of imperfect csi on the system performance , assuming and .ghz , with the ofdm symbol interval being 71.4 and the users speed being km/h . as a comparison , the authors of assume a coherence interval as long as 1142 symbols . ]we observe that the performance gap between the mixed - adc architecture and the ideal conventional architecture is slightly enlarged , mainly due to the increase of the channel training overhead .nevertheless , the conclusions we made for the perfect csi case still hold here .several observations are in order from figure [ fig : fig_7 ] .first , we notice that the gmi lower and upper bounds again virtually coincide with each other .second , due to not applying successive interference cancellation ( sic ) at the receiver , there is a visible but marginal performance loss at when compared with the per - user capacity of the ideal conventional architecture .finally , the mixed - adc architecture with a small proportion of high - resolution adcs still achieves a large portion of the per - user capacity of the ideal conventional architecture , while overwhelms antenna selection with the same number of high - resolution adcs .such observations also hold for the imperfect csi case , as verified by figure [ fig : fig_8 ] .+ + we further examine the ber performance of the mixed - adc architecture in the multi - user scenario . to this end, we assume that each user adopts an independent convolutional coder , where the code rate is , the constraint length is , and the generator polynomials are and .16-qam modulation with gray mapping is adopted to map the coded bits into system input .is with respect to gaussian distributed channel inputs , it is mismatched with 16-qam modulation and is therefore suboptimal in this situation .nevertheless , benefiting from the central - limit theorem , the actual channel inputs , , may be approximately viewed as gaussian as grows large . 
as a result , the performance loss due to this mismatch is expected to be marginal . ] in this manner , two information bits are first encoded into four codeword bits and then mapped into a 16-qam symbol . as a result , under this setup equals .hard - decision viterbi decoding is performed at the bs , also in a per - user manner .numerical result is presented in figure [ fig : fig_9 ] , where is the number of -bit adc pairs . the quantization bins and output levels of -bit adcare given by ( * ? ? ?we notice that one - bit massive mimo suffers from error floor , as already revealed in - .the mixed - adc architecture , on the other hand , remarkably improves the ber performance .performance loss due to replacing high - resolution adcs by 5-bit adcs is also examined , still using the equalizer derived in proposition [ prop : prop_4 ] .such a mismatched equalizer entails relatively low computational complexity , and incurs marginal ber loss as verified by figure [ fig : fig_9 ] .these observations again validate the merits of the mixed - adc architecture .we note that , beyond the mixed - adc architecture specialized in this paper , the gmi analytical framework established is also applicable to any other adc configuration . for any other kind of adc configuration , calculation ofthe gmi still follows from the general idea of proposition [ prop : prop_4 ] , and only and will change along with the adc configuration .moreover , always has a closed - form expression . has a closed - form expression if the bs adopts one - bit or high - resolution adcs , otherwise we have to rely on numerical integrations to accurately evaluate .figure [ fig : fig_10 ] displays the per - user gmi under different adc configurations , assuming perfect csi at the bs .unlike figure [ fig : fig_9 ] , here each equalizer is matched with the corresponding adc configuration .it is clear that one - bit massive mimo generally has to tolerate large rate losses for target spectral efficiency ( tse ) above 2 bits / s / hz .four - bit massive mimo , on the other hand , only incurs marginal rate losses for tse below 7 bits / s / hz . comparison between the homogeneous - adc architecture and the mixed - adc architecture is also conducted , taking and , as an example .note that hardware costs of these two configurations are close .figure [ fig : fig_10 ] reveals that these two configurations achieve nearly the same performance for tse below 6 bits / s / hz and the mixed - adc architecture performs better for tse above 6 bits / s / hz . a comprehensive comparison between the homogeneous - adc architecture and the mixed - adc architecture is left for future work due to space limitation .in this paper , we developed an analytical framework for the mixed - adc architecture operating over frequency - selective channels .notably , the analytical framework is also applicable to any other kind of adc configuration .extensive numerical studies demonstrate that the mixed - adc architecture is able to achieve performance close to the ideal conventional architecture , and thus we envision it as a promising option for effective design of massive mimo receivers . 
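As a concrete companion to the reduced-complexity result of proposition [ prop : complexity ], the sketch below builds a matrix with the stated structure (an N x N array of blocks, each block a Q x Q diagonal matrix), permutes it into a block-diagonal matrix with Q blocks of size N x N, inverts it block by block, and checks the result against direct inversion. The antenna-major index ordering, the construction of the permutation directly from that ordering (rather than by the training procedure described in the text), and the matrix sizes are assumptions made for illustration.

import numpy as np

rng = np.random.default_rng(1)
N, Q = 6, 4                       # illustrative numbers of antennas and subcarriers

# Hermitian positive-definite matrix with the structure described in the text:
# an N x N array of blocks, each block being a Q x Q diagonal matrix.
c = np.zeros((N * Q, N * Q), dtype=complex)
for q in range(Q):
    a = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
    s = a @ a.conj().T + N * np.eye(N)          # one N x N "per-subcarrier" matrix
    for n in range(N):
        for m in range(N):
            c[n * Q + q, m * Q + q] = s[n, m]

# Permutation regrouping entries by subcarrier: antenna-major index n*Q + q
# is sent to subcarrier-major position q*N + n.
perm = np.array([n * Q + q for q in range(Q) for n in range(N)])
c_perm = c[np.ix_(perm, perm)]                  # block diagonal: Q blocks of size N x N

# Invert block by block: O(Q * N^3) instead of O((N*Q)^3).
inv_blocks = np.zeros_like(c_perm)
for q in range(Q):
    sl = slice(q * N, (q + 1) * N)
    inv_blocks[sl, sl] = np.linalg.inv(c_perm[sl, sl])

# Undo the permutation and compare with the direct inverse.
unperm = np.empty_like(perm)
unperm[perm] = np.arange(N * Q)
c_inv_fast = inv_blocks[np.ix_(unperm, unperm)]
assert np.allclose(c_inv_fast, np.linalg.inv(c))
print("block-wise inversion after permutation matches the direct inverse")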
beyond the scope of this paper , several important problems need further investigation .first , for a given tse , optimization of the bit - width and ratio of each kind of adc adopted will further reduce the hardware cost and energy consumption of the mixed - adc architecture .second , more efficient channel estimation algorithm and more effective adc switch scheme will further improve the performance of the mixed - adc architecture , especially in the multi - user scenario .third , another line of work advocates the homogeneous - adc architecture for energy - efficient design of massive mimo , and therefore a reasonable and comprehensive comparison between the mixed - adc architecture and the homogeneous - adc architecture is particularly important , especially when taking practical issues such as time / frequency synchronization and channel estimation into account .[ lem : lem_1 ] for bivariate circularly - symmetric complex gaussian vector we have =\sqrt{\frac{2}{\pi}}\frac{\sigma_{12}^{\dag}}{\sigma_1 } , \label{equ : lemma_1}\ ] ] and =\frac{2}{\pi}[\arcsin(\theta_{\mathrm{r}})+j\arcsin(\theta_{\mathrm{i } } ) ] , \label{equ : lemma_2}\ ] ] where and are the real and imaginary parts of the correlation coefficient respectively , and is defined as ] , with some manipulation we have \nonumber\\ & \!\!\!=\!\!\!&\sum_{n\in\mathbb{n}}\mathbb{e}[\mathbf{r}_n^{\dag}\mathbf{f}^{\dag}\mathbf{w}_n^{\dag}\tilde{\mathbf{x}}]\nonumber\\ & \!\!\!=\!\!\!&\sum_{n\in\mathbb{n}_{\mathrm{1}}}\!\!\mathbb{e}[\mathbf{y}_n^{\dag}\mathbf{f}^{\dag}\mathbf{w}_n^{\dag}\tilde{\mathbf{x } } ] \!+\!\sum_{n\in\mathbb{n}_{\mathrm{0}}}\!\!\mathbb{e}[\mathrm{sgn}^{\dag}(\mathbf{y}_n)\mathbf{f}^{\dag}\mathbf{w}_n^{\dag}\mathbf{f}\mathbf{x } ] , \label{equ : equ_1}\end{aligned}\ ] ] where the first term is contributed by the antennas connected with high - resolution adcs , and the second term comes from the antennas connected with one - bit adcs . in the following ,we need to evaluate them separately .first let us look at ] . to start with ,we define ^t ] , given as = \sum_{n\in\mathbb{n}_{\mathrm{1}}}\mathcal{e}_{\mathrm{s}}\mathbf{w}_n^{\dag}\bm{\lambda}_n^ * + \sum_{n\in\mathbb{n}_{\mathrm{0}}}\sqrt{\frac{2}{\pi}}\frac{\mathcal{e}_{\mathrm{s}}\mathbf{w}_n^{\dag}\bm{\lambda}_n^*}{\sqrt{1+\mathcal{e}_{\mathrm{s}}\|\bm{\lambda}_n\|^2/q}}. \label{equ : equ_4}\ ] ] for the convenience of further investigation , we define ^t ] is given by in order to evaluate ]. then we have & = & \sum_{m=1}^n\sum_{n=1}^n\mathbb{e}[\mathbf{r}_m^{\dag}\mathbf{f}^{\dag}\mathbf{w}_m^{\dag}\mathbf{w}_n\mathbf{f}\mathbf{r}_n]\nonumber\\ & = & \sum_{m=1}^n\sum_{n=1}^n\mathrm{tr}\left(\mathbf{w}_m^{\dag}\mathbf{w}_n\mathbf{f}\mathbb{e}[\mathbf{r}_n\mathbf{r}_m^{\dag}]\mathbf{f}^{\dag}\right)\nonumber\\ & = & \sum_{m=1}^n\sum_{n=1}^n\mathrm{tr}\left(\mathbf{w}_m^{\dag}\mathbf{w}_n\mathbf{f}\mathbf{r}_{nm}\mathbf{f}^{\dag}\right).\end{aligned}\ ] ] noticing that are all diagonal matrices , to get rid of the trace operation , we may define a diagonal matrix as and based on which rewrite ] , and it is easy to verify that then , we introduce a series of matrices , , of which corresponds to the correlation coefficient matrix between and , with its -th element given by due to the mixed nature of , the computation of for different may follow different routes , and therefore in the following , we need to evaluate them case by case. case 2 : . in this case , and . 
exploiting we get \nonumber\\ & \!\!\!=\!\!\!&\frac{2}{\pi}[\arcsin((\bm{\theta}_{nm})_{pq,\mathrm{r}})+j\arcsin((\bm{\theta}_{nm})_{pq,\mathrm{i}})].\nonumber\\ \label{equ : equ_11}\end{aligned}\ ] ] in summary , we enumerate for different kinds of in the above . combining them with, we are able to obtain the matrix and further evaluate according to .now we conclude the proof . when , we have , where is defined as ^t ] , we have in this situation given as then , exploiting woodbury formula , we get its inversion as follows we notice that is in fact a diagonal matrix ; that is further , it is easy to verify that satisfies ^t.\ ] ] then with all the above results and some further manipulations , we arrive at and now , it is straightforward to verify .when , we simply use to denote the channel coefficient corresponding to the -th bs antenna .in this situation , the circulant matrix reduces to a scaled identity matrix and the diagonal matrix turns out to be . as a result , in becomes \mathbf{1}_q.\ ] ] if we let collect the coefficients before , i.e. , ,\ ] ] then we have where denotes right kronecker product . as for the matrix , with patient examinationwe find out that each of its blocks , , is also a scaled identity matrix , for any .then letting collect the scaling factors before , we have in which is given as ( with proof omitted ) +\\ \bar{\delta}_n\bar{\delta}_m\!\cdot\!\frac{2}{\pi}\bigg [ \mathrm{arcsin}\big(\frac{(h_n^*h_m)_{\mathrm{r}}\mathcal{e}_\mathrm{s } } { \sqrt{|h_n|^2\mathcal{e}_\mathrm{s}+1}\sqrt{|h_m|^2\mathcal{e}_\mathrm{s}+1}}\big)+\\ \ \ \ \ \ \ \ \ \ \ j\mathrm{arcsin}\big(\frac{(h_n^*h_m)_{\mathrm{i}}\mathcal{e}_\mathrm{s } } { \sqrt{|h_n|^2\mathcal{e}_\mathrm{s}+1}\sqrt{|h_m|^2\mathcal{e}_\mathrm{s}+1}}\big ) \bigg ] , & \text{if}\ n\neq m. \end{cases}\ ] ] comparing and with ( * ? ? ?( 13 ) ( 14 ) ) , we notice that they are virtually the same except for some little differences due to the different scaling parameters of .we proceed by evaluating in this situation ; that is then we immediately find out that it is the same as that we obtained for frequency - flat simo channels in ( * ? ? ?3 ) , and thus conclude the proof . to simplify the invertible block matrix , we need to examine each of its blocks .first let us look at an arbitrary nondiagonal block , i.e. , with . from , we observe that , for any , and on the other hand , . as a result , for always approaches a zero matrix no matter which case it falls into , and consequently tends to be a zero matrix as well , since the unitary transformation does not change the frobenius norm of a matrix . in summary , for the diagonal blocks , if , we have , and from it is obvious that . in other word , we have , for any . if , on the other hand , from and we obtain . then , applying it is straightforward to verify that .again , we have , for any . in summary , with all the above results , we have where ( a ) is obtained by applying the algebraic limit theorem since the limits of and exist , and ( b ) comes from the fact that the inverse of a nonsingular matrix is a continuous function of the elements of the matrix , i.e. , . noting that , as , we immediately have .when and as grows without bound , we have for expositional concision , we denote the normalization of by , and accordingly define ^t$ ] .then in this situation approaches which is independent of . 
on the other hand ,as tends to infinity we have which is independent of as well .since when and as , is given by the combination of , and , we conclude that exists for any .if we define , then also exists and is independent of . as a result , we have where ( a ) is obtained by applying the algebraic limit theorem since both and exist , and ( b ) comes from the fact that the inverse of a nonsingular matrix is a continuous function of the elements of the matrix , i.e. , .e. bjrnson , j. hoydis , m. kountouris , and m. debbah , massive mimo systems with non - ideal hardware : energy efficiency , estimation , and capacity limits , " _ ieee trans .inf . theory _ ,7112 - 7139 , 2014 .e. bjrnson , m. matthaiou , and m. debbah , massive mimo with non - ideal arbitrary arrays : hardware scaling laws and circuit - aware design , " _ ieee trans .wireless commun .14 , no . 8 , pp . 4353 - 4368 , 2015 .u. gustavsson , c. sanchz - perez , t. eriksson , f. athley , g. durisi , p. landin , k. hausmair , c. fager , and l. svensson , on the impact of hardware impairments on massive mimo , " in _ proc .ieee global commun .( globecom ) workshops _ , 2014 .l. fan , d. qiao , s. jin , c. -k .wen , and m. matthaiou , `` optimal pilot length for uplink massive mimo systems with low - resolution adc , '' in _ proc .ieee sensor array and multichannel signal processing workshop ( sam ) _ , 2016 .d. verenzuela , e. bjrnson , and m. matthaiou , `` hardware design and optimal adc resolution for uplink massive mimo systems , '' in _ proc .ieee sensor array and multichannel signal processing workshop ( sam ) _ , 2016 .j. choi , j. mo , and r. heath . near maximum - likelihood detector and channel estimator for uplink multiuser massive mimo systems with one - bit adcs , " _ ieee trans .2005 - 2018 , 2016 .wen , c .- j .wang , s. jin , k .- k .wong , and p. ting , bayes - optimal joint channel - and - data estimation for massive mimo with low - precision adcs , " _ ieee trans .signal processing _2541 - 2556 , 2016 .s. sezginer and p. bianchi , asymptotically efficient reduced complexity frequency offset and channel estimators for uplink mimo - ofdma systems , " _ ieee trans .signal processing _ , vol .964 - 979 , 2008 .m. vehkaper , t. riihonen , m. girnyk , e. bjrnson , m. debbah , l. k. rasmussen , and r. wichman , asymptotic analysis of su - mimo channels with transmitter noise and mismatched joint decoding , " _ ieee trans63 , no . 3 , 749 - 765 , 2015
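As a numerical aside, the two identities of lemma [ lem : lem_1 ] are easy to check by Monte Carlo simulation. The sketch below does so for a correlated pair of circularly-symmetric complex Gaussians; the one-bit output is normalized to unit modulus, which is the convention under which the constants sqrt(2/pi) and 2/pi quoted in the lemma come out exactly, and the mixing coefficient is an arbitrary illustrative choice.

import numpy as np

rng = np.random.default_rng(3)
n = 1_000_000

def cscg(size):
    # Unit-variance circularly-symmetric complex Gaussian samples.
    return (rng.standard_normal(size) + 1j * rng.standard_normal(size)) / np.sqrt(2)

w1, w2 = cscg(n), cscg(n)
a = 0.6 + 0.3j                                 # illustrative cross-correlation E[y x*]
x = w1
y = a * w1 + np.sqrt(1 - abs(a) ** 2) * w2     # unit variance by construction

def sgn(z):
    # One-bit ADC output, normalized to unit modulus (assumed convention).
    return (np.sign(z.real) + 1j * np.sign(z.imag)) / np.sqrt(2)

# Bussgang-type identity:  E[sgn(y) x*] = sqrt(2/pi) * E[y x*] / sigma_y.
print(np.mean(sgn(y) * np.conj(x)), np.sqrt(2 / np.pi) * a)

# Arcsine law:  E[sgn(y) sgn(x)*] = (2/pi) * (arcsin(rho_R) + 1j * arcsin(rho_I)),
# with rho = E[y x*] / (sigma_y * sigma_x) = a here.
print(np.mean(sgn(y) * np.conj(sgn(x))),
      (2 / np.pi) * (np.arcsin(a.real) + 1j * np.arcsin(a.imag)))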
|
The aim of this paper is to investigate the recently developed mixed-ADC architecture for frequency-selective channels. Multi-carrier techniques such as orthogonal frequency division multiplexing (OFDM) are employed to handle inter-symbol interference (ISI). A frequency-domain equalizer is designed to mitigate the inter-carrier interference (ICI) introduced by the nonlinearity of one-bit quantization. For static single-input-multiple-output (SIMO) channels, a closed-form expression of the generalized mutual information (GMI) is derived, based on which the linear frequency-domain equalizer is optimized. The analysis is then extended to ergodic time-varying SIMO channels with estimated channel state information (CSI), where numerically tight lower and upper bounds of the GMI are derived. The analytical framework is naturally applicable to the multi-user scenario, for both static and time-varying channels. Extensive numerical studies reveal that the mixed-ADC architecture with a small proportion of high-resolution ADCs achieves a dominant portion of the achievable rate of the ideal conventional architecture, and that it remarkably improves the performance as compared with one-bit massive MIMO. Index terms: analog-to-digital converter (ADC), frequency-selective fading, generalized mutual information, inter-carrier interference, linear frequency-domain equalization, massive multiple-input-multiple-output (MIMO), mixed-ADC architecture, orthogonal frequency division multiplexing (OFDM).
|
a _ tiling substitution rule _ is a rule that can be used to construct infinite tilings of using a finite number of tile types .the rule tells us how to substitute " each tile type by a finite configuration of tiles in a way that can be repeated , growing ever larger pieces of tiling at each stage . in the limit, an infinite tiling of is obtained . in this paperwe take the perspective that there are two major classes of tiling substitution rules : those based on a linear expansion map and those relying instead upon a sort of concatenation " of tiles . the first class , which we call _ geometric tiling substitutions _, includes _ self - similar tilings _ ,of which there are several well - known examples including the penrose tilings . in this class a tile is substituted by a configuration of tiles that is a linear expansion of itself , and this geometric rigidity has permitted quite a bit of research to be done .we will note some of the fundamental results , directing the reader to appropriate references for more detail . the second class , which we call _ combinatorial tiling substitutions _, is sufficiently new that it lacks even an agreed - upon definition . in this classthe substitution rule replaces a tile by some configuration of tiles that may not bear any geometric resemblance to the original .the difficulty with such a rule comes when one wishes to iterate it : we need to be sure that the substitution can be applied repeatedly so that all the tiles fit together without gaps or overlaps .the examples we provide are much less well - known ( in some cases new ) and are ripe for further study .the two classes are related in a subtle and interesting way that is not yet well understood .the study of aperiodic tilings in general , and substitution tilings specifically , comes from the confluence of several discoveries and lines of research .interest in the subject from a philosophical viewpoint came to the forefront when wang asked about the decidability of the tiling problem " : whether a given set of prototiles can form an infinite tiling of the plane .he tied this answer to the existence of aperiodic prototile sets " : finite sets of tiles that can tile the plane , but only nonperiodically .he saw that the problem is undecidable if an aperiodic prototile set exists .berger was the first to find an aperiodic prototile set and was followed by many others , including penrose .it turned out that one way prove a prototile set is aperiodic involves showing that every tiling formed by the prototile set is self - similar .independently , work was proceeding on one - dimensional symbolic substitution systems , a combination of dynamical systems and theoretical computer science .symbolic dynamical systems had become of interest due to their utility in coding more complex dynamical systems , and great progress was being made in our understanding of these systems .queffelec summarized what was known about the ergodic and spectral theory of substitution systems , while a more recent survey of the state of the art appears in .substitution tilings can be seen as a natural extension of this branch of dynamical systems ; insight and proof techniques can often be borrowed for use in the tiling situation .we will use symbolic substitutions motivate our study in the next section . from the world of physics ,a major breakthrough was made in 1984 by schechtman et . with the discovery of a metal alloy that , by rights , should have crystalline since its x - ray spectrum was diffractive . 
however , the diffraction pattern had five - fold rotational symmetry , which is not allowed for ideal crystals !this type of matter has been termed quasicrystalline " , and self - similar tilings like the penrose tiling , having the right combination of aperiodicity and long - range order , were immediately recognized as valid mathematical models .dynamical systems entered the picture , and it was realized that the spectrum of a tiling dynamical system is closely related to the diffraction spectrum of the solid it models .thus we find several points of departure for the study of substitution tilings and their dynamical systems .let be a finite set called an _ alphabet _ , whose elements are called _letters_. then , the set of all finite _ words _ with elements from , forms a semigroup under concatenation .a _ symbolic substitution _ is any map .a symbolic substitution can be applied to words in by concatenating the substitutions of the individual letters .a block of the form will be called a _ level- block of type . [ const1d ] let and let and . beginning with the letter we get where we ve added spaces to emphasize the breaks between substituted blocks .notice that the block lengths triple when substituted .[ nonconst1d ] again let ; this time let and .if we begin with we get : note that in this example block lengths are 1 , 2 , 3 , 5 , 8 , 13 , ... , and the reader can verify that they will continue growing as fibonacci numbers .these examples illustrate the major distinction we make between substitutions . in the first example, the length of a substituted letter is always 3 and thus the size of any level- block must be ; this is a _ substitution of constant length_. in the second example the size of a substituted letter depends on the letter itself , and the size of a level- block is computed recursively ; this is a _ substitution of non - constant length_. this is the essence of the distinction between geometric and combinatorial tiling substitutions .it is interesting to consider infinite sequences of the form in .such a sequence is said to be _ admitted _ by the substitution if every finite block of letters is contained in some level- block . 
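Both of the preceding examples can be reproduced with a few lines of code, which also makes the growth of the block lengths easy to check. In the sketch below the Fibonacci rule a -> ab, b -> a matches the block lengths 1, 2, 3, 5, 8, 13, ... quoted in example [ nonconst1d ]; the particular length-3 rule standing in for example [ const1d ] is only an illustrative choice of a constant-length substitution.

def substitute(word, rule):
    # Apply a symbolic substitution letter by letter and concatenate the images.
    return "".join(rule[letter] for letter in word)

def level_block(letter, rule, n):
    # The level-n block of type `letter`: substitute a single letter n times.
    word = letter
    for _ in range(n):
        word = substitute(word, rule)
    return word

constant_rule = {"a": "aba", "b": "bbb"}   # a constant-length (length 3) rule, chosen for illustration
fibonacci_rule = {"a": "ab", "b": "a"}     # the Fibonacci substitution of non-constant length

for n in range(6):
    print(n,
          len(level_block("a", constant_rule, n)),    # 3^n
          len(level_block("a", fibonacci_rule, n)))   # 1, 2, 3, 5, 8, 13, ...
print(level_block("a", fibonacci_rule, 5))            # abaababaabaab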
in the theory of dynamical systems , the space of all sequences admitted by the substitution is studied using the shift action ( basically , moving the decimal point one unit to the right ) .an interested reader should see to find out more .the most straightforward generalization to tilings of ( or ) is to use unit square tiles labeled ( colored ) by the alphabet .these tilings can be considered as sequences in , and substitutions can take letters to square or rectangular blocks of letters .we only need to ensure that all of the blocks fit " to form a sequence without gaps or overlaps .the constant length case is to expand each colored tile by some integer and then subdivide into ( or ) colored unit squares .a simple method for the non - constant length case is to take the direct product of one - dimensional substitutions of non - constant length .[ const2d ] let , where we represent as a white unit square tile and as a blue unit square tile .suppose the length expansion is 3 and that the tiles are substituted by a three - by - three array of tiles , colored as in figure [ square1 ] .starting with the blue level-0 tile , level-0 , level-1 , level-2 and level-3 tiles are shown in figure [ square2 ] .one sees in this example the tiling version of the rule creating the sierpinski carpet .[ nonconst2d ] this time , let the alphabet be ; for simplicity of notation we put .the direct product of the fibonacci substitution of example [ nonconst1d ] with itself is shown in figure [ dp1 ] . using only colors without the numbers we show the level-0 through level-4 blocks of type in figure [ dp2 ] .the characteristic plaid " appearance of the direct product is evident .some literature on -dimensional symbolic substitutions exists . in the non - constant length case ,direct product substitutions , with a generalization allowing randomness in the choice of substitution from level to level , are studied in .an extension of this idea , allowing substitutions with restrictions forcing the substitutions to fit " , are studied in . in the constant - length case, a partial survey and spectral analysis of this class from the dynamical systems viewpoint appears in . for those wishing to experiment with various substitutions of both constant and non - constant length, the author maintains a matlab freeware computer program that allows the user to generate these tilings of and manipulate them in several ways .let us introduce some terminology that will be useful throughout the paper .tile _ is a set that is the closure of its interior .we will always assume that tiles are bounded ; in the literature it is frequently assumed that tiles are connected or even homeomorphic to topological balls .in fact it is often required that the tiles be polygonal , but in substitution tiling theory tiles with fractal boundary occur naturally .when it is desirable to distinguish between congruent tiles they can be _ labeled _ ( also called _ marked _ or _ colored _ ) .two tiles are considered _ equivalent _ if they differ by a rigid motion and carry the same label .prototile set _ is a finite set of inequivalent tiles .given a prototile set , a _ tiling _ of is a set of tiles , each equivalent to a tile from , such that 1. covers : , and 2 . 
packs : distinct tiles have non - intersecting interiors .a _ -patch _ is a finite union of tiles with nonintersecting interiors covering a connected set ; two patches are equivalent if there is a rigid motion between them that matches up equivalent tiles .a tiling is said to be of _ finite local complexity ( flc ) _( also known as having a _ finite number of local patterns _ ) if there are only finitely many two - tile -patches up to equivalence .a tiling is called _ repetitive _ ( also called _ almost periodic _ or the _ local isomorphism property _ ) if for any -patch there is an such that in every ball of radius there is a patch equivalent to . in dynamical systemstheory the most work has been done on repetitive tilings with finite local complexity . given a tiling substitution ,it is possible to construct infinite tilings and tiling spaces from that substitution in a few different ways .( this is also true for symbolic substitutions ). our description will be necessarily imprecise as different substitutions can require different definitions of some of the terms ; we give the main ideas here and refer the reader to sources such as , and to get more details . one way to getan infinite tiling is to begin with some initial block or tile and substitute ad infinitum . in many cases a limiting sequence or tiling exist .sometimes it will cover only a half - line , quarter - plane , or some other unbounded region of space , and sometimes it will cover the entire line or plane .a less constructive method is to define a tiling as _ admitted _ by the substitution if every finite configuration of tiles in is equivalent to a configuration found inside a level- tile , for some .the _ tiling space _ associated to a substitution is the set of all tilings admitted by that substitution .another way to obtain this space is to take the closure ( in a suitable metric ) of all rigid motions of a limiting tiling . in either case , a point in the tiling space is an infinite tiling , and any nontrivial rigid motion of that tiling is considered a different point in the tiling space .substitutions of constant length have a natural generalization to tilings in higher dimensions , which we introduce in section [ geom.section ] .these generalizations , which include the well - studied self - similar tilings , rely upon the use of linear expansion maps and are therefore rigidly geometric .we present examples in varying degrees of generality and include a selection of the major results in the field .extending substitutions of non - constant length to higher dimensions seems to be more difficult , and is the topic of section [ comb.section ] . to even definewhat this class contains has been problematic and there is not yet a consensus on the subject . for lack of existing terminologywe have decided to call this type of substitution combinatorial as tiles are combined to create the substitutions without any geometric restriction save that they can be iterated without gaps or overlaps , and because in certain cases it is possible to define them in terms of their graph - theoretic structure . inmany cases one can transform combinatorial tiling substitutions into geometric ones through a limit process . 
in section [ connections.section ] , we will discuss how to do this and what the effects are to the extent that they are known .we conclude the paper by discussing several of the different ways substitution tilings can be studied , and what sorts of questions are of interest .although the idea had been around for several years , self - similar tilings of the plane were given a formal definition and introduced to the wider public by thurston in a series of four ams colloquium lectures , with lecture notes appearing thereafter . throughout the literature one finds varying degrees of generality and some commonly used restrictions .we make an effort to give precise definitions here , adding remarks which point out some of the differences in usage and in terminology .for the moment we assume that the only rigid motions allowed for equivalence of tiles are translations ; this follows and .we give the definitions as they appear in , which includes that of as a special case .let be a linear transformation , diagonalizable over , that is expanding in the sense that all of its eigenvalues are greater than one in modulus .a tiling is called _ -subdividing _ if 1 . for each tile , is a union of -tiles , and 2 . and are equivalent tiles if and only if and form equivalent patches of tiles in . a tiling will be called _ self - affine with expansion map _ if it is -subdividing , repetitive , and has finite local complexity .if is a similarity the tiling will be called _ self - similar_. for self - similar tilings of or there is an _ expansion constant _ for which .the rule taking to the union of tiles in is called an _ inflate - and - subdivide rule _ because it inflates using the expanding map and then decomposes the image into the union of tiles on the original scale .if is -subdividing , then it will be invariant under this rule , therefore we show the inflate - and - subdivide rule rather than the tiling itself .the rule given in figure [ square1 ] is an inflate - and - subdivide rule with .however , the rule given in figure [ dp1 ] is not an inflate - and - subdivide rule . [ chair ] the l - triomino " or chair " substitution uses four prototiles , each being an l formed by three unit squares .we have chosen to color the prototiles since they are inequivalent up to translation .the expansion map is and in figure [ chair1 ] we show the substitution of the four prototiles .this geometric substitution can be iterated simply by repeated application of followed by the appropriate subdivision .parallel to the symbolic case , we call a tile that has been inflated and subdivided times a _level- tile_. in figure [ chair2 ] we show level- tiles for and .one of the earliest results was a characterization of the expansion constant of a self - similar tiling of .( thurston , kenyon ) a complex number is the expansion constant for some self - similar tiling if and only if is an algebraic integer which is strictly larger than all its galois conjugates other than its complex conjugate .the forward direction was proved by thurston and the reverse direction by kenyon . in ,kenyon extends the result to self - affine tilings of in terms of eigenvalues of the expansion map . 
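The chair substitution of example [ chair ] is also easy to iterate on a computer. In the sketch below a chair is recorded by the center of its 2 x 2 bounding square together with a vector pointing at the missing unit square; the subdivision used (a same-oriented chair in the corner, one in the middle, and two rotated chairs filling the arms) is one standard convention for the chair rule and may be labeled differently from figure [ chair1 ]. The check at the end confirms that each level-n tile consists of 4^n chairs covering 3 * 4^n distinct unit squares, that is, the rule really subdivides the inflated tile without gaps or overlaps.

def rot(v):                      # rotation by 90 degrees counterclockwise
    return (-v[1], v[0])

def rot_inv(v):                  # rotation by 90 degrees clockwise
    return (v[1], -v[0])

def substitute_chair(c2, v2):
    # Chairs are stored in doubled coordinates: c2 = 2 * (center of the 2x2 bounding
    # square), v2 = 2 * (vector from the center to the missing unit cell), so v2 is
    # one of (+-1, +-1).  Inflation by the expansion map 2*Id followed by subdivision
    # produces four chairs.
    r, ri = rot(v2), rot_inv(v2)
    d = (2 * c2[0], 2 * c2[1])
    return [(d, v2),
            ((d[0] - 2 * v2[0], d[1] - 2 * v2[1]), v2),
            ((d[0] + 2 * r[0], d[1] + 2 * r[1]), ri),
            ((d[0] + 2 * ri[0], d[1] + 2 * ri[1]), r)]

def level_n_tile(n):
    tiles = [((2, 2), (1, 1))]   # level-0 chair with bounding square [0,2]^2, missing the NE cell
    for _ in range(n):
        tiles = [child for t in tiles for child in substitute_chair(*t)]
    return tiles

def unit_cells(c2, v2):
    # The three unit cells of a chair, as doubled cell-center coordinates.
    return [(c2[0] + sx, c2[1] + sy)
            for sx in (-1, 1) for sy in (-1, 1) if (sx, sy) != v2]

for n in range(5):
    tiles = level_n_tile(n)
    cells = [cell for t in tiles for cell in unit_cells(*t)]
    assert len(cells) == len(set(cells)) == 3 * 4 ** n
    print("level", n, ":", len(tiles), "chairs,", len(cells), "unit squares, no overlaps")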
in the study of substitutions , from one- dimensional symbolic substitutions to very general tiling substitutions , the _ substitution matrix _ is an indispensable tool .( this matrix has also been called the transition " , composition " , subdivision " , or even abelianization " matrix ) .suppose that the prototile set ( or alphabet ) has elements labeled by .the _ substitution matrix _ is the matrix with entries given by for example , the substitution in example [ const2d ] has substitution matrix when we label and .if an initial configuration of tiles has white tiles and blue tiles , then ^t ] .this matrix has eigenvalues given by : the golden mean and its conjugate .let and denote the associated left eigenvectors .there are constants and so that , which gives us the vector of level- tile lengths : the lengths of the intervals for our self - similar tiling are the entries of .the length of the type- tile is and the length of the type- tile is .these lengths form an eigenvector for , so there exists an inflate - and - subdivide rule , which we have shown in figure [ fib.subs ] .* notes : * ( 1 ) this process works on any substitution on letters provided that the vector lies in the span of the left eigenvectors of the substitution matrix of the substitution .it works trivially on constant length substitutions since , the vector of unit tile lengths , already forms a perron eigenvector for the substitution matrix .\(2 ) in the fibonacci example , since is a pisot number ( its conjugate is smaller than one in modulus ) , the higher the inflation the less important the second term of equation [ fib.length.eqn ] becomes .thus the lengths of the level- tiles of the inflate - and - subdivide rule are asymptotically close to the lengths of the non - constant length substitution , and therefore approximately integers !the situation is dramatically different , of course , if any of the secondary eigenvalues are strictly greater than one in modulus ( the strongly non - pisot case ) .the reader should not be too surprised to discover that this process will work for direct product substitutions , such as example [ nonconst2d ] , and their variations , such as examples [ fib.dpv ] and [ np.dpv ] .the level- blocks are rectangular and have side lengths given by the lengths of the one - dimensional substitution .rescaling by the expansion factor gives us rectangular tiles whose side lengths are determined by the perron eigenvector as before .thus , if the original one - dimensional substitutions have a proper inflate - and - subdivide rule , so will any dpvs associated with them . still , it is instructive to consider the two - dimensional replace - and - rescale method as it applies in this simple case .[ fib.sst ] consider the fibonacci dpv substitution of example [ fib.dpv ] .there are four tile types and the substitution matrix is . ] . 
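Before working through this example, the eigenvector computation described above can be illustrated numerically for the one-dimensional Fibonacci substitution. The sketch below builds the substitution matrix with the convention that column j counts the letters occurring in the image of the j-th letter (conventions in the literature differ by a transpose, so this choice is an assumption); its Perron eigenvalue is the golden mean, and a left Perron eigenvector gives the natural tile lengths, normalized here so that the type-b tile has length 1.

import numpy as np

def substitution_matrix(rule, alphabet):
    # m[i, j] counts how many letters alphabet[i] occur in the image of alphabet[j].
    m = np.zeros((len(alphabet), len(alphabet)))
    for j, b in enumerate(alphabet):
        for i, a in enumerate(alphabet):
            m[i, j] = rule[b].count(a)
    return m

alphabet = ["a", "b"]
fibonacci_rule = {"a": "ab", "b": "a"}
m = substitution_matrix(fibonacci_rule, alphabet)       # [[1, 1], [1, 0]]

eigvals, eigvecs = np.linalg.eig(m.T)                   # left eigenvectors of m
k = int(np.argmax(eigvals.real))                        # Perron-Frobenius eigenvalue
expansion = eigvals[k].real                             # the golden mean (1 + sqrt(5)) / 2
lengths = np.abs(eigvecs[:, k].real)
lengths /= lengths[1]                                   # normalize the type-b tile to length 1

print("substitution matrix:\n", m)
print("expansion constant:", expansion)
print("natural tile lengths (a, b):", lengths)          # (golden mean, 1)

# Consistency check: substitution scales total length by exactly the expansion constant.
assert np.allclose(lengths @ m, expansion * lengths)

The same eigenvector computation, applied to the four-by-four substitution matrix of the Fibonacci DPV, is what underlies the replace-and-rescale construction taken up next.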
denoting the support of the level- tile of type as , we can find the support of the prototile for the inflate - and - subdivide rule that corresponds to by setting in figure [ dpv3 ] we compare level- tiles from the dpv ( left ) and the self - similar tiling ( right ) ..,title="fig:",width=192 ] .,title="fig:",width=192 ] [ rauzy.sst ] the self - similar tiling associated with the rauzy two - dimensional substitution of example [ rauzy.cst ] has as its volume expansion the largest root of the polynomial .the three tile types obtained by the replace - and - rescale method are shown in figure [ rauzy4 ] , compared with a large iteration of the substituton .the replace - and - rescale method can produce intriguing results , especially if the substitution is not constructive or not primitive .we look at the former case in examples [ np.sst ] and [ trinpsst ] and discover that the associated geometric substitution tilings may lose local finiteness . in example[ chacon ] we consider the latter case to see how a lack of primitivity can impact the geometric substitution ; in this case an attempt to fix " the situation yields new tiling substitutions that fail to have the expected relationship to one another .[ np.sst ] applying the replace - and - rescale method to the substitution in example [ np.dpv ] produces the inflate - and - subdivide rule of figure [ non - pisot2 ] . .] it is proved in that any tiling admitted by this inflate - and - subdivide rule does not have finite local complexity since there are two tiles that meet in infinitely many different ways .( examination of the tiling on the right of figure [ non - pisot3 ] may convince the reader that this is plausible ) .this lack of local finiteness means that the dynamical results found in , most notably that the system should be weakly mixing , can not be directly applied . , with a loss of flc on the right.,title="fig:",height=288 ] , with a loss of flc on the right.,title="fig:",height=288 ] the loss of finite local complexity happens along arbitrarily long line segments composed of tile edges ( proved for this example in but was found as a necessary condition in general in ) .as you travel along such a segment , a discrepancy in the number of short tile edges from one side of the line to the other appears ; on longer segments this discrepancy increases as more and more short edges pile up on one side than the other .because the tile edge lengths are not rationally related , this means that we must keep seeing new adjacencies as the discrepancy grows . in the limitone will see infinite _ fault lines _ along which tiles may slide across one another with arbitrary offsets .the growth of these discrepancies is made possible by the fact that the expansion constant s algebraic conjugate is greater than one in modulus ( i.e. it is strongly non - pisot ) .this is also responsible for the fact that original dpv has adjacencies that are ripped apart when substituted , as shown in the left of figure [ non - pisot3 ] .[ trinpsst ] as in the previous example , the substitution of example [ nptriangles ] gives rise to a self - similar tiling that does not have finite local complexity . in the previous example, fault lines could occur both horizontally and vertically . 
in this example, fault lines can occur horizontally , vertically , and diagonally , as one can see from the right side of figure [ non - pisottriangles2 ] .the author has not seen examples allowing fault lines in more than three distinct directions .[ chacon ] a famous one - dimensional dynamical system is given by the chacon cut - and - stack construction , which provided the first example of a weakly but not strongly mixing system ( see , p. 133for a synopsis of the results in the one - dimensional case ) .the cut - and - stack system can be recoded by the symbolic substitution , and figure [ chaconsq ] shows a dpv substitution based on this .another tiling version of the construction , shown in figure [ chacon1 ] , is analyzed from a dynamical systems viewpoint in and put in the context of combinatorially substitutive tilings in .the four prototiles used in those works are not square , but are a rescaling of the supports of the level-1 tiles in figure [ chaconsq ] .this substitution is not primitive since no matter how many times we substitute the three small tiles , they will never contain the large square .because of this we can not obtain a meaningful self - similar tiling directly using the replace - and - rescale method : the replacements of all but the large square will have volumes that go to zero under rescaling , leaving us with only the first tile and a trivial substitution .there is a way to recode the system into a primitive one , producing the prototiles shown to the right of the arrows in figure [ chacon3 ] . by referring to figure [ chacon2 ], the reader may be convinced that knowing the surroundings of a particular tile is enough to decide unambiguously with which new prototile to replace it .the nonprimitive chacon substitution turns into the primitive one of figure [ chacon4 ] .because there is a locally defined map taking tilings admitted by the nonprimitive substitution to tilings admitted by the primitive substitution , and vice versa , the tiling spaces are considered _ mutually locally derivable_. 
this means that the dynamical systems are equivalent in the sense of topological conjugacy " , thus important dynamical features are preserved .one such dynamical feature , proved in , is that the dynamical system under the action of is weakly mixing .the larger eigenvalue of the substitution matrix of the chacon primitive substitution is 9 , and it is not difficult to see that the length expansion is governed by powers of 3 .the replace - and - rescale method produces a prototile set of five congruent squares ; the inflate - and - subdivide rule is shown in figure [ chacon5 ] .what makes this example curious is that the dynamical systems of the combinatorially substitutive tiling and its associated self - similar tiling are distinctly different .one can use results from to show that under the action , the self - similar tiling dynamical system is not weakly mixing .an embedded action would therefore also fail to be weakly mixing .this stands in contrast to the weakly mixing action proved in when the substitution is only combinatorial .one can see that the systems are misaligned " by considering figure [ chacon6 ] and comparing the location of the red circle in each substitution , which for and represents the central level- tile within its level- tile .one can check that as grows without bound so does the distance between the red dots .that is , if any corners of the level- tiles are lined up , the red dots will move further and further away from one another !substitution tilings are being studied from topological , dynamical , physical , combinatorial , and other perspectives , often in conjunction with one another . in this sectionwe will briefly outline areas of current interest and possible questions for future study .tilings make good models for the atomic structure of crystals and quasicrystals , and perhaps the most exciting work on them is being done at the intersection of physics and topology .methods for investigating certain tiling spaces via -algebras have been developed and are nicely summarized in .the types of tilings that are most easily evaluated this way are self - similar tilings and tilings generated by the projection method .( some tilings , such as the penrose and the octagonal " tilings , fall into both categories ) .the k - theory of these -algebras are of interest to both mathematicians and physicists .the possible energy levels of electrons in a material modeled by a tiling determine gaps in the spectrum of the associated schrdinger operator .the k - theory gives a natural labeling of the spectral gaps , thus providing theoretically relevant physical information ( see for a detailed discussion of this branch of study ) .it is believed that there may be additional physical interpretations for k - theory and other topological invariants of tiling spaces .there is more promising topological work being done as well .for instance , it has been shown in varying degrees of generality ( see and references therein ) that flc tiling spaces are inverse limits . 
successful efforts to compute the homology and cohomology of tiling spaces , and to connect these results to k - theory , have been plentiful .a nice summary of the current state of the art , along with the discovery of torsion and its ramifications , appears in with a primary emphasis on canonical " projection tilings .an informal discussion of the connections between some physical and mathematical problems appears in , with a focus on recent progress in the cohomology of tiling spaces .included is a summary of the work in involving cohomological analysis of the deformations of tiling spaces .an important question is the extent to which the homology and cohomology of tiling spaces has physical interpretations .almost all of the existing literature on the topology of tiling spaces makes the assumption of local finiteness .this is , after all , an appropriate restriction , given that the model of atomic structure requires tiles ( atoms ) to fit together in only a finite number of ways .however , examples exist of geometric tiling substitutions that result in non - flc tilings , for example danzer s triangular tilings and the tiling from , which is easily generalized using the methods of section [ connections.section ] .in , kenyon was the first to consider the conditions under which a tiling of with finitely many tile types can lose local finiteness : the tile boundaries must contain circles or arbitrarily long line segments , thus substitution tilings without finite local complexity have fault lines along which tiles can slide past one another . in , the cohomology of a highly restricted class of non - flc substitution tilings was successfully computed , and it was shown that each fault line leaves a sort of signature in the cohomology in dimension 3 , even though the tilings are two - dimensional .it is a topic of current interest to understand the topology of tiling spaces without finite local complexity . at the intersection of mathematical physics and dynamical systemsis the connection between the diffraction spectra of quasicrystalline solids and the dynamical spectra of the tilings that model them .the fact that these spectra are related at all is first established in , while the mathematical description of the diffraction spectral measure is given sound theoretical footing in .much of the work to date has centered around discrete point sets called delone sets , which can be thought of as locations of molecules and which can be converted into tilings in a few different ways .ever since schectman et . discovered quasicrystals in a laboratory experiment , people have been trying to figure out which delone sets are diffractive " in that their spectra exhibit sharp bright spots .mathematically it is interesting to ask when the spectra consists only of sharp bright spots , i.e. 
when it has pure point spectrum " .more precisely , one defines a _ spectral measure _ which can be broken into pure point , singular continuous , and absolutely continuous pieces with respect to lebesgue measure .great progress has been made for model sets " ( obtained by a generalized projection method ) , and for delone sets generated by substitutions ; a current synopsis of the state of the art appears in .it is now known that for certain locally finite delone sets the notions of pure point dynamical and pure point diffraction spectra coincide .this was generalized in in a measure - theoretic setting which allows for a lack of local finiteness .the question of whether certain substitution systems consist of model sets can be investigated by looking for modular coincidences " ; has an algorithm and many examples , which build upon the work in .questions remain regarding the connections when there is any continuous portion of the spectral measures .the dynamical spectra of specific geometric tiling substitutions have been studied ( and others ) but are not completely understood .related to the study of tilings and model sets is a question in dynamical systems theory . for one - dimensional symbolic substitutions , it is sometimes possible to find a geometric realization " of the substitution .a formal definition appears in , p. 140 , but the idea is that a geometric realization is a geometric dynamical system , ( such as an irrational rotation of the circle ) , which encodes the system via partition elements .for example , the fibonacci substitution sequences can be seen to code , in an almost one - to - one fashion , addition by on a one - dimensional torus , where is the golden mean ( see , p. 199 for details ) .orbits in this geometric realization look like " one - dimensional tilings .several more examples are given in , p. 231 .if a tiling dynamical system arises from a model set , then it can be seen as a geometric realization .for example , it is shown in that the penrose rhombic tiling dynamical system is an almost one - to - one extension of an irrational rotation on a 4-dimensional torus . in general, we do not know when substitution tilings have geometric realizations .this paper provides the full extent , to the author s knowledge , of known classes of combinatorial tiling substitutions .all of the examples in section [ comb.section ] are obtained by various means from one - dimensional symbolic substitutions .what other mechanisms exist for generating combinatorial substitutions ?is there a method for obtaining non - geometric substitutions from geometric ones ?it seems clear that there should be a multitude of other examples waiting to be discovered , and finding them is of paramount importance .combinatorial tiling substitutions have hardly begun to be studied from the dynamical systems viewpoint . 
in analogy with the self - similar case and many of its generalizations, we would like to investigate basic ergodic - theoretic properties such as repetitivity , unique ergodicity , and recognizability .this program was carried out in on a restricted class of two - dimensional symbolic substitutions of non - constant length for which standard " techniques could be applied .unfortunately , these techniques do not necessarily work in the non - geometric case .the crucial missing piece is that the substitution rule can not always be seen as an action from the tiling space to itself : the substitution can be applied only to level- tiles , not to entire tilings .many combinatorial tiling substitutions do not extend to maps of the tiling space in a canonical way , and it is unclear whether ( or when ) any of them do .new methods will need to be devised to tackle even the most basic questions in the dynamical systems and ergodic theory of combinatorial substitution tilings .a closely related concept , essential to many standard arguments , is whether the substitution map can be locally undone " so that one can detect the level-1 tile in a given region without requiring infinite information about the tiling .this is called _ recognizablility _ in the sequence case and the _ unique composition property _ in the self - similar tilings case .when the substitution acts as a continuous map on the tiling space , unique composition is equivalent to the substitution map being invertible . in the event of non - periodicity , recognizability and unique compositionwere proved in and , respectively .although the substitution map may not make sense on tiling spaces in the non - geometric case , the notion of unique composition still does , and a natural conjecture is that combinatorial substitutions possess it whenever they are non - periodic . in one dimension , there is great interest in the theory of combinatorics on words " ( see part i of for an extensive exposition ) . in this theory ,one considers finite blocks of letters and investigates how often they appear , and in what combinations , within sequences .substitution sequences are particularly fertile for this type of study .the _ complexity _ of a sequence is a function telling how many words of length exist in the sequence ; this can be used to compute the topological entropy , p. 4 . 
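as a concrete illustration of the substitution matrices, perron eigenvalues and the complexity function discussed in this section, the short python sketch below generates a long prefix of the one-dimensional fibonacci substitution word (using the standard rule a -> ab, b -> a, which is assumed here rather than quoted from the text), computes its substitution matrix and largest eigenvalue (the golden mean), and counts the distinct factors of each length; the counts come out as n + 1, the minimal possible growth for a non-periodic sequence, which is the sturmian property taken up next.

```python
import numpy as np

# one-dimensional Fibonacci substitution (standard rule, assumed): a -> ab, b -> a
sub = {"a": "ab", "b": "a"}

def iterate(word, steps):
    for _ in range(steps):
        word = "".join(sub[c] for c in word)
    return word

word = iterate("a", 20)   # a long prefix of the infinite Fibonacci word

# substitution matrix: entry (row, col) counts occurrences of letter `row` in sub(letter `col`)
letters = ["a", "b"]
M = np.array([[sub[col].count(row) for col in letters] for row in letters])
perron = max(np.linalg.eigvals(M).real)
print(M)                  # [[1 1], [1 0]]
print(perron)             # golden mean, ~1.618

# complexity function p(n): number of distinct factors (subwords) of length n
def complexity(w, n):
    return len({w[i:i + n] for i in range(len(w) - n + 1)})

print([complexity(word, n) for n in range(1, 9)])   # 2, 3, ..., 9  (= n + 1 for n small vs. the prefix length)
```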
any non - periodic sequence with minimal complexityis called _ sturmian _ , the classic example being the sequence given by the fibonacci substitution .one can read about the numerous consequences of being sturmian in chapter 6 of .the notion of complexity can be generalized to higher dimensions and some results exist in this direction ( see and references therein ) .combinatorial substitutions such as the rauzy substitution of example [ rauzy.cst ] are a natural place to look for examples of low - complexity sequences .some problems that have been at least partially resolved for geometric substitutions are still open for combinatorial ones .for instance it is completely unclear whether there should exist matching rules which force tiles to fit together as prescribed by combinatorial substitutions .would it be possible to use the matching rules for their associated self - similar tilings , which we know exist by , to find them ?another question is , since the connection to the atomic structure of solids is an important motivation for the study of tiling spaces , can we identify the diffraction or dynamical spectrum of combinatorial substitution ?it is known that the dynamical spectrum of the chacon substitution is trivial since it is weakly mixing , and following it is reasonable to conjecture that dpv substitutions without pisot expansions might also be weakly mixing .for combinatorial substitutions , is the spectrum largely dependent on the perron eigenvalue of the substitution matrix , as it is in the geometric case ? or is the situation like the one - dimensional symbolic case , which is also highly sensitive to the combinatorics of the substitutions ? the first open question is , when does a combinatoral tiling substitution give rise to a reasonable geometric one ?we have already seen that the non - primitive chacon substitution of example [ chacon ] does not. there must be substitutions for which the limit in the replace - and - rescale method does not exist , or produces topologically unpleasant tiles .in fact , it is unclear exactly how the replace - and - rescale method ought to properly be applied : determining the appropriate linear expansion map is problematic for at least two reasons .first , the combinatorial substitution might be encoding an inflate - and - subdivide rule that does not inflate as a similarity .this means that knowing the volume expansion would not tell us the appropriate length expansions .second , if the linear map is a similarity , there may be some rotation inherent in the combinatorial substitution that would need to be expressed in the linear map , as in the rauzy substitution of example [ rauzy.cst ] .in the best circumstances we could hope to find conditions under which the expansion can be found , the limiting tiles are topologically nice " , and a proper inflate - and - subdivide rule exists .it is interesting to consider the relationship between the dual graphs of a combinatorial substitution and its associated geometric substitution ( if it exists ) .it is clear from figure [ dpv3 ] that the dual graphs must always have the same labeled vertices , but the edge and facet sets do not seem to bear a consistent relationship to one another . in the case of example[ fib.sst ] the edge set of the dpv s dual graph is contained in that of its self - similar tiling , but this is not true in general . since the unlabeled dual graphs are not isomorphic , there is no homeomorphism of the plane taking one to another ( see , p. 
169 ) .can an understanding of the combinatorial properties of one tiling still give us insight into the other ?in one - dimensional symbolic dynamics , the curtis - lyndon - hedlund theorem ( see ) states that homeomorphisms between symbolic dynamical systems are equivalent to local maps called sliding block codes " .a sliding block code transforms one sequence into another element by element , deciding what to put in the new sequence by looking in a finite window in the old one .similarly , one tiling can be transformed locally into another ; if the process is invertible the tilings are mutually locally derivable .it has come to light that there is no curtis - lyndon - hedlund theorem for tiling dynamical systems .using the basic method of , one can show that example [ fib.dpv ] and the associated self - similar tiling of example [ fib.sst ] have topologically conjugate dynamical systems without the possibility of mutual local derivability .we conjecture that in the pisot case , combinatorial substitutions have topologically conjugate dynamical systems with their geometric counterparts . in general one would not expect the conjugacy to be through mutual local derivability .our final question takes note of the fact that the dynamical relationship between substitution sequences and self - similar tilings of the line is especially subtle . on a sequence space there is a -action ; passing to a tiling by choosing tile lengths provides a natural action by called a suspension " .surprisingly , the continuous action of the tiling space is probably better understood than the discrete action ! for instance , the presence or absence , and nature of , eigenvalues of the tiling dynamical system can be understood in terms of the expansion constant along with certain geometric information .this situation is far more complicated in the symbolic case and the interested reader should see , section 7.3 for a synopsis .also , topological conjugacies between different suspensions have been thoroughly considered in , where it is seen that the eigenvalues of the substitution matrix play a critical role .we can consider tilings such as those in figure [ non - pisot3 ] as being suspensions of the same sequence in , and ask similar questions about their spectra and topological properties .more generally , we can consider tilings such as those in figure [ non - pisottriangles2 ] as being suspensions of the same labeled graph .this perspective yields an interesting set of problems at the intersection of dynamics and combinatorics .j. bellissard , d. hermmann , and m. zarrouati , hull of aperiodic solids and gap labeling theorems , in _ directions in mathematical quasicrystals _ ( m. baake and r. v. moody eds . )crm monograph series , american mathematical society , providence , 2000 .j. kellendonk and i. putnam , tilings , -algebras , and k - theory , in _ directions in mathematical quasicrystals _ ( m. baake and r. v. moody , eds . ) , crm monograph series , american mathematical society , providencee , 2000 .b. praggastis , markov partitions for hyperbolic toral automorphisms , ph .d. thesis , university of washington , 1994 .n. priebe , towards a characterization of self - similar tilings in terms of derived orono tessellations , _ geom .* 79 * ( 2000 ) , no . 3 , 239 - 265 .
|
this paper is intended to provide an introduction to the theory of substitution tilings . for our purposes , tiling substitution rules are divided into two broad classes : geometric and combinatorial . geometric substitution tilings include self - similar tilings such as the well - known penrose tilings ; for this class there is a substantial body of research in the literature . combinatorial substitutions are just beginning to be examined , and some of what we present here is new . we give numerous examples , mention selected major results , discuss connections between the two classes of substitutions , include current research perspectives and questions , and provide an extensive bibliography . although the author attempts to fairly represent the field as a whole , the paper is not an exhaustive survey , and she apologizes for any important omissions .
|
let the vector denote a mult distributed random vector , where is the vector of cell probabilities .hence , the nonnegative components of satisfy .we will consider situations where is large with respect to , i.e. in these cases does not estimate accurately .for instance , for the average mean squared error in estimating , we have unless holds , i.e. unless comes close to a unit vector . however , there are characteristics of that can be estimated consistently . herewe will study the _ structural distribution function _ of .it is defined as the empirical distribution function of the , and it is given by },\ x\geq 0.\ ] ] our basic assumption will be that converges weakly to a limit distribution function , i.e. the basic estimation problem is how to estimate ( or ) from an observation of .a rule of thumb in statistics is to replace unknown probabilities by sample fractions .this yields the so called _ natural estimator_. this estimator , denoted by , is equal to the empirical distribution function based on times the cell fractions , so }.\ ] ] this estimator has often been used in linguistics , but turns out to be inconsistent for estimating ; see section [ inconsistency ] , khmaladze ( 1988 ) , and klaassen and mnatsakanov ( 2000 ) .our estimation problem is related to estimation in sparse multinomial tables . for recent results on the estimation of cell probabilities in this contextsee aerts , augustyns and janssen ( 2000 ) .in section [ simulation ] we present a small simulation study of a typical multinomial sample and the behavior of the natural estimator .it turns out that smoothing is required to obtain weakly consistent estimators .an estimator based on grouping and an estimator based on kernel smoothing are presented in section [ smoothing ] .section [ technique ] deals with the technique of poissonization and with the relation between weak and consistency .these basic results are used in the weak consistency proofs in section [ consistency ] .section [ discussion ] contains a discussion .we have simulated a sample with and .the cell probabilities are generated via the distribution function and its density have been chosen equal to the functions in section [ smoothing ] we show that for these cell probabilities , the limit structural distribution function from ( [ weakconv ] ) is equal to the distribution function of .here it is given by these functions are drawn in figure [ fig:1 ] . for this simulated sample we have plotted the cell counts , multiplied by , and the natural estimate in figure [ fig:2 ] .comparison with the real in figure [ fig:1 ] clearly illustrates the inconsistency of the natural estimator .up to now we have only assumed that the structural distribution function converges weakly to a limit distribution function .> from now on we will assume more structure . consider the function }(u),\ u\in \mathbb r.\ ] ] this step function is a density representing the cell probabilities and we shall call it the _ parent density_. the relation between this parent density and the structural distribution function is given by the fact that if is a uniform(0,1 ) random variable then is the distribution function of . 
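the simulation setup just described can be sketched in a few lines of python; the values of m and n and the parent density used in the paper's figures are not visible in the extracted text, so the choices below (m = n = 5000 and f(u) = 2u on [0, 1]) are hypothetical stand-ins, used only to illustrate the behaviour of the natural estimator.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 5000, 5000                     # hypothetical: number of cells and sample size
f = lambda u: 2.0 * u                 # hypothetical limiting parent density on [0, 1]

u = (np.arange(m) + 0.5) / m
p = f(u) / np.sum(f(u))               # cell probabilities generated from the parent density
counts = rng.multinomial(n, p)

def edf(values, x):
    """empirical distribution function of `values` evaluated at the points x"""
    return np.searchsorted(np.sort(values), x, side="right") / len(values)

x = np.linspace(0.0, 4.0, 9)
F_m   = edf(m * p, x)                 # structural distribution function of the scaled cell probabilities
F_hat = edf(m * counts / n, x)        # natural estimator: empirical d.f. of the scaled cell fractions
print(np.round(F_m, 3))
print(np.round(F_hat, 3))             # typically far from F_m when n is not much larger than m
```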
note that so is a probability density indeed .we will assume that there exists a limiting parent density on [ 0,1 ] such that , as , consequently we have , almost surely , and hence .the inconsistency of the natural estimator can be lifted by first smoothing the cell counts .we consider two smoothing methods , grouping , which is actually some kind of histogram smoothing , and a method based on kernel smoothing of the counts .let , be integers , all depending on , such that .define the group frequencies as then the vector of grouped counts is again multinomially distributed , where and the grouped cells estimator , introduced in klaassen and mnatsakanov ( 2000 ) , is defined by } , \quad x\geq 0.\ ] ] this estimator may be viewed as a structural distribution function with parent density },\ u\in \mathbb r.\ ] ] this histogram is an estimator of the limiting parent density in ( [ condition ] ) .we will prove weak consistency of the corresponding estimator in section [ groupcons ] . for our simulated examplethe estimates of and resulting from grouping with equal group size are given in figure [ fig:3 ] . now that we have seen that the estimator based on the grouped cells counts is in fact based on a histogram estimate of the parent density we might also use kernel smoothing to estimate and proceed in a similar manner .if we choose a probability density as _ kernel function _ and a _ bandwidth _ , we get the following estimator for the parent density as an estimator for the structural distribution function of the function we take the empirical distribution function of with uniform , namely }.\ ] ] weak consistency of this estimator will be derived in section [ kerncons ] . for our simulated example kernel estimates and of and , respectively , with equal to 50 are given in figure [ fig:4 ] .in our proofs we shall use repeatedly the powerful method of poissonization and a device involving convergence . consider the random vectors and , with where are independent .note given the random vector has a mult distribution .based on an infinite sequence of random vectors one can construct vectors and , the cell counts over and of these vectors repectively , with the distributions ( [ pois ] ) . given are coupled as follows note that this shows that either for all or for all .an important step in the ( in)consistency proofs is to show that `` poissonization is allowed '' , i.e. that we can transfer the limit result for the estimator based on the poissonized sample , the `` poissonized version '' , to the original estimator .the following proposition is used repeatedly , also if no poissonized version is involved .[ prop:1 ] let be a distribution function and let and be possibly random distribution functions . if and hold , then is valid , i.e. for all and all continuity points of in the special case where equals , the proposition states that convergence implies weak convergence .* proof * note that for all and all we have let denote an arbitrary continuity point of and an arbitrary positive number .choose such that and such that and are continuity points of .then hence , we have and , by ( [ ass:1 ] ) , consequently , by ( [ ass:2 ] ) and ( [ int ] ) we get choose such that and .then we see and hence since this holds for arbitrary continuity points and arbitrary we have established , in probability . 
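the two smoothed estimators described earlier in this section can be sketched as follows; the exact normalisations and the paper's choices of group size and bandwidth were lost in extraction, so the forms below (equal group size k, a gaussian kernel, and the constants shown) are illustrative reconstructions rather than the paper's definitions.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k, h = 5000, 5000, 50, 0.02           # cells, sample size, group size, bandwidth: all hypothetical
u = (np.arange(m) + 0.5) / m
p = 2.0 * u / np.sum(2.0 * u)               # same hypothetical parent density f(u) = 2u as before
counts = rng.multinomial(n, p)

def edf(values, x):
    return np.searchsorted(np.sort(values), x, side="right") / len(values)

# (1) grouped-cells estimator: histogram estimate of the parent density with equal group size k
groups = counts.reshape(-1, k).sum(axis=1)
f_grouped = np.repeat(m * groups / (n * k), k)

# (2) kernel estimator of the parent density on [0, 1], Gaussian kernel
grid = np.linspace(0.0, 1.0, 2000)
K = lambda z: np.exp(-0.5 * z * z) / np.sqrt(2.0 * np.pi)
f_kernel = np.array([np.sum(counts * K((g - u) / h)) for g in grid]) / (n * h)

# structural distribution estimates = empirical d.f. of the estimated parent density
x = np.linspace(0.0, 4.0, 9)
print(np.round(edf(m * p, x), 3))           # target F_m
print(np.round(edf(f_grouped, x), 3))       # grouped estimate
print(np.round(edf(f_kernel, x), 3))        # kernel estimate; both track F_m far better than the natural estimator
```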
basic trick in dealing with the difference of the natural estimator and its poissonized version , },\ ] ] uses the coupling as in ( [ condpois ] ) and is given by the following string of inequalities by ( [ nottozero ] ) the right hand side converges to zero in probability and this shows that poissonization is allowed . because of the independence of the poisson counts we can easily bound the variance of the poissonized estimator .we get } \leq { 1\over 4 m } \to 0.\ ] ] we also have }=f_m(x)\ ] ] and + together with ( [ nottozero ] ) this gives two reasons why is probably not a consistent estimator of .then , by ( [ poisallowed ] ) the natural estimator has to be inconsistent too .the inconsistency of the structural distribution function has been established in khmaladze ( 1988 ) , khmaladze and chitashvili ( 1989 ) , klaassen and mnatsakanov ( 2000 ) and van es and kolios ( 2002 ) . in these papersthe situation is considered of a _ large number of rare events _ , i.e. for some constant .the explicit limit in probability of turns out to be a poisson mixture of then . under the additional assumption , for some constant ,weak consistency of the estimator based on grouped cells has been proved , without using poissonization , by klaassen and mnatsakanov ( 2000 ) and by the poissonization method for the simpler case of equal group size , i.e. , by van es and kolios ( 2002 ) .we shall prove the following generalization without using poissonization .[ thm : group ] if , and are valid for some limiting parent density that is continuous on ] we have } - \i_{[{mq_{mj}\over k_j - k_{j-1}}\leq x ] } \big|dx\\ & = & \sum_{j=1}^m { k_j - k_{j-1}\over m } \big|{m\bar x_j\over n(k_j - k_{j-1 } ) } - { mq_{mj}\over k_j - k_{j-1 } } \big| .\nonumber\end{aligned}\ ] ] consequently , we obtain and hence in order to prove in probability , by proposition [ prop:1 ] it remains to show .consider the function }(u).\ ] ] for we have by assumption , the function is uniformly continuous and hence implies , almost surely , and in distribution , i.e. , which completes the proof of the theorem . weak consistency of the kernel type estimator is established by the next theorem .[ thm : kern ] if hold , if is a density that is riemann integrable on bounded intervals , that is also riemann square integrable on bounded intervals , and that has bounded support or is ultimately monotone in its tails , and if holds with continuous on ] , and fixed, we have note that the conditions imposed on guarantee that is arbitrarily small for sufficiently large , that which is arbitrarily close to one for large enough , and hence that as .consequently , in view of ( [ w4 ] ) , and in view of the uniform continuity and boundedness of , all three terms at the right hand side tend to zero as and subsequently .so , , almost surely and in distribution , which implies .the key assumption in the consistency proofs of the grouping and kernel estimators is the existence of a limiting parent density .this is a reasonable assumption only , if there is a natural ordering of the cells and neighboring cells have approximately the same cell probabilities . in applications likee.g. linguistics this need not be the case .consider a text of words of an author with a vocabulary of words . herethe words in the vocabulary correspond to the cells of the multinomial distribution and the existence of a limiting or approximating parent density is rather unrealistic . 
to a lesser extentthis might be the case in biology , where cells correspond to species and is the number of individuals found in some ecological entity .an estimator that is consistent even if our key assumption does not hold , has been constructed in klaassen and mnatsakanov ( 2000 ) .however , it seems to have a logarithmic rate of convergence only .the rates of convergence of our grouping and kernel estimators will depend on the rate at which the assumed limiting parent density can be estimated .this issue is still to be investigated , but under the assumption , for some constant , van es and kolios ( 2002 ) show that , for the relatively simple case of equal group size , an algebraic rate of convergence can be achieved by the estimator based on grouping .since the estimators studied here are based on smoothing of the cell frequencies an important open problem is the choice of the smoothing parameter . for the estimator based on groupingthis is the choice of the sizes of the groups and for the kernel type estimator the choice of the bandwidth . by studying convergence ratesthese choices may be optimized .khmaladze , e.v . and r.ya .chitashvili ( 1989 ) , statistical analysis of a large number of rare events and related problems ( russian ) , proc .a. razmadze math .georgian acad .sci . , tbilisi , 92 , 196 - 245 .
|
we consider estimation of the structural distribution function of the cell probabilities of a multinomial sample in situations where the number of cells is large . we review the performance of the natural estimator , an estimator based on grouping the cells and a kernel type estimator . inconsistency of the natural estimator and weak consistency of the other two estimators are derived by poissonization and other , new , technical devices .
_ ams classification : _ 62g05 ; secondary 62g20
_ keywords : _ multinomial distribution , poissonization , kernel smoothing , cell probabilities , parent density
|
in high - dimensional regression problems where the number of potential model parameters greatly exceeds the number of training samples , the use of an penalty which augments standard objective functions with a term that sums the absolute effect sizes of all parameters in the model has emerged as a hugely successful and intensively studied variable selection technique , particularly for the ordinary least squares ( ols ) problem ( e.g. ) .generalised linear models ( glms ) relax the implicit ols assumption that the response variable is normally distributed and can be applied to , for instance , binomially distributed binary outcome data or poisson distributed count data .however , the most popular and efficient algorithm for -penalised regression in glms uses a quadratic approximation to the log - likelihood function to map the problem back to an ols problem and although it works well in practice , it is not guaranteed to converge to the optimal solution .here it is shown that calculating the maximum likelihood coefficient estimates for -penalised regression in generalised linear models can be done via a coordinate descent algorithm consisting of successive soft - thresholding operations on the _ unpenalised _ maximum log - likelihood function without requiring an intermediate ols approximation . because this algorithm can be expressed entirely in terms of the natural formulation of the glm , it is proposed to call it the _ natural coordinate descent algorithm_. to make these statements precise , let us start by introducing a response variable and predictor vector .it is assumed that has a probability distribution from the exponential family , written in canonical form as where is the natural parameter of the distribution , is a dispersion parameter and , and convex are known functions . the expectation value of is a function of the natural parameter , , and linked to the predictor variables by the assumption of a linear relation , where is the vector of regression coefficients .it is tacitly assumed that such that represents the intercept parameter .suppose now that we have observation pairs ( with fixed for all ) .the minus log - likelihood of the observations for a given set of regression coefficients under the glm is given by where any terms not involving have been omitted , is a convex function , , and the dependence of and on the data has been suppressed for notational simplicity . in the penalised regressionsetting , this cost function is augmented with and penalty terms to achieve regularity and sparsity of the minimum - energy solution , i.e. is replaced by where and are the and norm , respectively , and and are positive constants .the term merely adds a quadratic function to which serves to make its hessian matrix non - singular and it will not need to be treated explicitly in our analysis . furthermore a slight generalisation is made where instead of a fixed parameter , a vector of predictor - specific penalty parameters is used .this allows for instance to account for the usual situation where the intercept coefficient is unpenalised ( ) .the problem we are interested in is thus to find with a function of the form where is a smooth convex function , is an arbitrary vector and , is a vector of non - negative parameters . the notation is used to indicate that for all and likewise the notation will be used to indicate elementwise multiplication , i.e. . 
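since the inline mathematics did not survive extraction, the latex block below sketches one plausible reading of the penalised estimation problem set up above; the symbols A, x_i, y_i, lambda_2, mu_j, script-A and w are names chosen here for the log-partition function, predictors, responses, ridge and lasso penalty parameters, smooth convex part and linear-term vector, and should not be read as the paper's exact notation.

```latex
% sketched reconstruction (symbol names chosen here, not necessarily the paper's):
% penalised minus log-likelihood of a GLM with an l1 (and optional l2) penalty,
% and the generic smooth-convex form of which it is a special case.
\hat\beta = \arg\min_{\beta\in\mathbb{R}^p}
  \sum_{i=1}^{n}\Bigl[A\bigl(\mathbf{x}_i^{T}\beta\bigr)-y_i\,\mathbf{x}_i^{T}\beta\Bigr]
  +\frac{\lambda_2}{2}\lVert\beta\rVert_2^2
  +\sum_{j=1}^{p}\mu_j\lvert\beta_j\rvert ,
\qquad
\hat\beta = \arg\min_{\beta\in\mathbb{R}^p}
  \Bigl\{\mathcal{A}(\beta)-w^{T}\beta+\sum_{j=1}^{p}\mu_j\lvert\beta_j\rvert\Bigr\},
\quad \mathcal{A}\ \text{smooth convex},\ \mu_j\ge 0 .
```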
the maximum of the _ unpenalised _ log - likelihood , considered as a function of , is of course the legendre transform of the convex function , and the unpenalised regression coefficients satisfy where is the usual gradient operator ( see lemma [ lem : legendre ] in appendix [ sec : proof - theorem - app ] ) .this leads to the following key result : [ thm : main ] the solution of is given by where is the solution of the constrained convex optimisation problem furthermore the sparsity patterns of and are complementary , the proof of this theorem consists of an application of fenchel s duality theorem and is provided in appendix [ sec : proof - theorem - app ] .two special cases of theorem [ thm : main ] merit attention .firstly , in the case of lasso or elastic net penalised linear regression , is a quadratic function of , with a positive definite matrix , such that and . if furthermore is diagonal , with diagonal elements , then eq . reduces to the independent problems with solution and .this is the well - known analytic solution of the lasso with uncorrelated predictors , which forms the basis for numerically solving the case of arbitrary as well .secondly , in the case of penalised covariance matrix estimation , for non - negative definite matrices , and for negative definite ( and otherwise ) . eq . is then exactly the dual problem studied by .it is well - known that a cyclic coordinate descent algorithm for the -penalised optimisation problem in eq . converges .when only one variable is optimised at a time , keeping all others fixed , the equivalent variational problem in eq . reduces to a remarkably simple soft - thresholding mechanism illustrated in figure [ fig : thm-1d ] .more precisely , let be a smooth convex function of a single variable , and .the solution of the one - variable optimisation problem with , can be expressed as follows . if then and hence .otherwise we must have and .hence the solution takes the form of a generalised ` soft - thresholding ' see also figure [ fig : thm-1d ] . in other words , compared to the multivariate problem in theorem [ thm : main ] where there remains ambiguity about the signs , in the one - variable case the sign is uniquely determined by the relative position of and . in one dimension .* a. * the unpenalised cost function is a convex function of ; the maximum - likelihood estimate is its unique minimiser . *b. * the maximum - likelihood estimate is also equal to the slope of the tangent to the legendre transform of at . * c. * every value of the penalty parameter leads to a different cost function ; for sufficiently small , the maximum - likelihood estimate is non - zero while for sufficiently large it is exactly zero .* d. * the penalised problem can also be solved by minimising the _ unpenalised _ legendre transform over the interval $ ] ; for and the absolute minimiser of is not included in this interval such that the constrained minimiser is the boundary value and the the maximum - likelihood estimate equals the slope of the tangent at , while for , the constrained minimiser is always the absolute minimiser which has a tangent with slope zero .note that because is convex , the slope at is always smaller than the slope at ( i.e. ) .similar reasoning applies when . 
]numerically solving the unpenalised one - variable problem is usually straightforward .first note that by assumption , is differentiable and therefore it is itself the legendre transform of .hence likewise , and assuming there exists no analytic expression for , can be found as the zero of the function for convex , this is a monotonically increasing function of and conventional one - dimensional root - finding algorithms converge quickly .the -dimensional natural coordinate descent algorithm simply consists of iteratively applying the above procedure to the one - dimensional functions where are the current coefficient estimates , i.e. where and .standard techniques can be used to make the algorithm more efficient by organising the calculations around the set of non - zero coefficients , that is , after every complete cycle through all coordinates , the current set of non - zero coefficients is updated until convergence before another complete cycle is run ( see pseudocode in appendix [ sec : code ] ) .an alternative method for updating in the preceding algorithm is to use a quadratic approximation to around the current estimate of in eq . .this leads to a linear approximation for , i.e. if , then this approximation differs from the standard quadratic approximation by the fact that it still uses the _ exact _ thresholding rule from . to be precise ,given current estimates , the standard approximation updates the coordinate by minimizing the approximate quadratic cost function \beta_j + \mu_j|\beta_j|,\end{aligned}\ ] ] which has the solution where .hence , compared to the exact coordinate update rule , the standard algorithm not only uses a quadratic approximation to the cost function , but also a linear approximation the following result shows that , under certain conditions , the approximate and exact thresholding will return the same result : [ prop : threshold ] let be a smooth convex function of a single variable , and let be the solution of with and . denote and . then the proof of this proposition can be found in appendix [ sec : proof - theorem - app ] .note that in the coordinate descent algorithms the single - coordinate functions change from step to step , that is calculated on the _ current _ instead of the _ new _ solution , and that , in the quadratic approximation algorithm , both the current and new solutions are only approximate minimisers .hence this result only shows that if all these errors are sufficiently small , then both thresholding rules will agree .i implemented the natural coordinate descent algorithm for logistic regression in c with a matlab interface ( source code available from http://tmichoel.github.io/glmnat/ ) . the penalised cost function for in this case is given by where and , are the observations .recall from section [ sec : introduction ] that is regarded as the ( unpenalised ) intercept parameter and therefore a fixed value of one ( ) is added to every observation . as convergence criterion i used , where is a fixed parameter .the difference is calculated at every iteration step when a single coefficient is updated and the maximum is taken over a full iteration after all , resp .all active , coefficients have been updated once . 
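a minimal python sketch of the coordinate update for penalised logistic regression is given below. because the exact thresholding formula was stripped from the text, the update is written through the equivalent one-dimensional subgradient conditions: a coordinate is set to zero whenever the derivative of the smooth one-variable cost at zero lies within [-mu_j, mu_j], and otherwise it is found by standard root finding on that monotone derivative, as the passage above suggests. the bracket width, the optional ridge term lam2 and all defaults are choices made here, and this is not the author's c/matlab implementation; an unpenalised intercept can be handled by appending a constant column to X with its mu entry set to zero.

```python
import numpy as np
from scipy.optimize import brentq

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def l1_logistic_ncd(X, y, mu, lam2=0.0, max_sweeps=100, tol=1e-6, bracket=50.0):
    """cyclic coordinate descent for
    sum_i [log(1+exp(x_i.b)) - y_i x_i.b] + lam2/2 ||b||^2 + sum_j mu_j |b_j|.
    Assumes each one-dimensional optimum lies within +/- `bracket`."""
    n, p = X.shape
    mu = np.asarray(mu, dtype=float) * np.ones(p)
    beta = np.zeros(p)
    eta = X @ beta                                   # current linear predictor
    for _ in range(max_sweeps):
        max_change = 0.0
        for j in range(p):
            r = eta - X[:, j] * beta[j]              # predictor without coordinate j

            def uprime(b, xj=X[:, j], r=r):          # derivative of the smooth one-variable cost
                return xj @ (sigmoid(r + xj * b) - y) + lam2 * b

            g0 = uprime(0.0)
            if abs(g0) <= mu[j]:                     # generalised soft-threshold: zero is optimal
                new = 0.0
            elif g0 < -mu[j]:                        # optimum positive: solve u'(b) = -mu_j
                new = brentq(lambda b: uprime(b) + mu[j], 0.0, bracket)
            else:                                    # optimum negative: solve u'(b) = +mu_j
                new = brentq(lambda b: uprime(b) - mu[j], -bracket, 0.0)
            max_change = max(max_change, abs(new - beta[j]))
            beta[j] = new
            eta = r + X[:, j] * new
        if max_change < tol:
            break
    return beta
```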
to test the algorithm i used gene expression levels for 17,814 genes in 540 breast cancer samples ( brca dataset ) and 20,531 genes in 266 colon and rectal cancer samples ( coad dataset ) as predictors for estrogen receptor status ( brca ) and early - late tumor stage ( coad ) , respectively ( see appendix [ sec : cancer - genome - atlas ] for details , processed data available from http://tmichoel.github.io/glmnat/ ) .i compared the implementation of the natural coordinate descent algorithm against ` glmnet ` ( version dated 30 aug 2013 ) , a fortran - based implementation for matlab of the coordinate descent algorithm for penalised regression in generalised linear models proposed by , which was found to be the most efficient in a comparison to various other softwares by the original authors as well as in an independent study .all analyses were run on a laptop with 2.7 ghz processor and 8 gb ram using matlab v8.2.0.701 ( r2013b ) .following , i considered a geometric path of regularisation parameters where , and , corresponding to the default choice in ` glmnet ` .note that is the smallest penalty that yields a solution where only the intercept parameter is non - zero .such a path of parameters evenly spaced on log - scale typically corresponds to models with a linearly increasing number of non - zero coefficients . to compare the output of two different algorithms over the entire regularisation path , i considered the maximum relative score difference where and are the coefficient estimates obtained by the respective algorithms for the penalty parameter . a critical issue when comparing algorithm runtimes is to match convergence threshold settings .figure [ fig : comp]a shows the runtimes of the exact natural coordinate descent algorithm ( using eq . ) and its quadratic approximation ( using eq . ) , and their maximum relative score difference for a range of values of the convergence threshold .the quadratic approximation algorithm is about twice as fast as the exact algorithm and , as expected , both return numerically identical results within the accepted tolerance levels . for subsequent analyses only the quadratic approximation algorithm was used . because ` glmnet ` uses a different convergence criterion than the one used here , i ran the natural coordinate descent algorithm with a range of values for and calculated the maximum relative score difference over the entire regularisation path with respect to the output of ` glmnet ` with default settings. figure [ fig : comp]b shows that there is a dataset - dependent value for where this difference is minimised and that the minimum difference is within the range observed when running ` glmnet ` with randomly permuted order of predictors .these minimising values and were used for the subsequent comparisons .first , i compared the natural coordinate descent algorithms with exact and approximate thresholding rules ( cf .eqs . and ) . for both datasets and all penalty parameter values, no differences were found between the two rules during the entire course of the algorithm , indicating that the error terms discussed after proposition [ prop : threshold ] are indeed sufficiently small in practice .since there is as yet no analytical proof extending proposition [ prop : threshold ] to the algorithmic setting , the exact thresholding rule was used for all subsequent analyses . .the inset shows the maximum relative score difference between both algorithms for the same convergence thresholds . *b. 
* maximum relative score difference between the natural coordinate descent algorithm and ` glmnet ` vs. convergence threshold parameter for the brca ( blue circles ) and coad ( red squares ) dataset .the horizontal lines indicate the minimum , mean and maximum of the relative score difference over 10 comparisons between the original ` glmnet ` result and ` glmnet ` applied to data with randomly permuted order of predictors . * c , d .* runtime in seconds of the natural coordinate descent algorithm with cold ( blue squares ) and warm ( red circles ) starts on the brca ( * c * ) and coad ( * d * ) dataset vs. index of the penalty parameter vector . * e , f . *runtime in seconds of ` glmnet ` with cold ( blue squares ) and warm ( red circles ) starts on the brca ( * e * ) and coad ( * f * ) dataset vs. index of the penalty parameter vector .see main text for details . ]next , i compared the natural coordinate descent algorithm to ` glmnet ` considering both `` cold '' and `` warm '' starts .for the cold starts , the solution for the penalty parameter value , , was calculated starting from the initial vector . for the warm starts , was calculated along the path of penalty parameters , each time using as the initial vector for the calculation of .this scenario was designed to answer the question : if a solution is sought for some fixed value of , is it best to run the coordinate descent algorithm once starting from the initial vector ( cold start ) , or to run the coordinate descent algorithm multiple times along a regularization path , each time with an initial vector that should be close to the next solution ( warm start ) ? clearly , if a solution is needed for all values of a regularization path it is always better to run through the path once using warm starts at each step . for ` glmnet `, there is a clear advantage to using warm starts and , as also observed by , for smaller values of , it can be faster to compute along a regularisation path down to than to compute the solution at directly ( figure [ fig : comp]e , f ) . in contrast, the natural coordinate descent algorithm is much less sensitive to the use of warm starts ( i.e. to the choice of initial vector for ) and it is considerably faster than ` glmnet ` when calculating solutions at single penalty parameter values ( figure [ fig : comp]c , d ) . 
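for concreteness, the benchmarking ingredients referred to above might look as follows; the exact formula for the maximum relative score difference and the precise endpoints of the regularisation path are not shown in the extracted text, so the definitions below are assumptions made for illustration (a glmnet-style geometric grid and the largest relative l2 distance between coefficient vectors along the path).

```python
import numpy as np

def geometric_lambda_path(lam_max, n_values=100, ratio=0.01):
    """geometric grid from lam_max down to ratio*lam_max (a glmnet-style default);
    the number of values and endpoint ratio used in the paper are assumptions here."""
    return lam_max * ratio ** (np.arange(n_values) / (n_values - 1))

def lambda_max_logistic(X, y):
    """smallest penalty for which only an unpenalised intercept is non-zero, for the
    un-normalised logistic cost sketched earlier (X excludes the constant column)."""
    return np.max(np.abs(X.T @ (y - y.mean())))

def max_relative_score_difference(betas_a, betas_b):
    """one possible reading of the comparison metric: the largest, over the path, of
    the relative l2 distance between the two solvers' coefficient vectors (assumed form)."""
    return max(
        np.linalg.norm(a - b) / max(np.linalg.norm(a), np.linalg.norm(b), 1e-12)
        for a, b in zip(betas_a, betas_b)
    )
```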
to investigate whether this qualitative difference between both algorithms is a general property ,the following process was repeated 1,000 times : a gene was randomly selected from the brca dataset , a binary response variable was defined from the sign of its expression level , and penalised logistic regression with penalty parameter ( cf .was performed using 5,000 randomly selected genes as predictors , using both cold start ( with initial vector and warm start ( along the regularization path ) ; the response gene was constrained to have at least samples of either sign and the predictor genes were constrained to not contain the response gene .this scheme ensured that datasets with sufficient variability in the correlation structure among the predictor variables and between the predictor and the response variables were represented among the test cases .as expected , total runtime correlated well with the size of the model , defined here as the number of predictors with , more strongly so for the natural coordinate descent algorithm ( pearson s , ) than for ` glmnet ` ( , ) .consistent with these high linear correlations the difference in speed ( runtime ) between cold and warm start was inversely proportional to model size ( spearman s ; figure [ fig : speed]a ) .furthermore , cold start outperformed warm start ( ) in all 1,000 datasets .for ` glmnet ` the opposite was true : warm start always outperformed cold start .however the speed difference ( ) did not correlate as strongly with model size ( spearman s ; figure [ fig : speed]b ; note that the opposite sign of the correlation coefficient is merely due to the opposite sign of the speed differences ) .this consistent qualitative difference between both algorithms with respect to the choice of initial vector was unexpected in view of the results in section [ sec : quadr - appr - with ] . upon closer inspection, it was revealed that the natural coordinate descent algorithm uses a scheme whereby the parameters of the quadratic approximation for coordinate ( i.e. , and ) are updated whenever there is a change in for some .in contrast , ` glmnet ` uses two separate loops , called `` middle '' and `` inner '' loop in . in the middle loop , the quadratic approximation to at the current solution is calculated , i.e. where in the inner loop , a penalised least squares coordinate descent algorithm is run until convergence using the approximation , i.e. keeping the values of and fixed. a poor choice of initial vector will therefore result in values of and that are far from optimal , and running a coordinate descent algorithm until convergence without updating these values would therefore result in a loss of efficiency .it is therefore plausible that the continuous updating of the quadratic approximation parameters in the natural coordinate descent algorithm explains its robustness with respect to the choice of initial vector . ) vs. model size between cold start ( ) and warm start ( ) on sub - sampled datasets for the natural coordinate descent algorithm ( * a * ) , logistic regression using ` glmnet ` ( * b * ) and linear regression using ` glmnet ` ( * c * ) .see main text for details . ]if this reasoning is correct , then the warm - start advantage should not be present if ` glmnet ` is used to solve penalised least squares problems , since in this case there is no middle loop to be performed . 
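the middle/inner loop organisation described in this paragraph can be sketched as follows; this is an illustrative irls-style reconstruction with fixed iteration counts rather than glmnet's actual code, and the weight floor and loop lengths are choices made here.

```python
import numpy as np

def soft(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def middle_inner_sweeps(X, y, mu, beta, n_middle=10, n_inner=50):
    """middle loop: recompute the quadratic (IRLS) approximation at the current beta;
    inner loop: coordinate descent on the penalised weighted least-squares problem
    with the weights and working response held fixed.  Illustrative only."""
    n, p = X.shape
    for _ in range(n_middle):
        eta = X @ beta
        prob = 1.0 / (1.0 + np.exp(-eta))
        w = prob * (1.0 - prob)                       # IRLS weights, fixed during the inner loop
        z = eta + (y - prob) / np.maximum(w, 1e-8)    # working response
        for _ in range(n_inner):
            for j in range(p):
                r = z - X @ beta + X[:, j] * beta[j]  # partial residual excluding coordinate j
                beta[j] = soft(np.sum(w * X[:, j] * r), mu[j]) / np.sum(w * X[:, j] ** 2)
    return beta
```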
to test this hypothesis , i performed penalised linear regression ( lasso ) with ` glmnet ` on the same sub - sampled datasets , using the same binary response variable and the same penalty parameter values as in the previous logistic regressions .although the speed difference between cold and warm start now indeed followed a similar pattern as the natural coordinate descent algorithm ( spearman s ; figure [ fig : speed]c ) , in all but 20 datasets , warm start still outperformed cold start .this suggests that not updating the quadratic approximation at every step during logistic regression in ` glmnet ` may explain in part why it is more sensitive to the choice of initial vector , but additional , undocumented optimizations of the code must be in place to explain its warm - start advantage .the popularity of -penalised regression as a variable selection technique owes a great deal to the availability of highly efficient coordinate descent algorithms . for generalised linear models ,the best existing algorithm uses a quadratic least squares approximation where the coordinate update step can be solved analytically as a linear soft - thresholding operation .this analytic solution has been understood primarily as a consequence of the quadratic nature of the problem .here it has been shown however that in the dual picture where the penalised optimisation problem is expressed in terms of its legendre transform , this soft - thresholding mechanism is generic and a direct consequence of the presence of an -penalty term .incorporating this analytic result in a standard coordinate descent algorithm leads to a method that is not only theoretically attractive and easy to implement , but also appears to offer practical advantages compared to the existing implementations of the quadratic - approximation algorithm . in particularit is more robust to the choice of starting vector and therefore considerably more efficient when it is cold - started , i.e. when a solution is computed at set values of the -penalty parameter as opposed to along a regularisation path of descending -penalties .this can be exploited for instance in situations where prior knowledge or other constraints dictate the choice of -penalty parameter or in data - intensive problems where distributing the computations for sweeping the -penalty parameter space over multiple processors can lead to significant gains in computing time .future work will focus on developing such parallellized implementations of the natural coordinate descent algorithm and providing implementations for additional commonly used generalised linear models .with the notations introduced in section [ sec : introduction ] , let and . and are convex functions on which satisfy fenchel s duality theorem where and are the legendre transforms of and respectively .we have and . if then the term in this sum is , otherwise it is , i.e. if and otherwise .it follows that where . denoting , the minimiser must satisfy the optimality conditions : and for any index , choose .then and by eq ., assume and , where .then there exists such that , but this contradicts eq . .by lemma [ lem : legendre ] below , if , then .hence we have shown that denote .we find \hat \beta_{0,j } = w^t\hat\beta_0 - \sum_{j=1}^p \mu_j |\hat\beta_{0,j}|,\end{aligned}\ ] ] and hence by eq ., i.e. is also the unique minimiser of the penalised cost function . this concludes the proof of theorem [ thm : main ] .[ lem : legendre ] for all , we have for a given , let or from fenchel s inequality ( for all , cf . 
) it follows that i.e. by assumption is differentiable and hence so is .it follows that ( see ) first , assume .by theorem [ thm : main ] we have , and hence and .this establishes the direction . if , then again by theorem [ thm : main ] and using the notation from section [ sec : natur - coord - desc ] , hence and . if , and , by convexity of u , similarly , if , we have . this establishes the direction .processed data files were obtained from https://tcga-data.nci.nih.gov/docs/publications/brca_2012/ : * normalised expression data for 17,814 genes in 547 breast cancer samples ( file ` brca.exp.547.med.txt ` ) . * clinical data for 850 breast cancer samples ( file ` brca_clinical.tar.gz ` ) .540 samples common to both files had an estrogen receptor status reported as positive or negative in the clinical data .estrogen receptor status was used as the binary response data , , and gene expression for all genes ( one constant predictor ) was used as predictor data , .processed data files were obtained from https://tcga-data.nci.nih.gov/docs/publications/coadread_2012/ : * normalised expression data for 20,531 genes in 270 colon and rectal cancer samples ( file ` crc_270_gene_rpkm_datac.txt ` ) . * clinical data for 276 colon and rectal cancer samples ( file ` crc_clinical_sheet.txt ` ) .266 samples common to both files had a tumor stage ( from i to iv ) reported in the clinical data .early ( i ii ) and late ( iii iv ) stages were grouped and used as the binary response data , , and gene expression for all genes ( one constant predictor ) was used as predictor data , .initialise .
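the appendix pseudocode appears to be truncated above after "initialise"; based on the description in the main text (a complete cycle over all coordinates, followed by sweeps over the current set of non-zero coefficients until convergence, repeated until a full cycle no longer changes anything), one possible reconstruction is sketched below. the wrapper takes any single-coordinate update routine, for instance the generalised soft-thresholding step sketched earlier.

```python
def ncd_with_active_set(beta, update, tol=1e-6, max_cycles=1000):
    """active-set organisation described in the main text.  `update(j)` should perform
    one single-coordinate update of beta[j] in place and return the absolute change it
    made.  This wrapper is a reconstruction of the truncated appendix pseudocode, not
    the author's code."""
    p = len(beta)
    for _ in range(max_cycles):
        if max(update(j) for j in range(p)) < tol:        # complete cycle over all coordinates
            break
        active = [j for j in range(p) if beta[j] != 0.0]  # current non-zero coefficients
        for _ in range(max_cycles):                        # sweep the active set until convergence
            if not active or max(update(j) for j in active) < tol:
                break
    return beta
```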
|
the problem of finding the maximum likelihood estimates for the regression coefficients in generalised linear models with a sparsity penalty is shown to be equivalent to minimising the unpenalised maximum log - likelihood function over a box with boundary defined by the -penalty parameter . in one - parameter models or when a single coefficient is estimated at a time , this result implies a generic soft - thresholding mechanism which leads to a novel coordinate descent algorithm for generalised linear models that is entirely described in terms of the natural formulation of the model and is guaranteed to converge to the true optimum . a prototype implementation for logistic regression tested on two large - scale cancer gene expression datasets shows that this algorithm is efficient , particularly so when a solution is computed at set values of the -penalty parameter as opposed to along a regularisation path . source code and test data are available from http://tmichoel.github.io/glmnat/ .
|
as of 2014 , the electricity market in japan is dominated by regional monopolies , where 85% of the installed generating capacity is produced by 10 privately owned companies . however , the rising of power producer and supplier ( pps ) ( i.e. electric power retailer ) in the electricity market is inevitable due to the full - fledged deregulation that will eventually break the monopolies . in this situation , ppss can provide the platform to potential prosumers to bring their surplus of energy to the market and to actively participate in the energy transactions .the energy management within pps is complicated and challenging with the integration of intermittent renewable energy sources .however , flexible power / energy operation as such is attainable through the advent of digital - grid architecture and seminal concept like `` eco net '' . the burden on _ _ utility _ _ and the net energy balance within a pps required to be minimized .therefore , day - ahead optimal energy matching amongst the customers is essential . in this paper, we introduce a distributed energy commitment framework for the customers subscribed ( subscribers ) under pps .based on the energy supply and consumption pattern as well as the capability of participation on the electricity market , a subscriber commits ; in a time - ahead fashion ( e.g. day - ahead ) to a certain energy profile for a particular time duration ( e.g. 30-minutes ) in future . performing a centralized energy commitment considering the geographical locations and heterogeneity of subscribers is computationally costly , communicatively expensive and is exposed to cyber - physical vulnerability . therefore, a distributed energy exchange scheme is proposed where the subscribers are grouped under several sub - service providers ( ssps ) .the nature of ssp can potentially be that of a microgrid or an aggregator ( can even be a virtual entity located in the vicinity of a pps ) .the basic energy exchange problem is re - organized as 1 ) at lower level , energy exchange between consumers and producers , 2 ) at middle level , energy exchange between ssps / aggregators / microgrids , and 3 ) at higher level , energy exchange with _ utility_. as the size of pps goes bigger , the communicative complexity at middle level increases .for this reason , a meshed network topology within ssps is inefficient and practically infeasible . to solve this issue ,a learning based coalition formation algorithm is adopted .this algorithm periodically provides the neighborhood map to the ssps in order to reduce the communicative complexity for practical distribution system .the distributed matching scheme is essentially formulated as a distributed assignment problem where the subscribers are presented with their prior - engaged energy commitment , commitment capabilities coupled with their preferences over other subscribers .we want to emphasize the fact that the system model presented in this paper is currently conceptual with solid business models for the potential stake holders .for example , utilizing the proposed model , pps makes profit by minimizing the energy transactions with _ utility _ in both spot and on - line imbalance market . 
on the other hand, the subscribers are paying less by purchasing energy locally ( consumers ) or earning more by selling energy locally ( prosumers ) , thereby essentially increasing the share of renewable sources ( so called customer - to - customer business model ) .further motivation is drawn from the fact that the feed - in - tariff ( fit ) scheme for encouraging higher renewable penetration will likely to be discontinued ( or subjected to major reform , due to incompetence of _ utility _ to _ gridization _ of renewables , especially pv ) , which makes prosumers open for merchandising the available renewables .the exact nature of the system model may vary with the ppss emphasizing their own requirement , customer segments and overall business model .the transactive energy ( te ) framework (with follow - up standard - based architecture and protocols , e.g. and ) follows the similar direction with an additional incorporation of different pricing schemes .the important distinction the proposed scheme brings is the influence of customer segmentation ( detailed in later sections ) and absence of detailed market - based control .the proposed method works with a minimum market - based control while depending on pre - engagement capability of the subscribers .however , due to its inherent distributed architecture , the proposed scheme can easily incorporate complex market based control .the distributed energy exchange problem can be realized through multiagent framework where each ssp / aggregator / microgrid is represented as a potential agent .works such as , are related to agent - based distributed operation and energy balancing .given the multiagent framework with selfish agents ( e.g. individual houses ) in places , the agents can reach cooperative equilibrium through distributed optimization .other similar streams of important research regarding multiagent system ( mas ) based energy management can be found in , , and . in ,the authors designed two - level architecture of distributed resource management of multiple interconnected microgrids . in , the authors extended their work by incorporating demand response in the mas - based resource management .we can align the proposed distributed energy matching framework with aforementioned ( and similar types ) researches .however , the important distinction between the proposed framework from the existing ones are the underlying physical power distribution system . the digital grid architecture and `` eco net'' based power exchange lay the underlying physical power distribution assumption for the proposed framework . the existing distribution system ( upon which most of the mas based demand side ems based on ) is not as flexible as proposed in seminal papers and , where power can be exchanged within multiple customers at the same time . for example , a producer ( e.g. generator in existing system , e.g. ) can supply multiple consumers ( e.g. loads ) at the same time by tagging the power facilitating ip - based power tagging introduced in . at the same time , a consumer also can receive energy from different producers .a small - scaled pilot program is already conducted ( in japan ) that demonstrates the flexible power exchange within two residential units .moreover , the proposed framework takes the advantages of microgrid coalition formation technique to periodically modify the communicative network topology . 
on the other hand , demand flexibility is considered to be an essential element of optimal demand side operation , especially with renewables . the research conducted by and provides , respectively , a certification for dr programs ( while recognizing the necessity of a standard interchange of dr ) and a direct load control for handling dr programs . dr is becoming an integral part of the renewable - fueled future grid with energy storage . a stream of recent research , e.g. , , and , tackles the issues related to robust scheduling of generation resources ( including renewables ) together with energy storage and dr , considering uncertainty in demand prediction and dynamic pricing . however , the interactions among demand side entities / players are not covered in these works . therefore , a distributed energy matching platform that operates in a deregulated electricity market with an extensive participation of prosumers is a real necessity . in , the authors provide a value co - creation business model for prosumer - oriented urban societies . a similar class of research regarding the role of prosumers is also conducted by , which essentially provides a game - theoretic operational scheme for a residential distribution system with a high level of participation from prosumers . the present research advances the state of the art of the prosumer - centric deregulated energy society by creating an energy commitment based platform that tries to bind together all emerging energy players that otherwise are unable to participate in the electricity market . the basic architecture of the proposed scheme and model is outlined in figure [ dist_model ] . the pps manages energy exchange among several ssps operating under that particular pps . the number of ssps depends on the service region , service specification and geographical coverage of that pps . the pps is thus responsible for dealing with and managing energy and power within the specific service region . the breakdown of subscribers is provided below . the subscribers are primarily either producers or consumers or a combination of both ( so called prosumers ) . a further breakdown of each subscriber group is possible considering the energy profile and commitment of the subscribers to the service . the commitment in this context is defined as the willingness ( with capability ) to consume or produce a certain amount of energy ( that is committed earlier ) for the next day ( or another period in the future ) . a brief description of each group of subscribers is presented below . an active producer ( ap ) is a special class of energy producer that is committed to participate in the distributed energy exchange scheme . such commitments from producers are integrated into the system , which leads to better and more efficient energy management . an ap is able to declare its energy production a certain period of time in advance . typically , it can be realized as a day - ahead based scheme . a passive producer ( pp ) is a special class of ap that can provide flexibility over its energy production . the ssp ( or pps ) utilizes the flexibility feature provided by the pp if necessary . the pp can use its spinning and operating reserve capacity with ramping ability to incorporate such a signal . the ssp , however , does not utilize the whole flexibility in day - ahead operation . rather , the ssp keeps a certain fraction of the declared flexibility for mitigating on - line deviation . moreover , pv / wind based ders are not able to declare their energy production precisely for a future period due to the uncertainty in forecasting .
in such case ,that kind of producers will fall into the category of pp and will declare the estimated production amount with associated confidence ( reported as flexibility ) .active consumer ( ac ) is classified as the consumer who joins the scheme and provides the future energy demand ( in the form of prior - engaged capability ) and a list of preferred aps from which that ac wants to receive energy .the preference list gives some control to ac over choosing their preferred energy break down .for example , an ac may prefer renewable powered aps over other aps .a passive consumer ( pc ) is a special class of ac that does not commit entirely regarding the energy consumption and also should be ready to compromise on the energy usage .for example , in a dr program ( as a direct control ) , the ssp issues a signal to a particular pc to shut - down one or more power - hungry devices .based on the flexibility ( reported as a percentile of demand reduction ) , the pc can react to such signal .figure [ time_interaction ] shows the high - level interactions among subscribes ( operating under ) , _ utility _ and other ssps ( except , ) .the whole operation is divided into two phases .phase 1 is operating on an _n_-time ahead ( e.g. day - ahead ) level while phase 2 is operating on real - time level . in phase 1 , the subscribers are assumed to provide their predicted energy profiles for a certain future period .the distributed energy matching operation within local subscriber and external ssps is performed in this phase .the distributed energy matching operation is actually solving a distributed optimization problem which tries to attain local matching objectives while taking the external ssps energy status into account in order to achieve the objectives of global matching , energy balancing and reduced interaction with _ utility_. the matching operation takes care of different preferences provided by the subscribers ( active and passive ) and provides them the decision regarding the volume of energy needed to be exchanged ( in a future period ) .such decisions come as commitment for both active and passive subscribers .the matching engine also provides the unavoidable ( yet required ) energy interactions to the _ utility _ as a forward market interaction . the associated ssps ( also based on the decision of distributed matching )are also informed with the required energy transactions . in real - time operation, the actual energy exchange ( based on the committed volume of energy ) takes place within the subscribers and _ utility_. however , due to the external factors , sudden change in energy demand and energy supply might occur which creates deviation between supplied energy and total demand . the detailed real - time operation is exempted in this article since it requires elaborate descriptions .this section presents an examples to clarify the framework and energy matching process .figure [ example_e_matching ] shows a simple exemplary energy matching procedure among 3 active consumers ( ac ) , 1 passive consumer ( pc ) , 2 active producers ( ap ) , and 1 passive producer ( pp ) for a day - ahead operation ( accord to n - time ahead operation ) .the energy matching will be conducted for 10 am in next day .this example assumes that , the consumers and producers are able to provide their own energy profiling . therefore , the prediction engine is not in the action for this example .passive subscribers ( pc and pp ) come up with their flexibility of 20% and 30% , respectively . 
in the case of the pc , 20% flexibility refers to a reduction of demand by up to 20% ( i.e. for pc # 1 , it can bring down the demand from 12 kwh to 9.6 kwh , if the pps instructs it to do so ) . and , in the case of the pp , 30% flexibility refers to an increase in production by up to 30% ( i.e. for pp # 1 , it can increase the production from 10 kwh to 13 kwh , if the pps instructs it to do so ) . after the pps accumulates all the requested and potential supply quantities of energy from consumers and producers , respectively , the energy matching operation starts . finally , the output is tabulated , summarizing the energy transactions among consumers and producers . it is noted that the total supply is 52 kwh ( up to 55 kwh with the flexibility of pp # 1 ) and the total demand is 57 kwh ( which can be reduced to 54.6 kwh with the flexibility of pc # 1 ) . in the ideal case with no flexibility ( i.e. demand is 57 kwh and supply is 52 kwh ) , the _ utility _ will be required to provide an additional 5 kwh to nullify the gap between supply and demand . however , the pps utilizes the flexibility of passive customers and zeroes the _ utility _ interaction by instructing pc # 1 to reduce its consumption by the full 20% and pp # 1 to increase its production by 26% ( out of the 30% flexibility ) . the resultant table can be read as follows ( e.g. row 1 ) : ap # 1 is committed to provide 5 , 8 , 11 and 6 kwh to ac # 1 , ac # 2 , ac # 3 , and pc # 1 , respectively , at 10 am tomorrow . in the same way , column 2 can be read as : ac # 2 is committed to receive 8 , 5 , and 5 kwh of energy from ap # 1 , ap # 2 and pp # 1 , respectively , and no energy from the _ utility_. the decision on energy matching can have multiple optima ( i.e. multiple solutions can be achievable while realizing the same objective ) . however , depending on criteria such as preferences ( e.g. ac # 1 prefers ap # 1 over ap # 2 to provide a higher fraction of the requested energy ) , fairness policies ( e.g. the pps provides a certain advantage to ac # 1 while respecting ac # 1 's preference ) , etc . , a single solution can be attained . these features can be included while designing specific services . in the real time operation , however , the pps might ask passive customers ( that retain flexibility unused in the day - ahead operation ) to adjust the real time demand - supply gap . for instance , in the presented example , pp # 1 can increase its production slightly ( 4% ) in the real time operation ( possibly by utilizing the operating reserve ) . pricing assumptions are very important while economically modeling and realizing the system . the designed pricing mechanism should be able to provide enough incentives to all the stakeholders to join the scheme . the main motivations of the pricing design are * a pp / ap should sell per unit energy to the pps at a price higher than the price at which that pp / ap sells per unit energy to the _ utility _ * a pc / ac should buy per unit energy from the pps at a price lower than the price at which that pc / ac buys from the external _ utility _ . these assumptions are ensured by the pps . moreover , the subscribers are assumed to provide their true predictions regarding energy consumption or production . the subscribers , therefore , are in a _ no - game _ situation and avoid strategic interactions . strategic interactions are , however , obvious when ssps are exposed to different pricing environments ( stronger market - based control ) . for example , a pc that participates in a dr program is evidently provided with an economic incentive to act as one .
on the other hand , a pp should receive additional monetary value in case of activating the spinning reserve . at the same time, the pps has to ensure that a subscriber does not deviate significantly from its commitment by implementing pricing scheme as . in this case , _ game theory _ based analysis can be applied to _nash_-out the situation in order to find the associated equilibrium .an immediate follow - up research will concentrate on appropriate pricing mechanism design for subscribes as we limit this contribution to framework introduction and required matching algorithm .an ssp realizes the following objectives through a distributed optimization problem , i ) minimizing the energy transactions with _ utility _ , ii ) maximizing the local energy transactions within local customers , iii ) respecting the preference of consumers ( a / p)c .the multiple objectives for the above optimization problem are described in this section .a particular ssp , tries to attain the multi - objectives , defined in ( [ main_obj_eq ] ) .the set of ssps working under the particular pps other than is assumed as . for notational simplicity , ( a / p)c is presented by and ( a / p)p is represented by .eq , ( [ main_obj_eq ] ) is the scalarized multi - objective function with weights corresponding individual objective .\\ + w_{2}\times&\left [ \displaystyle \sum_{i\in ac_s}cm(i , u ) \right ] \\ -w_{3}\times&\left [ \displaystyle \sum_{i\in ac_s\setminus \{u\}}\sum_{j \in ap_s \setminus \{u\}}{}(cm(i , j)+\alpha\times[\beta - pt(i , j ) ] ) \right ] \\ -w_{4}\times&\left [ \displaystyle \sum_{i\in ac_s\setminus \{u\}}\sum_{j \in ssp_{-s}}{}cm(i , j ) \right ] \\ -w_{5}\times&\left [ \displaystyle \sum_{i\in ac_s\setminus \{u\}}\sum_{j \in ssp_{-s}}{}(cm(i , j)+\alpha\times[\beta - pt(i , j ) ] ) \right ]\\ \end{matrix}\right . \label{main_obj_eq}\ ] ] in ( [ main_obj_eq ] ) , presents the weight related to regulatory objective function .this objective function ensures the active subscribers are served before passive ones , which is denoted . is typically , decided by the contract between pps and subscriber .the _ utility _ is exempted from the list of acs .the objective function weighted by minimizes the energy exchange with _utility_. the preference of each subscriber is respected by the objective function associated with the weight . the energy information received from other ssps ( described by )are handled in the objective functions weighted by and .the weights and describes maximizing the energy exchange with other ssps and maximizing the preferences of acs ( belonging to ) towards other ssps , respectively . while analyzing ( [ main_obj_eq ] )it can be noticed that , the objectives weighted by and are essentially carrying the same variables . in objective weighted by ,the index represents a member of the local producers set , . while in objective weighted by ,the index is the set of other that are physically connected with .therefore , we can easily merge these two objectives into one .similarly , objectives weighted by and can be merged into one . by combining these similar objectives in ( [ main_obj_eq ] ) , ( [ mod_obj_eq ] )is formed as follows .\\ + w_{2}\times&\left [ \displaystyle \sum_{i\in ac_s}cm(i , u ) \right ] \\ -w_{35}\times&\left [ \displaystyle \sum_{i\in ac_s\setminus \{u\}}\sum_{\substack{j \in ap_s\cup ssp_{-s}\\ \setminus \{u\}}}{}[\substack{(cm(i , j)+\\\alpha\times[\beta - pt(i , j ) ] ) } ] \right ] \\ \end{matrix}\right . \label{mod_obj_eq}\ ] ] where represents the _ utility_. 
the decision variables and or are the committed energy to be exchanged between subscriber and , and flexibility of commitment , respectively .since , ( [ mod_obj_eq ] ) is an objective function for distributed matching operation , at certain times represents the set of local producers and different ssps ( other than ) except the _utility_. the weight vector is calibrated utilizing a _ local search _ algorithm .come to the description of ( [ mod_obj_eq ] ) , represents the serving priority of ( a / p)cs which will be respected by the ssp while deciding . since local energy transaction is preferred over the transaction with other ssp ( which in turn , preferred over the transaction with _ utility _ ) , a priority table ( as is defined ( in the 3rd line of ( [ mod_obj_eq ] ) ) that describes the preference .basically , the transaction order can be defined as the following preference relation ( is transaction inside ssp , and is the transaction outside ssp but inside pps with other ssps ) . the , in case of local transaction , defines the preference of customer regarding energy source ( e.g. if prefers green energy over cheap energy , it will prefer a certain over others ) . back to ( [ mod_obj_eq ] ) ,the 2nd part objectifies the minimization of energy exchange with _utility_. note that , for the transactions defined in 1st and 3rd part , no _utility _ is involved .the above objective function is subjected to the following constraints ( and ) \leq fx(i)\leq 1 , \foralli \in ac_s \label{flex_bound_ac}\ ] ] , \forall j\in ap_s \label{flex_bound_ap}\ ] ] we assume , the _ utility _ , , can sell or buy any amount of energy .( [ const_producer ] ) constraints the total supply should not exceed the total production ( ) of a producer .( [ const_consumer ] ) constraints the total demand , i.e. of a consumer must be met .the terms and , constrained by ( [ flex_bound_ac ] ) and ( [ flex_bound_ap ] ) , respectively define the associated flexibilities . in case of active subscriber ( i.e. ap or ac ) , ( by placing ) . the defines the maximum flexibility a passive subscriber can afford .for example , if a pc can reduce 20% of energy consumption , its will be _0.2_. similarly , if a pp can increase 10% of its committed production , its will be _0.1_. therefore , the optimizer can decide within _ 0.8 _ to _ 1 _ for a pc and _ 1 _ to _ 1.1 _ for a pp .the transmission line constraints can be implemented by bounding the commitment matrix within the upper and lower power flow capacity . considering the loss function with lower and upper power flow capacity , and ( respectively ), the simple transmission line constraint can be implemented as ( [ eq_tlc_bound ] ) .the reformed distributed matching problem equivalently casted as security constrained unit commitment ( scuc ) and can be efficiently solved utilizing methods such as . in this paper , however , the scuc is not considered . 
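to make the scalarized matching problem concrete , the following is a minimal sketch of a single - ssp , single - period instance of ( [ mod_obj_eq ] ) , solved with scipy 's linear programming routine . the subscriber names , demands , capacities and objective weights are hypothetical ( they loosely follow the toy example of figure [ example_e_matching ] , with the two unspecified ac demands filled in arbitrarily ) , and the preference / priority terms and the ssp - to - ssp columns are omitted , so this is a sketch of the formulation rather than the authors ' implementation .

```python
# minimal single-ssp, single-period sketch of the matching lp in ( [ mod_obj_eq ] );
# all numbers are hypothetical, preference/priority terms are dropped.
import numpy as np
from scipy.optimize import linprog

consumers = ["ac1", "ac2", "ac3", "pc1"]
producers = ["ap1", "ap2", "pp1", "utility"]          # last column is the utility
demand    = np.array([15.0, 18.0, 12.0, 12.0])        # kwh requested (ac1/ac3 assumed)
dem_flex  = np.array([0.0, 0.0, 0.0, 0.2])            # pc1 may shed up to 20%
capacity  = np.array([30.0, 12.0, 10.0])              # declared local production
cap_flex  = np.array([0.0, 0.0, 0.3])                 # pp1 may ramp up by 30%

nC, nP = len(consumers), len(producers)
idx = lambda i, j: i * nP + j                         # flatten cm(i, j) into a vector

# objective: heavily penalise utility purchases, mildly reward local transactions
w_utility, w_local = 10.0, 1.0
c = np.array([w_utility if j == nP - 1 else -w_local
              for i in range(nC) for j in range(nP)])

A_ub, b_ub = [], []
# local producers cannot exceed their (flexible) capacity
for j in range(nP - 1):
    row = np.zeros(nC * nP)
    for i in range(nC):
        row[idx(i, j)] = 1.0
    A_ub.append(row); b_ub.append(capacity[j] * (1.0 + cap_flex[j]))
# each consumer receives between its reduced demand and its full demand
for i in range(nC):
    row = np.zeros(nC * nP)
    for j in range(nP):
        row[idx(i, j)] = 1.0
    A_ub.append(row);  b_ub.append(demand[i])                          # upper bound
    A_ub.append(-row); b_ub.append(-demand[i] * (1.0 - dem_flex[i]))   # lower bound

res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(0, None)] * (nC * nP), method="highs")
cm = res.x.reshape(nC, nP)
print("total utility purchases (kwh):", round(cm[:, -1].sum(), 2))
for i, name in enumerate(consumers):
    print(name, dict(zip(producers, cm[i].round(2))))
```

with these numbers the solver drives the utility column to zero , in line with the narrative of the example , although the particular split over producers may differ from the table in figure [ example_e_matching ] because the problem has multiple optima .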
the binary matrix , in ( [ const_consumer ] ) , presents the local physical connectivity between two subscribers . the matrix is provided by the pps and is periodically updated reflecting the demand and supply profile . the mapping is envisioned to contain both local and distributed communication and physical network infrastructure ; the local portion of the mapping contains connectivity information within , while the distributed portion represents the connectivity information ( that is an outcome of a _ learning based coalition formation _ algorithm , detailed in section [ comm ] ) among and . the formation of the matrix is shown in figure [ matrix_n ] for an exemplary case of 3 ssps . the tables in the figure are drawn from the perspective of , comprising 5 acs and 4 aps . the inter - ssp connectivity is determined by the _ learning based coalition formation _ engine . ssps exchange the aggregated energy surplus information ( that is essentially the outcome of the distributed optimization ) , which makes them potential producers in the distributed matching process . in this example , can exchange energy with , but not with . the objective function associated with weight in ( [ mod_obj_eq ] ) shows the aggregated decision variable of ssp , ( ) with local consumers ( ) . the distributed matching operation thus requires aggregated energy surplus information from a neighboring ssp , , as it appears in the constraint ( [ const_producer ] ) by . the ssp , also needs to provide a flexibility bound for for the constraint in ( [ flex_bound_ap ] ) . the for an ssp , is calculated as ( a private calculation of that is hidden from other ssps ) }{\sum_{l\in sap_{j}}[ep(l)-cm(.,l ) ] } - 1 \label{bound_j}\ ] ] where , is the set of in that are able to supply energy , i.e. >0 ] ( ) and is the total commitment of energy production for to the local acs of . the associated flexibility bound is provided by . in other words , ( [ bound_j ] ) provides the weighted flexibility bound calculated over all valid producers who are able to provide energy . the designed distributed optimization formulation in ( [ mod_obj_eq ] ) has a special feature that accounts for the privacy of each subscriber . certain regulations ( e.g. in japan and eu countries ) do not allow the subscribers ( or consumers ) to share their private energy information with peers ( e.g. neighbor subscribers ) due to security reasons ( i.e. sharing smart - meter data ) . therefore , most of the multiagent based distributed optimizations ( e.g. ) that require exchanging energy information with each other in order to reach a collaboratively optimized energy profile ( e.g. peak reduction , demand response , etc . ) become inapplicable in such a situation . the aggregated energy profile of subscribers , however , does not have such an issue since it hides the exact energy information of an individual subscriber ( that is private to that particular subscriber ) . in the proposed architecture , only a designated controller ( i.e. the ssp , where the distributed optimization algorithm is hosted ) has the exact energy breakdown information of each of its subscribers . no subscriber knows its peers ' energy profiles while partaking in the distributed optimization . therefore , local privacy measures of consumers are sustained , which makes the proposed distributed energy matching scheme applicable in a relatively privacy - sensitive society .
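the aggregated quantities that an ssp shares upstream are cheap to compute locally . the helper below is a minimal sketch of one plausible reading of the flexibility bound in ( [ bound_j ] ) , namely a comparison of the flexible headroom of the still - available producers with their committed - free surplus ; the exact functional form used here is an assumption consistent with the description of ( [ bound_j ] ) as a weighted flexibility bound over the valid producers , and the producer numbers are hypothetical .

```python
# one plausible reading of the aggregated flexibility bound of ( [ bound_j ] ):
# compare the flexible headroom of the producers that can still supply energy
# with their committed-free surplus. the functional form and the producer data
# are assumptions used only for illustration.
def aggregated_surplus_and_flex(producers):
    """producers: dicts with declared production ep, local commitment cm and
    individual flexibility fraction flex (e.g. 0.3 for a pp that can ramp 30%)."""
    valid = [p for p in producers if p["ep"] - p["cm"] > 0.0]   # producers still able to supply
    surplus = sum(p["ep"] - p["cm"] for p in valid)             # aggregated surplus shared upstream
    if surplus == 0.0:
        return 0.0, 0.0
    flexible = sum((1.0 + p["flex"]) * p["ep"] - p["cm"] for p in valid)
    fxb = flexible / surplus - 1.0                              # aggregated flexibility bound
    return surplus, fxb

# hypothetical producers of one ssp
ssp_producers = [
    {"ep": 30.0, "cm": 28.0, "flex": 0.0},   # ap, almost fully committed locally
    {"ep": 12.0, "cm": 12.0, "flex": 0.0},   # ap, nothing left to offer
    {"ep": 10.0, "cm": 6.0,  "flex": 0.3},   # pp, can still ramp up
]
surplus, fxb = aggregated_surplus_and_flex(ssp_producers)
print(f"shared surplus = {surplus:.1f} kwh, aggregated flexibility bound = {fxb:.2f}")
```

only these two aggregated numbers , and not the per - subscriber breakdown , need to leave the ssp .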
moreover , while performing the distributed energy matching with neighboring ssps ( and with the _ utility _ ) , a particular ssp only shares its aggregated energy surplus information coupled with associated aggregated flexibility instead of the entire energy information of its subscribers .the formation of distributed optimization in ( [ mod_obj_eq ] ) , with constraint as ( [ const_producer ] ) , ensures that only aggregated energy profile is required to reach the convergence ( i.e. ) .the objective function and constraints are linear in nature that makes the distributed assignment problem solvable by lp .the weights , , and describe associated preferences for each of the objective functions and can be fixed by a pps .the detailed algorithm is shown in algorithm [ dm ] .the algorithm [ dm ] describes the operation of a particular ssp while performing the local and distributed optimization .the algorithm takes into the demand and energy capacity of consumers and producers , respectively with associated flexibility limits .additionally , it also requires the preference of local consumers ( over local producers ) and fairness policy of local consumers .the outcomes of algorithm [ dm ] are the optimal energy commitment within local subscribers as well as peer ssps ( that are physically connected and are belonging to same coalition ) and the optimal flexibility of passive subscribers .an interesting feature of algorithm [ dm ] is that , the broadcasted ( energy ) surplus information is implemented using a synchronous lock . doing that ensures that a particular load ssp takes the maximum available surplus energy ( from a generator ssp ) to balance its load .therefore , the energy commitment matrix is already integrated the minimum possible energy exchange ( figure [ graph_matching ] clears the point ) .the distributed matching process between two ssps is shown in figure [ dist_e_matching ] .for instance , and iteratively optimize their local energy exchange coupled with energy exchange within themselves .these two ssps continue to do such distributed exchanging until a certain convergence criterion is encountered .the distributed matching will converge when no surplus of energy is available .the interconnections of the communicative and power network within the ssps tend to increase exponentially with the increase in number of ssps .therefore , it is essential to reduce the network complexity without much compromising with optimality . to remedy this issue ,a neighborhood map generation process is designed .the process essentially takes the advantage of microgrid coalition formation method with historical coalition formation information thereby , periodically updates the neighborhood map of each ssp .the coalition formation engine utilizes the periodically energy status of each ssp ( i.e. 
surplus or deficit of energy information ) to create groups of interconnected ssps .the groups ( and the ssps in them ) are optimally created so that the energy interactions inside a group are maximized .the pps utilizes the historical energy status of each ssp where the coalition engine in pps determines the energy based coalition among ssps ( step 1 ( s1 ) ) .the coalition engine maintains a belief neighborhood map ( bnm ) that contains a skeleton of neighborhood map using a probabilistic prior ( step 2 , s2 ) .the bnm is created by a __ approach that takes into account the periodical formed coalitions .an exemplary creation of bnm is shown in figure [ network_map_example ] considering three ssps ( 1 , 2 , and 3 ) for a particular period ( hour 1 ) . as seen from the figure, the current bnm ( which was created by statically analyzing the hour 1 s energy status of past n-1 days ) is updated when the n - th day s energy status of hour 1 .an actual neighborhood map ( anm ) is generated by taking a snapshot of bnm .the snapshot is essentially probabilistic realization of bnm ( shown in figure [ network_map_example ] .the pps delegates the anm only if there is a significant update in _ updated bnm _ ( in step 3 ) compared to _current bnm_. as seen in figure [ network_map_example ] , the probability that ssp#1 and ssp#2 are in same coalition changes from _ 0.5 _ to _ 0.8 _ after incorporating the new evidence . in this case , therefore , the pps delegates the anm to associated ssps .the distributed matching operation ( in ssp ) utilizes the anm ( or an updated anm in case of delegation from pps ) to perform matching operation with the exchange of energy information with other ssps .the anm is the ( portion of ) binary network matrix that was defined in algorithm [ dm ] . in step 4 ,the off - line matching decisions are provided as shown in figure [ time_interaction ] .the updated energy status ( prior to matching operation ) of that particular ssp is sent back to pps s coalition engine ( in step 5 ) to update the bnm .this section presents some experiments with associated discussions that are conducted to show the effectiveness of the proposed distributed matching scheme .tokyo s residential demand data ( with / without pv installation ) are taken and scaled up by adding random variance ( typically , _ normal distribution _ ) .several case studies are presented .the 1st study considers the effect of coalitions over a hypothetical distribution system containing no passive customers . for this purpose , a distribution system with 20 ssps ( operating under 1 esp ) ,each having 10 consumers and 5 producers , is considered .the energy statuses of these 15 customers ( 10 + 5 ) are randomly generated .for example , at a particular matching instance , the total demand in _ssp#2 _ is _ 127.1 _ kwh ( distributed over 10 consumers ) and total supply in _ssp#2 _ is _ 76.1 _ kwh ( distributed over 5 producers ) .therefore , _ for ssp#2 _ , the energy status is _-51 _ kwh ( saying , it will require 51 kwh from _ utility _ ) .the coalition formation engine produces 9 coalitions out of 20 ssps . 
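before turning to the concrete coalitions produced in this experiment , the belief / actual neighborhood map bookkeeping of section [ comm ] can be sketched as follows . the exponentially weighted update rule , the delegation threshold and the ssp indices are all assumptions of the sketch ; the text only states that the bnm is built from historical coalition evidence and that the anm is a probabilistic snapshot of it .

```python
# sketch of the bnm -> anm bookkeeping: keep a pairwise "same coalition" belief,
# update it with each period's observed coalitions, snapshot it into a binary anm,
# and delegate the anm only when the belief has moved noticeably.
# the smoothing rule and the thresholds are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_ssp = 5
bnm = np.full((n_ssp, n_ssp), 0.5)            # prior: every pair equally likely together
np.fill_diagonal(bnm, 1.0)

def update_bnm(bnm, coalitions, lr=0.3):
    """coalitions: list of lists of ssp indices grouped together in this period."""
    evidence = np.zeros_like(bnm)
    for group in coalitions:
        for a in group:
            for b in group:
                evidence[a, b] = 1.0
    np.fill_diagonal(evidence, 1.0)
    return (1.0 - lr) * bnm + lr * evidence    # exponentially weighted belief

def snapshot_anm(bnm):
    """probabilistic realization of the bnm: keep an edge with its belief probability."""
    anm = (rng.random(bnm.shape) < bnm).astype(int)
    anm = np.maximum(anm, anm.T)               # keep the map symmetric
    np.fill_diagonal(anm, 1)
    return anm

def needs_delegation(old_bnm, new_bnm, tol=0.1):
    return np.abs(new_bnm - old_bnm).max() > tol

# one period: suppose the energy statuses group ssps {0, 1, 4} and {2, 3} together
new_bnm = update_bnm(bnm, [[0, 1, 4], [2, 3]])
if needs_delegation(bnm, new_bnm):
    print("delegating updated anm:\n", snapshot_anm(new_bnm))
bnm = new_bnm
```

each ssp then restricts its distributed matching , i.e. the binary connectivity matrix of algorithm [ dm ] , to the neighbors present in the delegated anm .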
for example , ssps _ 16 , 2 , 11 , 6 , and 7 _ form a coalition ( provided by the anm , considering energy status profile of all ssps at a certain time period ) .so , they will exchange energy within themselves in addition to their own local exchange .the comparison of ( absolute ) energy status ( essentially the _ utility _ interactions , before and after performing distributed matching operation ) is shown in figure [ graph_e_status ] .for comparison , we also showed the meshed interactions ( i.e. 1 coalition ) .the 9 coalition case can not reach the ultimate convergence since it used only limited communication network .the matching operation , as expected , reduces the energy interaction with _utility_. coming back to _ssp#2 _ , the energy interaction is reduced with the _utility_. interestingly , for some of the ssps ( e.g. _ ssp#11 _ , the matching operation using 9 coalition completely reduces _ utility _ interactions while using 1 coalition still decides to sell back _ 22.5 _ kwh to _utility_. the _ iteration _ in this context is basically recorded when one of the ssps settles on its local and distributed exchange in the respective scopes . the reduction pattern of absolute energy status accumulated over all ssps ( i.e. the _ utility _ interactions for the esp ) with each _ iteration _ is presented in figure [ graph_convergence ] for both of the cases . in brief , the _ utility _ interaction is reduced from _ 714.6 _ kwh to _ 257 _ kwh ( for 9 coalitions ) and to _ 109 _ kwh for 1 coalition .we have chosen _ ssp#2 _ to detail the distributed energy matching operation . for the time being ,all the consumers and producers are considered as ac and ap , respectively ( by setting the variable as ) .the incorporation of passive customer is thus straight forward .the energy exchange within the aps and acs ( inside _ ssp#2 _ ) as well as interaction of acs with _ssp#11 _ is pointed in figure [ graph_matching ] .for example , _ac#2 _ of _ ssp#2 _ will receive _ 5.83 _ kwh from _ssp#11_. note that , only _ ssp#11 _ participates in the energy transaction .therefore , the proposed distributed algorithm ( algorithm [ dist_e_matching ] ) ensures that a minimum number of external ssps will be involved in the transaction .the distributed matching takes 21 _ iterations _ to converge .one of the advantages of incorporating distributed energy matching is the reduced algorithmic complexity .the complexity ( consequently , the execution time ) of centralized matching increases exponentially with the number of customers . therefore , we have presented a comparative study showing the execution times in centralized and distributed matching ( 1 coalition ) over the dimension of the problem .the dimension is defined as . the execution time comparison with centralized matching (i.e. is 1 ) with distributed matching ( considering case 3 , is 10 ) is shown in figure [ graph_et_comparison ] .the distributed matching scheme dominates significantly ( computationally ) over its centralized counterpart for large and diverse distribution system .the 2nd study considers an increasing number of subscribers ( 35 consumers and 10 producers ) for each of the 20 ssps . among 35 consumers ,10 are assumed to be passive consumer ( pc ) with maximum flexibility of 15% and the remaining 25 are active consumers ( ac ) . at the same time , among 10 producers , 5 are assumed to be passive producers ( pp ) with maximum flexibility of 10% while the remaining 5 are assumed as active producers ( ap ) . 
for the sake of simplicity , in this case , the coalition scheme is avoided ( unlike the 1st study ) .the flexibility of passive consumers ( pc ) for four ssps are provided in figure [ flex_pc ] . as mentioned before, the flexibility of a pc determines the reduction fraction of the energy consumption .the figure points out the flexibility status of pcs .for example , pc#6 of ssp#6 needs to reduce its energy down by 6% in order to maximize the local energy matching . in other words ,pc#6 , which can reduce its energy maximum of 15% due to the event such as dr , will only need to reduce 6% because of the distributed optimization process . on the other hand, ssp#1 does not need any of its pcs to reduce down the consumption ( unlike ssp#11 , where all of its pc need to reduce down to the maximum fraction ) .similarly , figure [ flex_pp ] shows the flexibility of different passive producers ( pps ) .the effect of increasing number of passive subscribers on the distributed matching process can be seen in figure [ passive_effect ] .the pattern-1 describes the convergence pattern ( measured by the absolute energy transactions with _ utility _ ) of the 2nd study while pattern-2 depicts the effect when the number of passive subscribers is increased . for pattern-2 ,number of pp is increased from 5 to 7 and number of pc is increased from 10 to 20 while keeping the total number of subscribers same . as seen in figure [ passive_effect ] ,the increase in passive subscribers essentially increases the accumulate flexibility of the matching process and hence producing a better convergence graph than that of in 2nd study . specifically speaking , increasing the number of passive subscriber as mentioned before reduces the _ utility _ interactions from roughly _ _126__kwh to __ 92__kwh by utilizing the consequent increased flexibility .the absolute energy transaction goes further down ( to _ _26__kwh ) when all of the subscribers are passive ( that consequently further increases the flexibility of the system ) .the phenomena is plotted as pattern-3 in figure [ passive_effect ] .we introduce a secured and distributed demand side management framework for future smart distribution grid taking the advantage of flexible power exchange architecture ( such as digital - grid , and `` eco net'' ) . the framework is particularly designed for new market entrant ( e.g. power producer and supplier , _ pps _ ) operating on deregulated energy market in japan . along the way, we have proposed a distributed matching algorithm that decides on a scheduled commitment ( determined -time ahead ) of energy exchange to be followed in real - time .we have identified the boundaries and roles of potential players who can be benefited from the proposed matching scheme .the contributions of the article are two - fold : 1 ) the energy matching service design for the future electricity market by strategically grouping subscribers based on their ability to commit to a certain energy profile , 2 ) the secured distributed matching operation considering a very high number of subscribers , their preferences and commitments .therefore , the proposed framework aligns with the pps s business model . a _ learning based coalition formation _method for adaptive network mapping is also proposed that essentially provides the reduced communicative network for energy exchange within the players . 
as of now , we have showed the day - ahead distributed energy matching scheme .the security and privacy of each subscriber are maintained by avoiding exposition of peer energy information ( down to subscriber level ) and by ensuring minimized and aggregated energy interactions within aggregators ( i.e. ssps ) .therefore , the proposed framework aligns perfectly with the security and privacy requirement from relatively conservative electricity market .although , the proposed framework focuses on japanese power market , the framework can be effectively utilized in other markets as well by making appropriate assumptions .the article is particularly limited to the off - line ( -time ahead ) energy commitment amongst subscribers , ( microgrids ) and _ utility_. the design of pricing scheme is not provided in the manuscript .however , the pricing schemes assumed to be _ incentive compatible_. a. n. resources and energy , `` electricity market reform in japan , '' ministry of economy , trade and industry , tech .rep . , oct 2014 .[ online ] .available : http://www.meti.go.jp/english/policy/energy_environment\/electricity_system_reform/pdf/201410emr_in_japan.pdf t. e. framework , `` gridwise transactive energy framework version 1.0 , '' gridwise architecture council , tech .rep . , 2015 .[ online ] .available : http://www.gridwiseac.org/pdfs/te_framework_report_pnnl-22946.pdf k. kok , z. derzsi , m. hommelberg , c. warmer , r. kamphuis , and h. akkermans , `` agent - based electricity balancing with distributed energy resources , a multiperspective case study , '' in _ hawaii international conference on system sciences , proceedings of the 41st annual _ , jan 2008 , pp .173173 .mohsenian - rad , v. wong , j. jatskevich , r. schober , and a. leon - garcia , `` autonomous demand - side management based on game - theoretic energy consumption scheduling for the future smart grid , '' _ smart grid , ieee transactions on _ , vol . 1 , no . 3 , pp .320331 , dec 2010 .h. s. v. s. k. nunna and s. doolla , `` multiagent - based distributed - energy - resource management for intelligent microgrids , '' _ ieee transactions on industrial electronics _60 , no . 4 ,pp . 16781687 , april 2013 .s. rivera , a. m. farid , and k. youcef - toumi , `` a multi - agent system transient stability platform for resilient self - healing operation of multiple microgrids , '' in _ innovative smart grid technologies conference ( isgt ) , 2014 ieee pes _ , feb 2014 , pp . 15 .s. chakraborty , s. nakamura , and t. okabe , `` real - time energy exchange strategy of optimally cooperative microgrids for scale - flexible distribution system , '' _ expert systems with applications _ , vol .42 , no .10 , pp . 4643 4652 , 2015 .m. alczar - ortega , c. calpe , t. theisen , and j. rodrguez - garca , `` certification prerequisites for activities related to the trading of demand response resources , '' _ energy _ ,93 , part 1 , pp .705 715 , 2015 .e. heydarian - forushani , m. golshan , m. moghaddam , m. shafie - khah , and j. catalo , `` robust scheduling of variable wind generation by coordination of bulk energy storages and demand response , '' _ energy conversion and management _ , vol .941 950 , 2015 . c. zhao , j. wang , j .-watson , and y. guan , `` multi - stage robust unit commitment considering wind and demand response uncertainties , '' _ power systems , ieee transactions on _ , vol .28 , no . 3 , pp . 27082717 ,aug 2013 .m. izvercianu , s. a. 
seran , and a .- m .branea , `` prosumer - oriented value co - creation strategies for tomorrow s urban management , '' _ procedia - social and behavioral sciences _ ,149 156 , 2014 , challenges and innovations in management and leadership12th international symposium in management . n. zhang , y. yan , and w. su , `` a game - theoretic economic operation of residential distribution system with high participation of distributed electricity prosumers , '' _ applied energy _471 479 , 2015 . c. lee , c. liu , s. mehrotra , and m. shahidehpour , `` modeling transmission line constraints in two - stage robust unit commitment problem , '' _ ieee transactions on power systems _ , vol .29 , no . 3 , pp .12211231 , may 2014 .c. s. w. group , `` introduction to nistir 7628 guidelines for smart grid cyber security , '' national institute of standards and technology , tech . rep ., sept 2010 .[ online ] .available : http://www.nist.gov/smartgrid/upload/nistir-7628_total.pdf
|
this paper introduces a demand - side distributed and secured energy commitment framework and operations for a power producer and supplier ( pps ) in a deregulated environment . due to the diversity of geographical locations as well as customers ' energy profiles , coupled with the high number of customers , managing energy transactions and the resulting energy exchanges is challenging for a pps . the envisioned pps maintains several aggregators ( e.g. microgrids ) , named sub service providers ( ssps ) , that manage customers / subscribers under their domains . the ssps act as agents that perform local energy matching ( inside their domains ) and distributed energy matching among ssps to determine the energy commitment . the goal of the distributed energy matching is to reduce the involvement of the external energy supplier ( e.g. the _ utility _ ) while providing a platform for demand side players to be a part of the energy transactions . a distributed assignment problem is designed that requires minimal and aggregated information exchange ( hence , secured ) and is solved by linear programming ( lp ) , which provides the distributed matching decision . the communicative burden among ssps due to the exchange of energy information is reduced by applying an adaptive coalition formation method . the simulations are conducted by implementing a synchronous distributed matching algorithm , showing the effectiveness of the proposed framework . distributed energy service , microgrid , distributed optimization , demand - side management , mixed integer linear programming .
|
the clt is an essential tool for inferring on parameters of interest in a nonparametric framework .the strength of the clt stems from the fact that , as the sample size increases , the usually unknown sampling distribution of a pivot , a function of the data and an associated parameter , approaches the standard normal distribution .this , in turn , validates approximating the percentiles of the sampling distribution of the pivot by those of the normal distribution , in both univariate and multivariate cases .the clt is an approximation method whose validity relies on large enough samples . in other words ,the larger the sample size is , the more accurate the inference , about the parameter of interest , based on the clt will be .the accuracy of the clt can be evaluated in a number ways .measuring the distance between the sampling distribution of the pivot and the standard normal distribution is the common feature of these methods . naturally , the latter distance is a measure of the error of the clt .the most well known methods of evaluating the clt s error are berry - essen inequalities and edgeworth expansions .these methods have been extensively studied in the literature and many contributions have been made to the area ( cf ., for example , barndorff - nielsen and cox , bentkus _ et al . _ , bentkus and gtze , bhattacharya and rao , dasgupta , hall , petrov , senatov , shao and shorack . despite their differences , the berry - essen inequality and the edgeworth expansion , when the data have a finite third moment , agree on concluding that , usually , the clt is in error by a term of order , as , where is the sample size . in the literature ,the latter asymptotic conclusion is referred to as the first order accuracy or efficiency of the clt .achieving more accurate clt based inferences requires alternative methods of extracting more information , about the parameter of interest , from a given sample that may not be particularly large . in this paperwe introduce a method to significantly enhance the accuracy of confidence regions for the population mean via creating new pivots for it based on a given set of data .more precisely , by employing appropriately chosen random weights , we construct new randomized pivots for the mean .these randomized pivots are more symmetrical than their classical counterpart the student -statistic and , consequently , they admit clts with smaller errors for both univariate and multivariate data .in fact , by choosing the random weights appropriately , we will see that the clts for the introduced randomized pivots , under some conventional conditions , can already be second order accurate ( see sections [ error of convergence ] and [ multivariate pivots ] ) .the randomization framework in this paper can be viewed not only as an alternative to the inferences based on the classical clt , but also to the bootstrap .the bootstrap , introduced by efron , is a method that also tends to increase the accuracy of clt based inferences ( cf . ,e.g. , hall and singh ) .the bootstrap relies on repeatedly re - sampling from a given data set ( see , for example , efron and tibshirani ) .the methodology introduced in this paper , on the other hand , reduces the error of the clt in a customary fashion , in both univariate and multivariate cases , and it does not require re - sampling from the given data ( see remark [ comparison to the bootstrap ] below for a brief comparison between the randomization approach of this paper and the bootstrap ) . 
for confidence regions based on clts to capture a parameter of interest , in addition to the accuracy , it is desirable to also address their volume . in this paperwe also address the volume of the resulting confidence regions based on our randomized pivotal quantities in both univariate and multivariate cases . in the randomization framework of this paper , and in view of the clts for the randomized pivots introduced in it , studying the volume of the resulting randomized confidence regions for the mean is rather straightforward .this , in turn , enables one to easily trace the effect of the reduction in the error , i.e. , the higher accuracy , on the volume of the resulting confidence regions . as a result, one will be able to regulate the trade - off between the precision and the volume of the randomized confidence regions ( see section [ length of the confidence intervals for mu ] , section [ multivariate pivots ] and appendix i ) .the rest of this paper is organized as follows . in section [ main results ]we introduce the new randomized pivots for the mean of univariate data . in section[ error of convergence ] we use edgeworth expansions to explain how the randomization techniques introduced in section [ main results ] result in a higher accuracy of the clt . in section [ length of the confidence intervals for mu ] , for univariate data , we investigate the length of the confidence intervals that result from the use of the randomized pivots introduced in section [ main results ] . extensions of the randomization techniques of section [ main results ] to classes of triangular random weights are presented in section [ multinomially weighted pivots ] .generalization of the results in sections [ main results ] and [ edgeworth exapnsions ] to vector valued data are presented in section [ multivariate pivots ] .let , , be i.i.d .random variables with , and .the student -statistic , the classical pivot for , is defined as : where and are the sample variance and the sample mean , respectively . under the assumption ,the berry - essen inequality and the edgeworth expansion unanimously assert that , without restricting the class of distributions of the data , converges in distribution to standard normal at the rate , i.e. , the clt for is first order accurate .we are now to improve upon the accuracy of by using a broad class of random weights .the improvement will result from replacing the pivot by randomized versions of it that continue to serve as pivots for .we now define the aforementioned randomized pivots for , as follows : where s are some random weights and , to which we refer as the window , is a real valued constant . 
the weights s and the window constant are to be chosen according to either one of the following two scenarios , namely , method i and method ii . + * _ method i _ * : to construct the randomized pivot in this scenario , we let the weights be a _ random sample _ with . moreover , these weights should be _ independent _ from the data . the window constant , should be chosen in such a way that it satisfies the following two properties : + + ( i ) , + + ( ii ) , + + where is a given number such that can be arbitrarily small or zero . + the notation is an abbreviation for **s**kewness **r**educing **f**actor ( see ( [ eq 2 ] ) below for a justification for this notation ) .
in spite of the higher accuracy of , provided by both method i and method ii , we emphasize that the former is more desirable .this is so since , in both univariate and multivariate cases , method i yields randomized confidence regions for whose volumes shrink to zero as the sample size increases to infinity ( see ( [ eq 9 ] ) and appendix i ) .method ii , on the other hand , fails to yield shrinking confidence regions .in fact , choosing the weights s for the pivot under the scenario of method ii , yields confidence regions for whose volumes , as , approach a limiting distribution rather than vanishing ( see ( [ eq 10 ] ) and table 3 below ) .[ remark 1 ] the term in the denominator of , as in ( [ eq 2 ] ) , under both methods i and ii , can , equivalently , be replaced by . in the above description of different weights in methods i and ii , we excluded the case when the weights s have a skewed distribution and .this case was omitted since , in general , it does not necessarily provide a refinement in the clt for the resulting randomized pivot , nor does it result in confidence regions whose volumes shrink to zero , as the sample size increases .the main idea behind methods i and ii is to transform the classical pivot , as in ( [ eq 1 ] ) , to , as in ( [ eq 2 ] ) , that has a smaller _ skewness_. to further develop the idea , we first note that is governed by the joint distribution of the data and the weights s . in view of this observation , we let stand for the joint distribution of the data and the wights , and we represent its associated mean by .recalling now that in both method i and method ii the weights are independent from the data , we conclude that and , consequently , .+ now observe that in view of the preceding observation , we now obtain the skewness of the random variables , under both methods i and ii , as follows : the second term of the product on the r.h.s . of ( [ eq 2 ] ) , i.e. , , is the skewness of the original data .the closer it is to zero the nearer the sampling distribution of , as defined in ( [ eq 1 ] ) , will be to the standard normal .however , one usually has no control over the skewness of the original data .the idea in methods i and ii is to incorporate the random weights s and to appropriately choose a window constant in such a way that is arbitrarily small .this , in view of ( [ eq 2 ] ) , will result in smaller skewness of the random variables ( see also appendix ii for the effect of the skewness reduction methods on vector valued data ) .the latter property , in turn , under appropriate conditions , can result in the second order accuracy of the clts for , as defined in ( [ eq 2 ] ) , under both methods i and ii . the accuracy of is to be discussed later in this section in the univariate case and , in section [ multivariate pivots ] in the multivariate case . in view of ( [ eq 2 ] ), it is now easy to appreciate that when is chosen in such a way that , then the skewness of will be exactly zero .the latter case can happen under method i when the distribution of the s is skewed and the cubic equation has at least one real root and is taken to be one of these real roots .the other way to make equal to zero is when the weights s have a symmetrical distribution and , i.e. 
, method ii .however , when method ii is used to construct , having an that is exactly zero , as it was already mentioned in section [ main results ] , will come at the expense of having confidence regions for whose volumes do not vanish ( see section [ length of the confidence intervals for mu ] and appendix i ) .we use edgeworth expansions to illustrate the higher accuracy of the clt for the randomized pivot , as in ( [ eq 2 ] ) , under methods i and ii , as compared to that of the classical clt for the pivot , as in ( [ eq 1 ] ) .edgeworth expansions are used in our reasoning below since they provide a direct link between the skewness of a pivotal quantity and the error admitted by its clt . in order to state the edgeworth expansion for the sampling distribution , for all , we first define also , we consider arbitrary positive and , and we let be so that , where stands for the standard normal distribution function . in view of the above setup , we now write the following approximation . under the assumption , from baum and katz , we conclude that , as , by virtue of this result , we conclude that replacing by produces an error that approaches zero at the rate , as .combining now the preceding conclusion with ( [ eq 2 ] ) and letting , we arrive at the preceding relation implies the asymptotic equivalence of up to an error of order . in view of this equivalence and also recalling that in both methods i and ii the weights have a finite third moment , we write a one - term edgeworth expansion for , , as follows : where is the density function of the standard normal distribution and . under the condition ,the following ( [ eq 5 ] ) and ( [ eq 6 ] ) are the respective counterparts of the approximations ( [ eq 3 ] ) and ( [ eq 4 ] ) for the classical , as in ( [ eq 1 ] ) , and they read as follows : where and a comparison between the expansions ( [ eq 6 ] ) and ( [ eq 4 ] ) shows how incorporating the weights s and their associated window , as specified in methods i and ii , results in values of which are closer to the standard normal distribution than those of . more precisely , under methods i and ii , having an , such that is small or negligible , results in smaller or negligible values of the skewness of , as defined in ( [ eq 2 ] ) . the latter reduction of the skewness , when is negligible , by virtue of ( [ eq 3 ] ) and ( [ eq 4 ] ) , yields a one - term edgeworth expansion for the sampling distribution of whose magnitude of error is rather than .on the other hand , in view of ( [ eq 5 ] ) and ( [ eq 6 ] ) , the rate of convergence of the clt for the classical , as in ( [ eq 1 ] ) , is of order . in order to further elaborate on the refinement provided by the skewness reduction approach provided by methodsi and ii above , we now assume that the data and the weights both have a finite fourth moment .in addition to the latter assumption , we also assume that the data satisfy cramr s condition that .cramr s condition is required for the sampling distributions and to admit two - term edgeworth expansions .it is noteworthy that typical examples of distributions for which cramr s condition holds true are those with a proper density ( cf .hall ) . once again here , replacing by , as in ( [ eq 2 ] ) and ( [ eq 2 + 1 ] ) , generates the error term , where is an arbitrary small positive constant . in view of our moment assumption at this stage , , from baum and katz we conclude that , as hence , replacing by generates an error of order . 
by virtue of the latter conclusion , an argument similar to the one used to derive ( [ eq 3 ] ) yields also , a similar argument to ( [ eq 5 ] ) yields where is as defined in ( [ eq 4 + 1 ] ) . the approximation result ( [ eq 3 + 1 ] ) implies that and are equivalent up to an error of order and ( [ eq 3 + 2 ] ) yields the same conclusion for and . by virtue of the latter two equivalences , we now write two - term edgeworth expansions for and , , as follows : where is as in ( [ eq 4 ] ) , and . as to , it admits the following two - term edgeworth expansion . in view of ( [ eq 7 ] ) , and also ( [ eq 3 + 1 ] ) , when the data and the weights have four moments and the data satisfy cramér 's condition , we conclude that for both methods i and ii , when is small , the clt for becomes more accurate . in particular , when is negligible then the clt for is second order accurate , i.e. , of order . in contrast , by virtue of ( [ eq 7 + 1 ] ) , and also ( [ eq 3 + 2 ] ) , one can readily see that , under the same conditions for the data , the clt for is only first order accurate , i.e. , of order . in view of methods i and ii , we are now to put the refinement provided by the randomized pivots , as in ( [ eq 2 ] ) , to use by constructing more accurate confidence intervals for the population mean , in the case of univariate data . in this section we also study the length of these confidence intervals . the use of as a pivot results in asymptotic , , size confidence intervals for of the form : ,\ ] ] where and is the percentile of the standard normal distribution . we now examine the length of which is it is easy to see that , for when it is constructed by means of method i , since , as , we have in other words , choosing the weights and their associated window constants in accordance with method i , to create the randomized pivot , as in ( [ eq 2 ] ) , results in confidence intervals for whose lengths approach zero , as the sample size increases . on the other hand , in the scenario of method ii we have . the latter choice of implies that , as , for all the preceding clt for the weights , in view of ( [ eq 9 ] ) , implies that , as where and is a standard normal random variable . in view of ( [ eq 10 ] ) , the length of a confidence interval based on the pivot , when it is constructed in accordance with method ii , converges in distribution to a scaled inverse of a folded standard normal random variable rather than shrinking , while , as was seen in section [ error of convergence ] , this method results in clts for that , under appropriate conditions , are second order accurate ( cf . ( [ eq 3 + 1 ] ) , ( [ eq 7 ] ) and table 3 ) , recalling that in method ii , . in this section we present some numerical results to illustrate the refinement provided by the randomized confidence intervals , as in ( [ eq 8 ] ) , when the random weights and their associated window constants are chosen in accordance with methods i and ii . in addition to examining the accuracy in terms of empirical probabilities of coverage , here , we also address the length of the randomized confidence intervals . in our numerical studies in tables 1 - 3 below , we generate 1000 randomized confidence intervals as in ( [ eq 8 ] ) , with a nominal size of , using the cut - off points therein , and 1000 classical -confidence intervals , based on the same generated data with the same nominal size and cut - off points .
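one replication of the study behind tables 1 - 3 can be written compactly as follows . the randomized interval below uses the same assumed pivot form as in the earlier sketch , so its endpoints should be read as an illustration consistent with ( [ eq 8 ] ) and ( [ eq 9 ] ) rather than a verbatim copy of them ; the data are exponential , and the weights are exponential(1 ) with the window taken as the real root of their centred third moment , in the spirit of method i ( the particular weight law is a choice of the sketch , not necessarily the one used in the paper 's tables ) .

```python
# sketch of the coverage experiment behind tables 1-3: classical t-interval vs.
# a method-i randomized interval. the randomized interval endpoints follow the
# assumed pivot form used in the earlier sketch and are meant as an illustration.
import numpy as np

rng = np.random.default_rng(2)
z = 1.959964                      # 97.5th percentile of the standard normal
n, mu, reps = 30, 1.0, 1000
c_w = 1.5961                      # real root of e(w - c)^3 = 6 - 6c + 3c^2 - c^3 = 0
                                  # for exponential(1) weights (method i window)

cover_t = cover_g = 0
len_t = len_g = 0.0
for _ in range(reps):
    x = rng.exponential(scale=mu, size=n)
    s = np.std(x, ddof=1)

    # classical t-interval with normal cut-off points, as in the paper's tables
    lo, hi = x.mean() - z * s / np.sqrt(n), x.mean() + z * s / np.sqrt(n)
    cover_t += lo <= mu <= hi
    len_t += hi - lo

    # randomized interval from the assumed pivot: solve |g_n(mu)| <= z for mu
    w = rng.exponential(scale=1.0, size=n)     # weights, independent of the data
    d = w - c_w
    centre = np.sum(d * x) / np.sum(d)
    half = z * s * np.sqrt(np.sum(d**2)) / abs(np.sum(d))
    cover_g += centre - half <= mu <= centre + half
    len_g += 2 * half

print(f"classical t : coverage {cover_t/reps:.3f}, mean length {len_t/reps:.3f}")
print(f"randomized  : coverage {cover_g/reps:.3f}, mean length {len_g/reps:.3f}")
```

in this assumed form the half - length shrinks like the inverse square root of the sample size under method i , whereas taking the window equal to the mean of the weights ( method ii ) leaves the denominator of the same order as the numerator , which is the non - shrinking behaviour reported in table 3 .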
in tables 1 - 3 coverage( ) and length( ) stand , respectively , for the empirical probabilities of coverage and the empirical lengths of the generated confidence intervals .also , coverage( ) and length( ) stand , respectively , for the empirical probabilities of coverage and the empirical lengths of the generated -confidence intervals with nominal size 95% . in the following tables 1 - 2 , under the scenario of method i , we examine the higher accuracy provided by the randomized pivot , as in ( [ eq 2 ] ) , over the classical , as in ( [ eq 1 ] ) . [ table 1 ] [ table 2 ] from tables 1 and 2 , it is evident that the randomized pivots , as in ( [ eq 2 ] ) , when constructed according to method i , can significantly outperform , as in ( [ eq 1 ] ) , in terms of accuracy . in the following table 3we examine numerically the performance of when it is constructed based on method ii .[ table 3 ] note that in table 3 , as the sample size increases , the lengths of the confidence intervals , as in ( [ eq 8 ] ) with therein , that are constructed based on method ii , fluctuate rather than shrink ( see ( [ eq 10 ] ) ) .in this section we put the scenario of method i into perspective , and extend it to also include triangular weights .the idea here is to relate the size of the given sample to the random weights . in this section ,we let be a _ triangular _ array of random weights that is _ independent _ from the data .the random weights here , can _ either _ be an i.i.d .array of random variables with , _ or _ they can have a distribution with size , i.e. , we are now to introduce method i.1 , as a generalization of method i , that can yield asymptotically , in , srf s whose absolute values are small or negligible .+ + * _ method i.1 _ * : let be as above . choose a real valued constant in such a way that for given , so that can be arbitrary small or zero , + + ( i ) and + ( ii ) + + moreover , as , should also satisfy the following maximal negligibility condition .the clt in ( [ eq 13 ] ) is a consequence of the well known lindeberg - feller clt in a conditional sense .we further elaborate on the clt in ( [ eq 13 ] ) by noting that , in light of the dominated convergence theorem , ( [ eq 13 ] ) follows from the following conditional clt : + _ as , for all , ( [ eq 12 + 1 ] ) suffices to have _ it is noteworthy that a typical condition under which ( [ eq 12 + 1 ] ) holds true is when the identically distributed triangular weights s , for each , have a finite moment , where , and , for some positive constant .the validity of the latter claim can be investigated by an application of markov s inequality for , where is an arbitrary positive number . on taking , for example , in method i.1 , when the weights are distributed as in ( [ eq 14 + 1 ] ) ,the randomized pivot , as in ( [ eq 11 ] ) , assumes the following specific form : the window constant , in view of method i.1 , when the weights are as in ( [ eq 14 + 1 ] ) , was obtained from the following three steps : + step 1 : obtain the general form of in this case as follows : we note that the maximal negligibility condition ( [ eq 12 + 1 ] ) holds for the weights as in ( [ eq 14 + 1 ] ) .the latter is true since , in this case , we have , where is a positive number whose value is not specified here ( cf . the paragraph following remark [ remark 3 ] ) . by this , we conclude that , on taking , all the assumptions in method i.1 hold true for the weights , as in ( [ eq 14 + 1 ] ) . 
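The empirical coverages and lengths reported in Tables 1-3 come from repeating the interval construction over many simulated data sets. A minimal Monte Carlo harness of that kind is sketched below in Python; the classical t-interval is written out in full, while the randomized interval must be passed in as a user-supplied callable because the exact form of the randomized pivot appears in displays elided above. Function names and the lognormal example are illustrative only.

```python
import numpy as np
from scipy import stats

def classical_t_interval(x, level=0.95):
    """Classical CLT/t-based confidence interval for the population mean."""
    n = len(x)
    z = stats.norm.ppf(0.5 + level / 2.0)          # normal cut-off points, as in the text
    half = z * x.std(ddof=1) / np.sqrt(n)
    return x.mean() - half, x.mean() + half

def empirical_coverage_and_length(sampler, interval_fn, mu, n, reps=1000, seed=0):
    """Empirical coverage probability and mean length over `reps` replications."""
    rng = np.random.default_rng(seed)
    hits, lengths = 0, []
    for _ in range(reps):
        x = sampler(rng, n)
        lo, hi = interval_fn(x)
        hits += (lo <= mu <= hi)
        lengths.append(hi - lo)
    return hits / reps, float(np.mean(lengths))

# Example: heavily skewed Lognormal(0,1) data (mean exp(0.5)), as in the tables.
lognormal = lambda rng, n: rng.lognormal(0.0, 1.0, size=n)
print(empirical_coverage_and_length(lognormal, classical_t_interval,
                                    mu=np.exp(0.5), n=20))
```

A randomized interval built according to Method I or II can be benchmarked by passing its own `interval_fn` to the same harness.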
in the present context of weights , the confidence intervals for , based on the pivot , as in ( [ eq 15 ] ) , follow the general form ( [ eq 8 ] ) .however , the fact that here we have the constrain , enables us to specify ( [ eq 8 ] ) for in this context , as follows : random variables , of the form ( [ eq 14 + 1 ] ) , also appear in the area of the weighted bootstrap , also known as the generalized bootstrap ( cf . , for example , arenal - gutirrez and matrn , barbe and bertail , csrg _et al_. , mason and newton and references therein ) , where they represent the count of the number of times each observation is selected in a re - sampling with replacement from a given sample. motivated by this , somewhat remote , relation between the bootstrap and our randomized approach in method i.1 , when the weights are as in ( [ eq 14 + 1 ] ) , we are now to conduct a numerical comparison between the two methods . after some further elaborations on the weighted bootstrap, we present our numerical results in table 4 below . to explain the viewpoint of the weighted bootstrap , we first consider a bootstrap sample that is drawn with replacement from the original sample .observe now that for the bootstrap sample mean we have where , for each , , is the count of the number of times the index of was selected .it is easy to observe that the weights are distributed , as in ( [ eq 14 + 1 ] ) , and they are independent from the data .to conduct our numerical comparisons , we consider the bootstrap -confidence intervals ( cf . efron and tibshirani ) that are generally known to be efficient of the second order in probability- ( cf ., for example , hall , shao and tu and singh ) . to construct a bootstrap -confidence interval for the population mean , first a large number , say , of independent bootstrap samples of size are drawn from the original data .let us represent them by , where .the bootstrap version of , as in ( [ eq 1 ] ) , is computed for each one of these bootstrap sub - samples to have , where is the bootstrap sample variance and s are as in ( [ eq 14 + 1 ] ) .these bootstrap -statistics are then sorted in ascending order to have \leq \ldots \leq t_{n}^{*}[b]$ ] .when , for example , , a bootstrap -confidence interval for with the nominal size is constructed by setting : for the same nominal size of 95% , we are now to compare the performance of the randomized confidence interval , as in ( [ eq 16 + 1 ] ) , to that of the bootstrap -confidence interval , in table 4 below . in table 4 , we generate 1000 confidence intervals .to do so , we use 1000 replications of the data sets , and the weights , as in ( [ eq 14 + 1 ] ) . for each one of the generated data sets , based on bootstrap samples , we also generate 1000 bootstrap -confidence intervals , with nominal size of 95% .similarly to our setups for tables 1 - 3 , in table 4 , we let coverage( ) and length( ) stand for the empirical coverage probabilities and the empirical lengths of the therein generated randomized confidence intervals . 
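A minimal sketch of the bootstrap t-interval described in this passage (cf. Efron and Tibshirani) is given below; the resampling counts play the role of the multinomially distributed weights, and the percentile convention for B = 500 is the usual textbook one rather than necessarily the authors' exact implementation.

```python
import numpy as np

def bootstrap_t_interval(x, B=500, level=0.95, seed=0):
    """Standard bootstrap-t confidence interval for the mean."""
    rng = np.random.default_rng(seed)
    n = len(x)
    xbar, s = x.mean(), x.std(ddof=1)
    t_star = np.empty(B)
    for b in range(B):
        xb = rng.choice(x, size=n, replace=True)       # resampling with replacement
        t_star[b] = np.sqrt(n) * (xb.mean() - xbar) / xb.std(ddof=1)
    t_star.sort()
    alpha = 1.0 - level
    upper = t_star[int(np.ceil((1.0 - alpha / 2.0) * B)) - 1]
    lower = t_star[int(np.floor((alpha / 2.0) * B))]
    # note the inversion: the upper quantile of t* gives the lower endpoint
    return xbar - upper * s / np.sqrt(n), xbar - lower * s / np.sqrt(n)
```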
also , in table 4 , we let coverage( ) and length( ) stand for the empirical probabilities of coverage and the empirical lengths of the bootstrap confidence intervals .the relatively close performance , in terms of accuracy , of the bootstrap -confidence intervals with bootstrap samples , and the randomized pivot , as in ( [ eq 15 ] ) , in table 4 is interesting .further refinements to the randomization approach method i.1 that results in randomized pivots that can outperform , in terms of accuracy , method i.1 are presented in method i.2 in subsection [ fixed sample approach ] below .it is worth noting that the class of random weights ( [ eq 14 ] ) is far richer than the particular form ( [ eq 14 + 1 ] ) .our focus on the latter was mainly the result of its application in the area of the weighted bootstrap .clearly different choices of the size and/or in ( [ eq 14 ] ) yield different randomizing weights .[ comparison to the bootstrap ] the use of the randomized pivots introduced in this paper to construct confidence intervals for the mean by no means is computationally intensive , while the bootstrap is a computationally demanding method .also , using the randomization methods discussed in this paper , one does not have to deal with the problem of how large the number of bootstrap replications , should be .moreover , the error reduction methods introduced in this paper enable one to easily trace down the effect of the randomization on the length of the confidence intervals in the univariate case , and the volume of the randomized confidence rectangles when the data are multivariate ( cf .( [ eq 8 ] ) , section [ multivariate pivots ] and appendix i ) .it is also worth noting that the randomization framework allows regulating the error of an inference by choosing a desired value for the srf .this can be done by choosing the random weights from a virtually unlimited class , as characterized in the above method i , method ii , method i.1 and also method i.2 below .the approach discussed in method i.1 considers triangular random weighs , to tie the random weights to the sample size , and chooses the window constant therein in such a way that it makes the absolute value of the srf arbitrarily small , in the limit . here, we also consider the triangular random weights as described at the beginning of this section and introduce a method to increase the accuracy of the clt based inferences about the mean for fixed sample sizes. for each fixed sample size , the following method i.2 yields a further sharpening of the asymptotic refinement provided by method i.1 and it reads as follows : + + * _ method i.2 _ * : let the weights s be as described right above method i.1 . if for a given , so that can be arbitrary small or zero , there exist a real value so that for the weights s , we have + , + and + + then , for each , choose a real valued constant in such a way that it satisfies the following conditions ( iv ) and ( v ) .+ + , + .+ + the viewpoint in method i.2 , in principle , requires choosing different for different sample sizes , for a given . 
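Conditions (iv) and (v) of Method I.2 amount to a one-dimensional tuning problem for the window constant at each fixed sample size. The sketch below illustrates one way such a tuning step could be automated; the skewness-reducing factor itself must be supplied by the user, since its explicit expression appears in displays elided above, and the bracketing interval is purely illustrative.

```python
import numpy as np
from scipy.optimize import brentq

def tune_window_constant(srf, n, bracket=(1e-3, 1e3)):
    """Find a window constant c_n that drives the (user-supplied) skewness-
    reducing factor srf(c, n) to approximately zero for a fixed sample size n."""
    a, b = bracket
    if srf(a, n) * srf(b, n) < 0:
        return brentq(lambda c: srf(c, n), a, b)       # sign change: root-find
    cs = np.logspace(np.log10(a), np.log10(b), 2000)   # otherwise: grid minimisation
    return cs[int(np.argmin(np.abs([srf(c, n) for c in cs])))]
```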
also , it is not difficult to see that method i.1 is the asymptotic version of method i.2 .we note that , for each fixed and given , when is small , method i.2 and its associated pivots , as in ( [ eq 12 ] ) , yield higher accuracy than those that result from the use of method i.1 and its associated pivots , as in ( [ eq 11 ] ) .this is true since , in method i.2 , the window constants are tailored for each fixed to make .this is in contrast to the viewpoint of method i.1 in which the therein defined skewness reducing factor assumes the given value in the limit . despite their differences in the context of finite samples ,both method i.1 and method i.2 yield randomized pivots , as in ( [ eq 11 ] ) and ( [ eq 12 ] ) , that can outperform their classical counterpart , as in ( [ eq 1 ] ) , in terms of accuracy ( see tables 4 above and also tables 5 and 6 below ) . under the scenario of method i.2 , the confidence intervals for based on the randomized pivots , also admit the general form ( [ eq 8 ] ) , only with in place of and in place of therein . hence ,in the following numerical studies we denote them by . in order to illustrate the refinement provided by method i.2 , we consider random samples of sizes and from the heavily skewed lognormal(0,1 ) .we also consider distributed weights as in ( [ eq 14 + 1 ] ) .choosing the random weights here to be distributed , as in ( [ eq 14 + 1 ] ) is so that the numerical results in tables 5 and 6 below should be comparable to their counterparts in table 4 above where the data have a lognormal(0,1 ) distribution . on taking in method i.2 , we saw in subsection [ multinomially weighted pivots ] , that for we have + and + .+ recall that for the weights , as in ( [ eq 14 + 1 ] ) , the general form of was already derived in ( [ eq 16 ] ) . in view of the latter result ,it is easy to check that when , on taking we have . also , for , taking yields .consider now and , the confidence intervals for of nominal size based on method i.2 and samples of size and , which result , respectively , from setting : in the following tables 5 and 6 we generate 1000 replications of lognormal(0,1 ) data and weights , as in ( [ eq 14 + 1 ] ) , for and .we let coverage( ) and coverage( ) stand for the respective empirical probabilities of coverage of and .we also let length( ) and length( ) stand for the respective empirical lengths of and . [ [ appendix - i - asymptotically - exact - size - randomized - confidence - rectangles ] ] appendix i : asymptotically exact size randomized confidence rectangles ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ in the case of multivariate data , the effect of the randomization methods discussed in section [ multivariate pivots ] , on the volume of the resulting randomized ( hyper ) confidence rectangles can be studied by looking at the marginal confidence intervals for each component of the mean vector . to further elaborate on the idea , for simplicitywe restrict our attention to two dimensional data as the idea is the same for data with higher dimensions .furthermore , here , we borrow the notation used in section [ multivariate pivots ] , and note that we first consider the randomization approach of method i. 
the effect of the other randomization methods on the volume of the resulting randomized confidence rectangles are to be addressed later on .consider the i.i.d .bivariate data , , with mean .furthermore , for ease of notation , let , where , as defined in ( [ eq 20 ] ) with , is the sample covariance matrix .the randomized version of the confidence rectangle ( [ classical confidence rectangle ] ) for , in view of method i , and based on the randomized pivot , as defined in ( [ eq 19 ] ) , is of the following form : hence , similarly to the univariate case , in case of multidimensional data , under the conditions of section [ multivariate pivots ] , in view of method i , as , we have . in other words , method i yields randomized confidence regions for the mean vector , that shrink as the sample size increases .we remark that , in the multivariate case , methods i.1 and i.2 also yield randomized confidence rectangles of the form ( [ randomized confidence rectangle ] ) , with the notation therein replaced by , that shrink as the sample size increases . a similar argument to the one used to derive ( [ eq 10 ] ) shows that the latter conclusion concerning the shrinkage of the randomized confidence regions , in view of methods i , i.1 and i.2 , does not hold true when the randomized pivot is constructed using method ii . [ [ appendix - ii - the - effect - of - method - i - on - mardias - measure - of - skewness ] ] appendix ii : the effect of method i on mardia s measure of skewness ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ a number of definitions for the concept of skewness of multivariate data can be found in the literature when the assumption of normality is dropped .mardia s characteristics of skewness for multivariate data , cf .mardia , is , perhaps , the most popular in the literature .this measures of skewness is valid when the covariance matrix of the distribution is nonsingular . for further discussions and developments on mardia s skewness and kurtosis characteristics , we refer to kollo and references therein .the following reasoning shows how small values of , as in method i , result in smaller values for mardia s measure of skewness for the randomized vectors as compared to that of . where and are i.i.d . with respect to the joint distribution .the preceding relation shows that employing method i enables one to make mardia s characteristic of skewness arbitrarily small .sepanski , s. j. ( 1994 ) .asymptotics for multivariate t - statistic for random vectors in the generalized domain of attraction of the multivariate normal law ._ journal of multivaraite analysis _ , * 49 * , 41 - 54 .shao , q. m. ( 2005 ) .an explicit berry - essen bound for student s -statistic via stein s method . in _ stein s method and applications , lect .notes ser .singap . _ * 5 * , 143 - 155 .singapore university press , singapore .
|
In this paper we present randomization methods to enhance the accuracy of central limit theorem (CLT) based inferences about the population mean. We introduce a broad class of randomized versions of the Student t-statistic, the classical pivot for the population mean, that continue to possess the pivotal property and whose skewness can be made arbitrarily small for each fixed sample size. Consequently, these randomized pivots admit CLTs with smaller errors. The randomization framework in this paper also provides an explicit relation between the precision of the CLTs for the randomized pivots and the volume of their associated confidence regions for the mean, for both univariate and multivariate data. This property allows regulating the trade-off between the accuracy and the volume of the randomized confidence regions discussed in this paper.
|
the origin and evolution of spiral arms in disk galaxies is a fundamental problem in astrophysics .the classical theory on spiral arm dynamics is the lin - shu model .the lin - shu model postulates a quasi - stationary standing wave pattern that rotates around the galactic center with a constant pattern speed . in this model , spirals are long - lived and the so - called winding problem is avoidable .however , the wave packet evolves with the group velocity , and it is finally absorbed at the lindbrad resonances .thus , to maintain the density wave it requires some generating mechanisms such as waser or -barrier .with the recent progress of -body simulations of spiral galaxies , the new picture of spiral arm formation and evolution was proposed .contrast to the lin - shu model , the spiral arms in -body simulations are not stationary but transient and recurrent , appearing and disappearing continuously .the basic process in this activity is so - called swing amplification . in a differentially rotating disk ,a leading wave rotates to a trailing one because of the differential rotation .if toomre s is , the amplitude of the rotating wave is enhanced by the self - gravity . with a perturber such as the corotating over - dense region ,the stationary wave patterns are excited by the swing amplification .even if there are no explicitly corotating perturbers , the small leading wave always exists since a disk consists of a finite number of stars .thus , without an explicit perturber , the trailing wave can grow spontaneously due to the swing amplification mechanism .we have been exploring the role of the swing amplification in the spiral arm formation and evolution in detail . investigated the pitch angle of spiral arms using local -body simulations .they found that the pitch angle decreases with the shear rate .this is consistent with the results of the global -body simulations .based on the linear theory of the swing amplification , they obtained the pitch angle and found it agrees with that obtained through -body simulations . 
extended the previous study and investigated the radial and azimuthal wavelengths and the amplitude of spiral arms using -body simulations .they found that the dependencies of these quantities on the shear rate or the epicycle frequency agree well with those according to the linear theory of the swing amplification .these quantitative results indicate that the swing amplification surely plays an important role in the spiral arm formation and evolution .the -body simulations that support the swing amplification mechanisms show the formation of the multi - arm spirals .thus we should be careful to apply the swing amplification mechanism to the grand - design spirals .in addition , the swing amplification model is constructed based on the local approximation .therefore , strictly speaking , the swing amplification mechanisms is not directly applicable to the global structure .instead , we can apply it to the multi - arm spirals or flocculent spirals .however , it has been suggested that the short - term activities of the grand - design spirals in barred galaxies may be explained by the swing amplification .a further study is necessary to clarify the role of the swing amplification in the global structure .however , since the swing amplification itself is a general and fundamental mechanism in the various types of disks , understanding a physical process of the swing amplification is important .for example , the origin of the short - scale spiral arms in saturn s ring , so - called self - gravity wakes , may be formed by the swing amplification ( e.g. , * ? ? ?* ; * ? ? ?the physical process of the swing amplification is complicated because it relates with three fundamental elements , the self - gravity , the shear , and the epicycle oscillation . to shed light on the physical process of the swing amplification, introduced the simple model of the swing amplification .this model is similar to the model proposed by except for the treatment of the velocity dispersion . to introduce the effect of the velocity dispersion of stars, he used the reduction factor instead of the gas pressure term .hereafter we refer to this model as the glbt model .he posited that the motion of a particle can be described by the simple oscillation equation with the variable frequency and argued that his model gives the same result as that of the rigorous model based on the collisionless boltzmann equation by ( hereafter refereed to as jt model ) .while the basic equation in the jt model is complicated , the glbt model is simple and its dynamical behavior is easy to understand . using the glbt model , the swing amplification has been explained in some review papers . 
at first glance , it seems that the glbt model was derived using the equation of motion of a single particle in the rotational frame .however , strictly speaking , it is impossible to derive the basic equation directly .as shown below , instead of the equation of motion of a single particle , we should begin with the lagrange description of the hydrodynamic equation .furthermore , the original numerical calculation method was ambiguous .the reduction factor was used for introducing the effect of the velocity dispersion , but no details of its treatment were given .in some subsequent review papers , the amplification process was explained based on the glbt model , but the derivation of the basic equation and the detailed numerical treatment were not described there .we show that the additional assumption on the vorticity perturbation is necessary for deriving the basic equation and the naive numerical treatment leads to breakdown of the model in the strong shear case such as a keplerian rotation . compared the amplification factor by the glbt model with that by the jt model and found that the dependence of the amplification factor on the azimuthal wavelength has a similar tendency for the flat rotation curve .however , the comparison with the general shear rate was not performed . and performed the similar analyses with the general shear rate .however , they did not compare them with the jt model .furthermore , in all previous works , the wavelength and the pitch angle for the maximum amplification were not investigated in detail .thus , we investigate the detailed dependence of the amplification factor on the epicycle frequency , the pitch angle , and wavelengths and compare them with those in the jt model .this work is the first comprehensive comparison between jt and glbt models .in addition , we clarify the dynamical behavior of the solution in detail and find the synchronization of the epicycle phase that was not pointed out in the previous works explicitly .the outline of this paper is as follows .section 2 examines the basic equation of the glbt model . in section 3 , we solve the equation numerically and investigate the most amplified wave . in section 4, we derive the pitch angle formula by the order - of - magnitude estimate .section 5 is devoted to a summary .we revisit the glbt model of the swing amplification proposed by . in this model ,the evolution of the spiral amplitude is described by the spring dynamics with the variable spring rate .however , he did not describe the complete derivation of the basic equation .thus we describe the derivation in detail .we consider a small thin region of a galaxy and introduce a local rotating cartesian coordinate .the -axis is directed radially outward , the -axis is parallel to the direction of rotation , and the -axis is normal to the - plane .we investigate the evolution of a rotating wave in the lagrangian description .a particle located at at the initial time moves to at time . in the unperturbed state ,the surface density of particles is uniform .the unperturbed position of the particle at time is . 
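The unperturbed equations of motion and circular-orbit solution referred to in this passage did not survive extraction. In the standard local (Hill/shearing-sheet) approximation, and consistent with the circular frequency and Oort constant used here, they take the following form; this is a reconstruction of the standard result rather than a quotation of the paper's display, and sign conventions depend on the orientation of the axes.

```latex
% Local shearing-sheet equations of motion for the unperturbed state
% (standard form; no in-plane self-gravity acts in the unperturbed disk):
\ddot{x}_0 - 2\Omega\,\dot{y}_0 = 4A\Omega\,x_0 , \qquad
\ddot{y}_0 + 2\Omega\,\dot{x}_0 = 0 ,
% with circular-orbit solution
x_0(t) = x_0 , \qquad y_0(t) = y_0 - 2A\,x_0\, t ,
% and small departures oscillating at the epicycle frequency
\kappa^2 = 4\Omega\,(\Omega - A) .
```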
since in the unperturbed statethe self - gravity parallel to the plane vanishes , the equation of motion is where is the circular frequency , and is the oort constant .the derivation is the lagrangian derivative with respect to , which means the time derivative with and fixed .we assume that the unperturbed solution is a circular orbit and is described by we consider the perturbation due to the displacement .the perturbed displacement generally depends on the position and the time as .the particle located at in the unperturbed state moves to the position , then the density changes due to the displacement . considering the mass conservation , the surface density described by the jacobian determinant if the displacement is sufficiently small , neglecting the higher order terms , we rewrite the density perturbation as we consider a single plane wave . at assume that the wave phase is where and are the radial and azimuthal wavenumbers at . using and , we can rewrite the phase as which indicates that the wave number varies with time as where we define by .we assume the sinusoidal wave with the amplitudes and the poisson equation of the gravitational potential is where is the gravitational potential , is the density .the gravitational acceleration vector is where is the gravitational acceleration and .the solution of equation ( [ eq : poi ] ) with the thin disk approximation is where and are the amplitudes of the gravitational potential perturbation and the surface density perturbation , respectively , and .using equations ( [ eq : dendiv ] ) , ( [ eq : gvec ] ) , and ( [ eq : poisol ] ) , we obtain where is the displacement normal to the wave ( figure [ fig : geofig ] ) negative and positive corresponds to the leading and trailing waves , respectively .if the wave is trailing , relates to the pitch angle by . introducing the amplitude of the gravity and the normal displacement , we obtain the amplitude relations from equations ( [ eq : geq ] ) and ( [ eq : xieq_first ] ) , hereafter we omit the indication of the independent variables for each function if they are obvious .the displacement obeys the equation of motion the following relation is always satisfied thus , we have we can obtain equations of the -component in a similar way . using these relations and substituting , into equations ( [ eq : eomlagx ] ) and ( [ eq : eomlagy ] ), we obtain the amplitude equations eliminating in equations ( [ eq : pteomx ] ) and ( [ eq : pteomy ] ) , we get using , we rewrite the equation as introducing the integral constant , we obtain where is the oort constant .this constant originates from the circulation theorems , which is proportional to the vorticity perturbation ( see appendix [ ap : vorticity ] for details ) .using the variables and and eliminating by equation ( [ eq : xiadef ] ) , we rewrite equations ( [ eq : pteomx ] ) and ( [ eq : pteomy ] ) as where we used .similarly , we rewrite equation ( [ eq : cons ] ) as substituting equations ( [ eq : gdef ] ) and ( [ eq : cons2 ] ) into equation ( [ eq : eomx2 ] ) , we obtain where is the epicycle frequency and we used .equation ( [ eq : finalxi ] ) with is the same as equations ( 12 ) and ( 13 ) in .figure [ fig : vorticity ] shows the term in proportion to in equation ( [ eq : finalxi ] ) . when , the vorticity term can be large for small . in all the previous works , was assumed explicitly or implicitly .following the previous works , we focus only on the case with . 
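Two standard shearing-sheet relations invoked (but elided) in the passage above are reconstructed below in the present notation; they are textbook results, though the sign of the drift term depends on the adopted orientation of the axes.

```latex
% Kinematic shearing of a single plane wave and the thin-disk Poisson solution:
k_x(t) = k_x(0) + 2A\,k_y\,t , \qquad
\Phi_a = -\,\frac{2\pi G\,\Sigma_a}{k} , \quad k = \sqrt{k_x^2(t) + k_y^2} .
```

With A > 0 and positive azimuthal wavenumber, the radial wavenumber grows linearly in time, so a leading wave is sheared into a trailing one, which is the geometric basis of the swing.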
the condition is not always satisfied since it depends on the initial condition of the velocity perturbation .the choice of means that we focus on the specified perturbation without the vorticity perturbation .we summarize the amplitude equation without the effect of the velocity dispersion discussed so far .the time evolution of is described by the equation of the oscillation with the time variable frequency where where is the normalized squared frequency , which also can be interpreted as the spring rate of the system .while remains constant , evolves with time as .thus , also changes with time .the frequency corresponds to the frequency of the oscillation without the self - gravity .equations ( [ eq : xieq ] ) and ( [ eq : sp ] ) do not include the effect of the stellar velocity dispersion , which reduces the effect of self - gravity . the hydrodynamic model gives the similar equation : where is the sound velocity .the term comes from the gas pressure . introduced the effect of the stellar velocity dispersion by using the reduction factor where is the integral variable , and are where is the wave frequency in the inertial frame , is the number of spiral arms , and is the radial velocity dispersion ( e.g. , * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?the properties of the reduction factor that are necessary in the following discussion are summarized in appendix [ ap : red ] .the reduction factor means the degree of the reduction of the response of the disk to a perturbation due to the velocity dispersion . in terms of the physical meaning ,the reduction factor should be .strictly speaking , introducing in this way was not justified by the collisionless boltzmann equation , which gives the integral equation .the validity of using is arguable because was derived under the assumption of a tightly wound wave .however , as shown by this simple approach gives the similar results as those of the rigorous analysis by the collisionless boltzmann equation .using and , we rewrite equation ( [ eq : spring0 ] ) as and equation ( [ eq : chi_def ] ) as where is toomre s .note that is in . to calculate , we need to specify . since is the wave frequency in the rotating frame , the equation is satisfied .then , is given by the solution of the following equation as shown in appendix [ ap : exist ] , for , equation ( [ eq : spring ] ) has two real solutions and , which satisfy the inequality . from equation ( [ eq : sp ] ) ,the effect of the self - gravity reduces the frequency .thus , we take .we can numerically obtain by the bisection method or the relaxation method . in general ,the relaxation method is faster than the bisection method , but its convergence criterion is not trivial .thus , in this paper , we adopt the bisection method , which assures the convergence .for and small , equation ( [ eq : spring ] ) does not have a real solution , where the negative reduction factor may arise from the breakdown of the tight - winding approximation .hence we introduce the lower bound to avoid the negative reduction factor and define the modified reduction factor as using this we define as equation ( [ eq : springnew ] ) always has a real solution that is less than or equal to ( see appendix [ ap : exist ] ) .figure [ fig : springg10q15 ] shows as a function of , where and .we obtain by solving equation ( [ eq : springnew ] ) keeping the error less than . 
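The bisection step described above can be written generically: the implicit relation between the oscillation frequency and the reduction factor is scanned for its smallest root and then refined. The right-hand side is left as a user-supplied callable because its exact expression appears in displays elided here; the helper name spring_relation below is hypothetical.

```python
import numpy as np

def smallest_root_bisection(f, nu_max, n_scan=400, tol=1e-6):
    """Return the smallest root of f on (0, nu_max], found by scanning for a
    sign change and refining it with bisection (the lower-frequency branch is
    the physically relevant one, as argued in the text)."""
    nus = np.linspace(1e-8, nu_max, n_scan + 1)
    vals = np.array([f(nu) for nu in nus])
    for i in range(n_scan):
        if vals[i] == 0.0:
            return nus[i]
        if vals[i] * vals[i + 1] < 0.0:                # root bracketed
            a, b = nus[i], nus[i + 1]
            fa = vals[i]
            while b - a > tol:
                m = 0.5 * (a + b)
                fm = f(m)
                if fa * fm <= 0.0:
                    b = m
                else:
                    a, fa = m, fm
            return 0.5 * (a + b)
    return None                                        # no real root for these parameters

# f encodes the implicit frequency equation, e.g. (hypothetical helper):
# f = lambda nu: spring_relation(nu, kx, ky, kappa, Q)
```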
if is not large , is less than .the rotations of the local frame and the wave are in the opposite directions .thus , when the angular velocity of the wave rotation is comparable to the circular velocity of the local frame , the comoving frame with the wave barely rotates against the inertial frame , where the rotational effects such as the coriolis force weakens .therefore , the oscillation frequency becomes small .if is sufficiently large , is negligible . in this case, the comoving frame with the wave is approximately the same as the local rotational frame .then , without the self - gravity , we merely observe the usual epicycle motion with .if we consider the self - gravity , becomes small .when is not large , is negative . then , the wave is amplified .we evaluate as a function of and ( figure [ fig : kxkydepa ] ) .the wavenumbers and are related to and as and where is the critical wavenumber of the gravitational instability .we can show from equation ( [ eq : springnew ] ) that is symmetric about the axes .thus we show the region where and . for and , the large area where exists . in this parameter regime, the amplitude of the wave grows exponentially .as and increase , approaches unity , that is , the motion for large wavenumbers is described by the epicycle oscillation . for ,if or is sufficiently large , the oscillation is stable with any . in order for the wave to be amplified extensively ,it is necessary that should be a moderate value . as increases, the area shrinks .this is because the self - gravity is suppressed by the velocity dispersion .similarly , as increases , the area shrinks .the larger epicycle frequency may lead to weaker amplification in general .this is consistent with the jt model for . in the jt modelthe amplification factor decreases with for .[ cols="^,^,^ " , ] we solve equation ( [ eq : xieq ] ) with equation ( [ eq : springnew ] ) by the 4th - order runge - kutta method .we define the osculating amplitude and phase as with which is described as figure [ fig : xi_d_e ] shows the evolution of for , , and . for , approximated by .thus , approximately oscillates with period , and and are almost constant . for , decreases and becomes negative , where increases exponentially . during the amplification, changes . for , oscillates with period again , and its amplitude is larger than that in the initial state .we also calculate with a different initial condition .while the phase after the amplification is same , the final amplitude is different .the amplification factor depends on the initial condition .we consider the dependence of the amplification factor on the initial condition .we define and as the amplitude and phase at , and as those for , and the amplification factor as . 
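The time integration described in this passage reduces to integrating a single oscillator with a time-variable spring rate. A minimal sketch of the classical 4th-order Runge-Kutta scheme is given below; the squared frequency is passed in as a callable (built, for instance, from the root found by the bisection sketch above), and the osculating amplitude shown here is one common definition rather than necessarily the exact one used by the authors.

```python
import numpy as np

def integrate_swing(omega2, xi0, dxi0, t0, t1, dt=1e-3):
    """Integrate xi'' + Omega2(t) * xi = 0 with classical 4th-order Runge-Kutta.

    omega2 : callable t -> squared time-variable frequency (the spring rate).
    Returns arrays t, xi, dxi/dt."""
    def deriv(t, y):
        xi, v = y
        return np.array([v, -omega2(t) * xi])

    ts = np.arange(t0, t1 + dt, dt)
    ys = np.empty((len(ts), 2))
    y = np.array([xi0, dxi0], dtype=float)
    for i, t in enumerate(ts):
        ys[i] = y
        k1 = deriv(t, y)
        k2 = deriv(t + 0.5 * dt, y + 0.5 * dt * k1)
        k3 = deriv(t + 0.5 * dt, y + 0.5 * dt * k2)
        k4 = deriv(t + dt, y + dt * k3)
        y = y + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
    return ts, ys[:, 0], ys[:, 1]

def osculating_amplitude(xi, v, omega2_vals):
    """One common definition of the osculating amplitude, meaningful only
    where the squared frequency is positive (i.e. outside the swing phase)."""
    w2 = np.asarray(omega2_vals, dtype=float)
    amp = np.full_like(w2, np.nan)
    pos = w2 > 0
    amp[pos] = np.sqrt(xi[pos] ** 2 + v[pos] ** 2 / w2[pos])
    return amp
```

The amplification factor can then be read off by comparing the osculating amplitudes well before and well after the swing.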
in general, the maximum value of is larger than .the wave is most amplified to at the first peak of the trailing wave .we define the maximum amplification factor as .figure [ fig : phase_amp ] shows the amplification factors and , the resulting phase and the pitch angle of the trailing wave as a function of the initial phase .the resulting phase has approximately two values , and , whose difference is about , half an oscillation .we find that the resulting phase is synchronized with the two discrete values independently of .we calculate the peak time when becomes the maximum , and calculate the pitch angle from the bottom panel of figure [ fig : phase_amp ] shows the pitch angle .the pitch angle does not depend on , which is about .the synchronization of means that the peak time is independent of and thus the pitch angle does not depend on the initial condition .thus , the pitch angle is a function of only , , and .the pitch angle is negative in and .figure [ fig : xi_d_ez ] shows the time evolution of for . in this case, the amplification does not occur , and the final amplitude is smaller than the initial one .thus , has a maximum value at the negative time .the corresponding pitch angle is negative .the amplification factor depends on . at or ,the amplification factor has the maximum values and . on the other hand , at or , the amplification factors are very small and the final amplitude is smaller than the initial one . explored the dependence of the amplification factor on the normalized azimuthal wavelength .we reproduce fig . 7 in as shown in figure [ fig : toomrefig7 ] .the same figure is also found in the review papers .the amplification factor calculated here is similar but slightly larger than that in .we can not explain the difference completely since the detailed calculation method is not described in . in his calculationthe initial phase might be or , that is , the wave form for is or . in this paper ,the initial phase is optimized to maximize the amplification factor .thus , our amplification factor may be larger .the amplification factor depends on .the peak amplification factors are and for and , respectively .though the maximum amplification factor depends on sensitively , the optimized barely depends on .the amplification factor becomes large for .the optimized also depends on .we calculate the maximum amplification factor by optimizing for a disk with and .we define optimized as .then , we calculate the pitch angle from the time when , and evaluate the corresponding radial wavelength from we compare these quantities with those in the jt model . in the jt model , , , , and are given as figure [ fig : pitchall ] plots , , , and against .the overall tendency of the dependencies on agrees with those by the jt model .the pitch angle increases with , which is consistent with the jt model .the pitch angle agrees well with for , while for , is larger than .the dependence of on is similar to . while for , the dependence on is weak , for , increases with .however , is smaller than .the azimuthal wavelength decreases with and increases with . 
though the general trend is consistent with the jt model, is smaller than .especially , for there is a large difference .the behavior of is in good agreement with ( equation ( [ eq : fit_amp ] ) ) , but its value is larger .the jt model is more rigorous than the glbt model but is not easy to understand the wave dynamics from the evolution equation .the glbt model is less rigorous because of the intuitive introducing of the reduction factor that is valid for the tight winding approximation .however it gives us an insight into a nature of the swing amplification .these two models compliment one another . in the glbt model, we found that the final oscillation phase is independent of the initial oscillation phase .this means that the oscillation phase is synchronized during the amplification .the essence of swing amplification is the phase synchronization of the epicycle motion . obtained the pitch angle by -body simulation and linear analyses ( jt model ) however , the physical interpretation has not yet been presented . based on the analyses in this paper ,we describe the physical interpretation of the pitch angle formula .we describe the swing amplification in terms of the phase synchronization of the epicycle motion .we consider a single leading wave in a rotating frame .the wave rotates from leading to trailing due to the shear with the angular speed ( e.g. * ? ? ?* ; * ? ? ?* ) , while the rotating frame rotates in the opposite direction to the wave rotation with in the inertial frame .if the wave is tightly wound , that is , the pitch angle is small , the angular speed is very small .then the effects of the galactocentric rotation such as the coriolis and tidal forces prevent the wave from the amplification due to the self - gravity .when , the rotations of the wave and the rotating frame are canceled out in the inertial frame .this happens when , which roughly means that is large such as .then the stabilizing effects of the galactocentric rotation weaken and the self - gravity becomes relatively stronger .therefore , particles are pulled toward the direction normal to the wave and their phases of the epicycle motion are synchronized , and consequently the wave density is amplified .after the wave density reaches the maximum , it starts to decline . at the same time , the density of the other leading wave starts to grow , which reaches the maximum quickly .this activity continues successively .thus if we observe an entire disk , we expect that the dominant wave corresponds to the wave with the maximum amplification factor approximately .this hypothesis is supported by -body simulations .they investigated the wave with the maximum amplification factor from the linear analysis and compared it with the dominant wave by -body simulations .they confirmed that the time - averaged quantities of the dominant wave in -body simulations are consistent with those of the waves with the maximum amplification factor by the linear analysis .therefore we assume that the observed spiral arms correspond to the wave with the maximum amplification factor .the wave pitch angle decreases with time due to the shear as where is the time elapsed since ( e.g. , * ? ? ?* ; * ? ? 
?the observed pitch angle corresponds to the angle when the wave density reaches maximum after the epicycle phase is synchronized .roughly speaking , the synchronization starts when .the density maximization occurs on the timescale of the epicycle period .thus , substituting into , we obtain the pitch angle formula that is the same as equation ( [ eq : fit_pitch2 ] ) except for the numerical factor and dependence .we considered the glbt model introduced by .the formulation of the glbt model is similar to that in except for the treatment of the gas pressure term .we investigated the derivation and calculation procedure in detail . to derive the basic equation, we need to assume that the constant of motion vanishes , which means that we focus on the initial perturbation without the vorticity perturbation .we found that the glbt model has the singularity and can not be applied to the case where .to avoid this singularity , we introduced the lower bound in the reduction factor .we calculated the maximum amplification factor and the corresponding wavelengths and compared them with those in the jt model .the overall dependence on in the glbt model is similar to that in the jt model .however they are slightly different from those in the jt model . in applying to the interpretation of numerical simulations or observations, we should use the glbt model carefully .it seems that the jt model is more reliable because it was derived in a more rigorous manner .we have already confirmed that those in the jt model are in good agreement with those in -body simulations .regardless of this drawback , the glbt model is attractive because its basic equation is simple and gives us an insight into a nature of the swing amplification . using the glbt model, we found the synchronization phenomenon .the oscillation phase after the amplification is independent of the initial oscillation phase .this is because the oscillation phase is synchronized during the amplification .this may be the key process to understand the swing amplification .based on the phase synchronization , we derive the pitch angle formula by the order - of - magnitude discussion .however , this process has not yet been confirmed by -body simulations . in the next paper, we will investigate the particle dynamics in spiral arms using -body simulations .we investigated the elementary process of the swing amplification .however , in order to understand the overall spiral arm formation we should investigate the origin of the leading waves . in the swing amplification ,we postulate the existence of the strong leading waves . if the leading waves come only from the particle noises , they are too small to account for the amplitudes of spiral arms .another mechanism is necessary to generate the strong leading waves .one possible mechanism is the nonlinear wave - wave interaction .however , the role of the nonlinear effect in the generation of the leading wave is poorly understood . in the future study, we will investigate this problem .28 natexlab#1#1 , e. 1984 , , 114 , 319 , j. 2015 , , 454 , 2954 , j. , asaki , y. , makino , j. , miyoshi , m. , saitoh , t. r. , & wada , k. 2009 , , 706 , 471 , j. , saitoh , t. r. , & wada , k. 2013 , , 763 , 46 , g. , lin , c. c. , lowe , s. a. , & thurstans , r. p. 1989, , 338 , 78 . 1989 , , 338 , 104 , j. & tremaine , s. 2008 , galactic dynamics ( 2nd ed . ; princeton , nj : princeton univ . press ) , c. & baba , j. 2014 , , 31 , 35 , b. , dettbarn , c. , & tsuchiya , t. 2005 , , 444 , 1 , m. s. , baba , j. 
, saitoh , t. r. , makino , j. , kokubo , e. , & wada , k. 2011 , , 730 , 109 , p. & lynden - bell , d. 1965 , , 130 , 125 , r. j. j. , kawata , d. , & cropper , m. 2013 , , 553 , a77 , w. h. & toomre , a. 1966 , , 146 , 810 , a. j. 1965 , c. c. & shu , f. h. 1964 , , 140 , 646 . 1966 , proceedings of the national academy of science , 55 , 229 , c. c. , yuan , c. , & shu , f. h. 1969 , , 155 , 721 , d. & kalnajs , a. j. 1972 , , 157 , 1 , j. w .- k .1976 , , 206 , 418 , s. , fujii , a. , kokubo , e. , & salo , h. 2015 , , 812 , 151 , s. & kokubo , e. 2014 , , 787 , 174 , s. & kokubo , e. 2016 , , 821 , 35 , h. 1995 , icarus , 117 , 287 , j. a. 2000 , , 272 , 31 . 2010 , arxiv e - prints , j. a. & carlberg , r. g. 1984 , , 282 , 61 , a. 1969 , , 158 , 899 . 1981 , structure and evolution of normal galaxies ( cambridge : cambridge univ . press ) , 111 , a. & kalnajs , a. j. 1991 , 341we show the physical meaning of the constant . using equations ( [ eq : unpqx ] ) , ( [ eq : unpqy ] ) , ( [ eq : xix ] ) and ( [ eq : xiy ] ), we calculate the velocity field via the displacement vector introducing the variables and and neglecting the higher - order terms , we obtain the -component of the vorticity in the inertial frame is given as where comes from the rotation of the coordinate system .we divide the vorticity by the surface density using equation ( [ eq : cons ] ) , we obtain hence is in proportion to the amplitude of the perturbation of . since there are no nonconservative forces in this two - dimensional system , the vorticity divided by the surface density moving with the wave remains constant with time .we briefly summarize the mathematical properties of the reduction factor that are necessary for numerical calculation . when , the integral form of the reduction factor is ( e.g. , * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ) when , is the pure imaginary number such as where is the real number , and equation ( [ eq : red ] ) is written as figure [ fig : redfuc ] shows as a function of .the necessary properties of that will be used in appendix [ ap : exist ] are the following : is continuous with respect to for ( appendix [ ap : red_domain ] ) , is a monotonically decreasing concave function with respect to ( appendix [ ap : mono ] ) , has the limit values and ( appendix [ ap : asymp ] ) , and is positive for with any ( appendix [ ap : red_pos ] ) .we give the proofs of these properties .we consider the case where .equation ( [ eq : red ] ) has the singularities when , that is , is an integer .we can show that equation ( [ eq : red ] ) with the limit and is finite .thus , they are the removable singularities at ( ) and ( ) . for ( ) ,the asymptotic form is where is a sufficiently small value .since the integral is not equal to zero , the singularity can not be removed .this is the essential singularity .therefore , diverges with the limit of ( ) . for , there are no singularities because the denominator is not equal to zero with .the reduction factor has the real finite value and is continuous for .if we allow the discontinuity and the singularities , we can define the reduction factor for the wider range of .however , the singular behavior seems to be unnatural .thus we consider only .the infinite series of the reduction factor is and its first derivative with respect to is where is the modified bessel function of the first kind . 
because all terms for in the infinite series are negative , the first derivative is negative .the second derivative is where the term with is zero .since we consider , we have for . thus all terms in the infinite series are negative , which means that the second derivative is also negative .therefore , for , the inequalities and are always satisfied .this indicates that is a monotonically decreasing concave function with for .we consider the behavior of with the limit .we rewrite the first term of equation ( [ eq : limf ] ) thus , we obtain .next we consider the behavior of with the limit . from equation ( [ eq : red_series1 ] ) , with the sufficiently large , is approximated by if , equation ( [ eq : inifnite ] ) is approximated as for arbitrary , the following relation exists differentiating twice with respect to and substituting , we obtain thus , for and equation ( [ eq : red_nega_inf ] ) becomes the reduction factor converges to unity as .similarly , we can prove the first derivative converges to zero as .as shown in figure [ fig : redfuc ] , it seems that is always positive for . from equation ( [ eq : red2 ] ), we obtain since the integrand in equation ( [ eq : red11 ] ) is positive , with is positive . considering that is a monotonically decreasing function of , we find that is positive for .in appendix [ ap : ex1 ] , we show that equation ( [ eq : spring ] ) has two real solutions , where one of them is smaller than if .if and is small , equation ( [ eq : spring ] ) has no real solutions .this is caused by the negative reduction factor .physically should be . in appendix[ ap : ex2 ] we show that equation ( [ eq : spring ] ) has a real solution regardless of if we introduce the lower bound of the reduction factor to avoid the negative value .we define as the function is continuous for because is continuous for ( appendix [ ap : red_domain ] ) .equation is the same as equation ( [ eq : spring ] ) . as shown in appendix[ ap : asymp ] , since the limit values of the reduction factor are and , we obtain the limit values of as and .thus , if where exists , has multiple real solutions because of the continuity .the first and second derivatives are for , we have . on the other hand , for , we have ( appendix [ ap : asymp ] ) .thus , the real solution of exists .due to for ( appendix [ ap : mono ] ) , the second derivative is positive , that is , is a monotonically increasing function of . from the monotonicity of , the equation has the one real solution ,that is , is the only one local minimum .therefore , if where exists , equation ( [ eq : spring2 ] ) has two real solutions . substituting into equation ( [ eq : spring2 ] ) , we obtain if with is positive , that is , has two real solutions : , which satisfy the inequality .as shown in appendix [ ap : red_pos ] when , is always positive .thus , the sufficient condition for the existence of the solution is . from equation ( [ eq : kappa0 ] ) , is always less than unity if .hence , the sufficient condition for the existence of the solution is .if and is small , can be larger than unity .for example , for , is larger than unity at . then , the equation may have no real solutions .figure [ fig : springnudif ] shows the dependence of on .the parameters are , and . for , has the two real solutions , and . on the other hand, has no real solutions for . 
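For completeness, the Bessel-function series discussed in this appendix corresponds, up to the normalization convention assumed here, to the standard Lin-Shu-Kalnajs reduction factor (cf. Binney & Tremaine); a truncated numerical evaluation and a zero-clipped modification analogous to the one introduced in the main text can be sketched as follows. The prefactor and the truncation length are assumptions of this sketch rather than a quotation of the paper's expression.

```python
import numpy as np
from scipy.special import ive   # exponentially scaled modified Bessel function I_n(chi) * exp(-chi)

def reduction_factor(s, chi, n_terms=200):
    """Truncated Lin-Shu-Kalnajs reduction factor F(s, chi) for |s| < 1,
    s not an integer, chi > 0 (standard normalization assumed)."""
    n = np.arange(1, n_terms + 1)
    series = np.sum(ive(n, chi) / (1.0 - (s / n) ** 2))
    return 2.0 * (1.0 - s ** 2) / chi * series

def reduction_factor_clipped(s, chi):
    """Reduction factor clipped at a lower bound of zero, to avoid the
    unphysical negative values discussed in the main text."""
    return max(reduction_factor(s, chi), 0.0)

# sanity check: F -> 1 as chi -> 0 and F -> 0 as chi grows large
print(reduction_factor(0.5, 1e-6), reduction_factor(0.5, 50.0))
```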
if the reduction factor is positive , the solution always exists .the negative reduction factor might be unnatural , which means that the sign of the self - gravity term changes and the resulting frequency is larger than .although the velocity dispersion is considered , it would be natural that the gravity reduces the frequency . to avoid the negative reduction factor, we introduce the lower bound we redefine using . from the definition , is not positive . on the other hand , for , because of , we have .thus , must have a solution that is less than or equal to .when the original equation has a real solution , the modified equation has the same solution as that of the original equation .when the original equation does not have a real solution , the modified equation has a solution that is equal to .
|
We revisit the swing amplification model of galactic spiral arms proposed by Toomre (1981). We describe the derivation of the perturbation equation in detail and investigate the amplification process of stellar spirals. We find that the elementary process of the swing amplification is the phase synchronization of the stellar epicycle motion. Regardless of the initial epicycle phase, the epicycle phases of stars in a spiral are synchronized during the amplification. Based on the phase synchronization, we explain the dependence of the pitch angle of spirals on the epicycle frequency. We find the most amplified spiral mode and calculate its pitch angle, wavelengths, and amplification factor, which are consistent with those obtained by the more rigorous model based on the Boltzmann equation by Julian and Toomre (1966).
|
that the speed of earth - directed solar wind and the region from which it originates is tied to the large scale configuration of the photospheric magnetic field has been understood for at least 40 years . and originated the concept of the potential free source surface ( pfss ) model .pfss models assume that the coronal magnetic field is quasi - stationary and can , therefore , be described as a series expansion of spherical harmonics .pfss models have become the staple of solar wind prediction models that have increased in complexity over the years .for example , found an empirical inverse correlation between the super - radial expansion factor of a magnetic flux tube between the photosphere and source surface with the resulting solar wind speed observed at 1 au . then demonstrated that the observed correlation was consistent with simple wind acceleration models involving alfvn waves ( e.g. , * ? ? ? made further enhancements by accounting for stream - stream interactions of the wind en route to 1 au .the end result was a fairly accurate predictive model , that runs stably and continuously , for near real - time space weather forecasting . ]similar pfss - based predictive tools have been developed recently by applying these techniques to observations from soho / mdi .we do note , however , that pfss models make the assumption that the corona is current - free ( enabling the use of spherical harmonics ) , a flawed assumption in a significant fraction of the quiet corona that becomes worse in active regions .the magnetic range of influence " ( mroi ) was conceived by as a diagnostic to understand the partitioning of the doppler velocities observed by soho / sumer in a large equatorial coronal hole ( see , e.g. , fig .3 of * ? ? ?the mroi is simple realization of the magnetic environment in the photosphere , reflecting the distance required to balance the integrated magnetic field contained in any pixel in the magnetogram . in practice, it is calculated by repeated convolution of the input magnetogram with a circular kernel of increasing radius .while the mroi contains no directional information , it allows the partitioning of the magnetic field into open and closed regions . when the mroi is large , the magnetic field at that point is largely unbalanced and the magnetic environment is effectively open ." noticed that , in a coronal hole where the mroi is large , the field imbalanced and open significant outflow is seen in , while , in the quiet sun the mroi is typically small , balanced and closed , the mean doppler velocity is close to zero and the line intensities are a factor of 3 higher than in the open regions .put another way , magnetic environment leads to a preferential energy balance in the upper transition region plasma where the magnetically closed regions are dominated by plasma heating while open regions are dominated by kinetic energy showing decreased emission and strong outflow .this is supported by another perennial signature of coronal hole outflow , reduced hot oxygen charge states ( the ratio ; * ? ? ?* ) . in this letter, we will show that high mroi is not just indicative of outflow on small ( supergranular ) scales but also on the largest scales . 
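The MROI can be computed directly from a line-of-sight magnetogram by the repeated-convolution procedure just described: for each pixel one records the smallest kernel radius at which the enclosed signed flux balances. A minimal Python sketch follows; the specific balance criterion (a sign change of the enclosed flux) and the radius step are assumptions of this illustration rather than the published prescription.

```python
import numpy as np
from scipy.ndimage import convolve

def mroi(bz, r_max, dr=1):
    """Magnetic range of influence: smallest radius (in pixels) at which the
    signed magnetic flux enclosed around each pixel changes sign, i.e. balances.
    Pixels still unbalanced at r_max are assigned r_max (effectively 'open')."""
    out = np.full(bz.shape, np.nan)
    prev = bz.astype(float).copy()                     # enclosed flux at radius ~0
    for r in np.arange(dr, r_max + dr, dr):
        yy, xx = np.mgrid[-r:r + 1, -r:r + 1]
        kernel = (xx ** 2 + yy ** 2 <= r ** 2).astype(float)
        enclosed = convolve(bz.astype(float), kernel, mode="constant", cval=0.0)
        flipped = (np.sign(enclosed) != np.sign(prev)) & np.isnan(out)
        out[flipped] = r
        prev = enclosed
    out[np.isnan(out)] = r_max
    return out
```

Large MROI then flags pixels whose surrounding field remains unbalanced over a long range, i.e. the "effectively open" regions discussed above.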
by computing the mroi for synoptic magnetograms, we can show where the footpoint of earth - directed solar wind from a pfss model attaches .this approach will increase our understanding of the footpoint s movement , its basal energetic state and will result in the improved interpretation of in situ wind measurements .this is the interval studied in our forecast paper , which more - or - less corresponds to carrington rotation 2005 , contains two equatorial coronal holes ( echs ) on opposite sides of the sun that have opposite magnetic polarities .the upper panel of fig .[ fig:2005is ] shows a synoptic magnetogram for cr 2005 downloaded from the mdi archive at stanford university ] and downsampled by a factor of four - from pixels to pixels .the first 30 or so of the rotation lacks mdi data .the blue dots track the footpoint of the sub - terrestrial field line determined from pfss extrapolations for the period ( one dot per 96 min mdi magnetogram ) , labeling the progression of time ( right to left ) at the footpoint closest to noon each day . repeating the analysis of , we trace 32 additional field lines ( arranged on the perimeter of an ellipse on the source surface with semi - major axes ) back to the photosphere .the additional field lines allow us to compute the standard deviation of the separation between their footpoints and that of the sub - terrestrial field line thus providing error estimates for the location of each sub - terrestrial footpoint .the pfss extrapolation - derived projection of the heliospheric current sheet at the source surface down onto the photospheric magnetogram is shown in white while the boundaries of the coronal holes are shown in yellow .the coronal hole contours are derived from kitt peak spectroheliograms of 10830 .we clearly see that the footpoint progresses smoothly only in short segments across the map it then jumps from one segment to the next , with jumps of over both crossings of the neutral line .the lower panels of fig .[ fig:2005is ] shows in situ data observed at 1 au by ace , from top to bottom : the solar wind speed ; magnetic field strength and ( color - coded ) azimuth angle ( mag ; * ? ? ?* ) ; proton temperature & density , and oxygen charge state ratio ( swims ; * ? ? ?the time shown in these panels is that _ at ace _ , and runs from right to left , as in the synoptic magnetogram .the blue dashed vertical lines correspond to the time of the first magnetogram after the north - south crossing of the heliospheric current sheet occurred ( 2003 july 20 , 03:10ut ) and the return crossing ( 2003 august 1 , 08:03ut ) . throughout this letterwe adopt use the convention that blue lines correspond to crossings of the heliospheric current sheet , while red lines correspond to when the footpoint enters or exits coronal holes .the various red lines correlate well with coronal hole outflow as defined by low ratio and high .also , after the shocks ( blue dashed lines ) show the classic signature of a current - sheet crossing : as the magnitude increases ( field - line packing ) , the field direction rotates smoothly through over .again , it is all well and good explaining the correlations between the synoptic magnetogram and in situ data _ a posteriori _ , but can we explain why the pfss footpoint should jump of longitude into the coronal hole , or the other jumps for that matter ? 
herein lies the usefulness of the mroi : we calculate the mroi using the synoptic magnetogram that is the first panel of fig .[ fig:2005is ] as input and show the result in fig .[ fig:2005 ] .we see that the mroi map is very patchy and that the footpoint jumps from one patch of ( relatively ) high mroi to the next .the northern and southern activity belts are still immediately apparent , but the two main equatorial coronal holes have the highest values . when there is no close `` island '' ( within some 30 ) of higher mroi , the footpoint stays anchored or moves slowly within the island of higher mroi ( e.g. , july 1218 ) , but when there is no real dominant region , the footpoint moves rapidly across the solar photosphere ( e.g. , july 1922 , where it is also crosses the neutral line ) .[ fig : wsm ] combines an mroi map as in fig .[ fig:2005 ] with the same interplanetary variables as fig .[ fig:2005is ] , but for the trailing half of cr 1912 and the leading half of cr 1913 .this is the original `` whole sun month '' ( wsm ) .since ace was not launched until august 1997 , the in situ data comes from the omni database for the plasma and magnetic field data , and the composition data comes from the sms experiment on wind .the lower temporal resolution of the data is apparent , and ( perhaps not unrelated ) , there is less of a clear signature of coronal hole outflow in the bottom panel of fig .[ fig : wsm ] .again , we show three dashed vertical lines to indicate the reference times of large - scale footpoint motion ( from right to left ) : the footpoint jumps across the heliospheric current sheet from edges of the southern polar coronal hole ( pch ) to the northern pch ( 1996 august 19 , 09:35ut ) ; the footpoint jumps into the narrow extension of the northern pch see , e.g. , fig . 2 of colloquially called the `` elephant s trunk '' ( 1996 august 24 , 12:47ut ) ; the footpoint leaves the pch extension , re - crosses the heliospheric current sheet and attaches to the trailing edge of ar 7986 ( 1996 august 29 , 06:23ut ) .the changes in solar wind conditions corresponding to the two current sheet crossings and jump into the coronal hole are again clearly visible .the whole heliospheric interval ( whi ; * ? ? ?* ) comes close to the absolute nadir of the solar cycle , when there were weeks with no magnetic regions of any size on the solar disk .it was somewhat of a surprise , therefore , to see the `` train '' of three equally spaced active regions ( noaa ars 10987 , 10988 and 10989 ) across the disk .[ fig:2068 ] again overlays the mroi with 10830 - defined coronal hole boundaries , pfss footpoints and heliospheric neutral line .we see how little magnetic field there is on the disk by the very low mroi values , and general lack of contrast .the northern coronal hole is completely unbalanced , as is ar 10987 .indeed , the latter s influence extends from 2008 march 21 , when the footpoint jumps by 94 ( ! ) of longitude to the lead edge of ar 10987 . at this time, the carrington longitude of the footpoint ( 270 ) corresponds to a heliolongitude of w65 ; the path of the field line through the corona is highly convoluted . 
while the yellow contours suggest that the footpoint is actually between the core of the active region and the ech that precedes it , inspection of euv data ( and the in situ panels of fig .[ fig:2068 ] ) suggest that the footpoint is in the ech .the footpoint stays close to , or on , ar 10987 until late on 2008 march 29 , where the footpoint jumps by 60 of longitude over the neutral line to the trailing edge of ar 10989 .after 3.75 days connected to ar 10989 , the footpoint moves into a large extension of the southern polar coronal hole .however , the high speed coronal hole outflow catches up to the ( notably ) slow active region outflow , and entry into the coronal hole occurs ( red dashed line ) just less than two days after the current sheet crossing .both our coronal hole entry and exit predictions are well matched to the observations .the agreement of predicted and actual timing of the shocks is less good than the other intervals studied above .the first shock , corresponding to the 90 jump to ar 10987 , is too early by 32.4 hours while the second shock , corresponding to the jump from ar 10987 over the neutral line to ar 10989 is in much better agreement with prediction .there are two obvious explanations for being off by over a day : ( 1 ) we fail to adequately account for the ( tortuous ) path of nascent solar wind through the corona from w65 to the sub - terrestrial point ; and ( 2 ) the errors introduced from observing w65 with a line - of sight magnetogram leads to errors in the pfss model i.e ., in reality the footpoint does nt jump until a day or so later .another , related possibility is if the synoptic map used to compute fig . [ fig:2068 ] is generated from thin strips from a large number of magnetograms over the full rotation when each ( carrington ) longitude is the central meridian as seen from soho .it fails to account , therefore , for the evolution of ar 10987 in the 5.5 days it takes took to rotate from w65 to disk center .in three examples presented here , over all phases of the solar cycle , synoptic mroi images provide a striking and easy - to - interpret map of the earth - directed solar wind source region . when there are islands of high mroi , the footpoint remains connected to that island until another , more `` enticing , '' island rotates closer to the central meridian .we have seen that this battle for magnetic supremacy leads to the solar maximum wind structure that has many staccato jumps due to the distribution of large mroi regions on the disk , while at solar minimum the wind has a largely repeating structure with the footpoint meandering from one supergranular vertex to another at disk center or in the polar regions .we should note that even though the scaling of the mroi correlates well with nascent outflow velocity in the upper transition region / low corona ( i.e. , doppler shifts ; * ? ? 
?we would not expect a quantitative correlation between mroi and solar wind velocity at 1 au both strong active regions and deep coronal holes have large mroi values , but they give rise to very different winds speeds and composition .however , such verification is beyond the scope of this letter and is reserved for future work .clearly , equatorial coronal holes have a significant impact on the in situ wind parameters observed .based on the evidence presented in mcintosh et al .( 2006 , 2007 ) the boundary of the equatorial holes observed in the upper transition region is ( spatially ) abrupt in that the spectroscopic diagnostics differ dramatically between the open and closed magnetic regions on the scale of a few arcseconds on crossing the boundary .however , what is not clear is the extension of that boundary into interplanetary space , the extension to the inner heliosphere , the effects of interchange reconnection and the jumping of sub - terrestrial field line footpoint to flux regions with large mroi . in fisk s model ,the open field line moves over the photosphere a distance that is determined by the size of the interacting loop .one might argue that the steady walk of the footpoint through the coronal holes in all three examples supports fisk , where any closed loops are likely to be small , but what of the more heterogeneous quiet sun ? can the jumps from one large mroi region to another ( in equatorial holes , active regions or even across the neutral line ) be explained by these mroi regions forming `` basins of attraction '' at significantly larger distances from the sun than the supergranular scales for which the mroi was designed ?one might speculate that a `` likelihood of jumping to here '' function would involve the strength of the mroi at a point , and a diminishing function of distance between the current footpoint and the candidate point , either across the photosphere , or along a loop ( i.e. , up into the corona , c.f .* ) , but much more work is needed ( and planned ) to fully understand the nature of large - scale footpoint jumps . while the biggest drivers of geostorm activity are coronal mass ejections ( which no static , synoptic - based model can allow for ) we know that the stream - stream interactions caused by the footpoint jumping from point to point at solar minimum can certainly generate shocks with sufficient momentum to impact the geomagnetic indices , , etc .( e.g. , * ? ? ?estimating the timing of these jumps accurately is critical for any predictive model of space weather .while the essence of the mroi jump conditions are not yet known , the mroi is a visualization tool that offers a great deal of predictive potential ; with the study of many epochs we hope to develop predictive intuition and will undoubtedly advance our interpretation and prediction of solar wind conditions observed at 1 au . as an operational concern, the time needed to generate mroi maps for automatically updated synoptic magentograms , takes about 20 hours for conditions close to solar max , and about 4 hours for a solar minimum magnetogram on a standard desktop workstation , for a grid , as shown in figs .[ fig:2005][fig:2068 ] . 
reducing the grid size by a factor of 4 reduces typical computation times to about two hours , but the computation naturally lends itself to parallelization and can be done much faster , if need be . the work presented in this letter was supported by the national aeronautics and space administration under grants issued from the living with a star targeted research & technology program ( nnh08cc02c to rjl and nnx08au30 g to swm ) . _ soho _ is a mission of international cooperation between esa and nasa . the national center for atmospheric research is sponsored by the national science foundation .
|
we present a new method of visualizing the solar photospheric magnetic field based on the `` magnetic range of influence '' ( mroi ) . the mroi is a simple realization of the magnetic environment in the photosphere , reflecting the distance required to balance the integrated magnetic field contained in any magnetogram pixel . it provides a new perspective on where sub - terrestrial field lines in a potential field source surface ( pfss ) model connect to the photosphere , and thus the source of earth - directed solar wind ( within the limitations of pfss models ) , something that is not usually obvious from a regular synoptic magnetogram . in each of three sample solar rotations , at different phases of the solar cycle , the pfss footpoint either jumps between isolated areas of high mroi or moves slowly within one such area . footpoint motions are consistent with fisk s interchange reconnection model . received december 12 , 2008 ; accepted april 2 , 2009 .
|
uncovering the complex mechanisms involved in gene regulation remains to be a major challenge .while the biochemical processes involved in the expression of a single gene are increasingly well understood , the interplay of whole networks of genes poses additional questions and it is unclear what level of system - specific detail has to be taken into account to describe a gene regulatory network .one promising approach to modeling system wide dynamical states of a network is to go to an abstract level of description which may even include discrete deterministic models such as boolean or threshold networks .we here extend a recent model of the yeast cell cycle dynamics that is successfully based on this approach .the cell cycle of the budding yeast saccharomyces cerevisiae is a widely studied example of a robust dynamical process . in ,the yeast cell cycle was modeled in the framework of a discrete threshold network . from the data in , eleven genes that play a key role in the cell cycle processwere identified along with their known ( direct or indirect ) interactions .the activity of a certain gene is modeled as a two - state system , with values 1 ( active ) or 0 ( inactive ) . using a threshold model of interactions , the biological sequence of activity states in the processis exactly reproduced .the authors also find considerable dynamical robustness properties that can be traced to the properties of the basin of attraction of the biological fixed point .remarkably , these results were obtained using a discrete time model , where each discrete time step is defined by the intervals between activity changes . in the model, the activity state of every gene is determined solely by the state of its transcription factors at the previous time step .it is remarkable , that in this case at least , the biochemical stochasticity of gene regulation can be neglected in the model . in particular ,as was shown in , attractors under synchronous dynamics can be unstable if stochasticity is imposed on the transmission times . in this work, we investigate whether the cell - cycle process is stable under such perturbations .investigations of dynamical robustness have been discussed in a variety of different biological systems , such as segmentation in the fruit fly , or two - gene circadian oscillators .different conceptions of the word ` robustness ' have been used .robustness against mutations means that a specific process can be performed reliably by a system even if some changes to the structure of the system are conducted .the yeast cell - cycle is remarkably robust in this sense .other approaches to assessing robustness in biological networks include local stability and bifurcation analyses , stability under node state perturbation and probabilistic boolean networks . in this workwe will concentrate on the robustness under stochastically varying processing times ( for protein concentration buildup and decay ) as was considered in . other models of the yeast cell - cycle include molecular models of major cdk activities in start and finish states and of s - phase entrance in . 
in differential equations have been used to fit time - courses of protein concentration levels in the yeast cell - cycle network .[ cols="^,^,^,^,^,^,^,^,^,^,^,^,^ " , ] following , a network of eleven nodes is used to describe the cell cycle process .they are given in table [ tsequence ] , along with the synchronous sequence of activity states recorded in that work .using a technique introduced in we extend that model to include fluctuating transmission delays and to allow for real numbers for protein concentrations levels ( for protein ) .we keep the characteristics of the description of , that is the effect of protein on the transcription of protein is determined by a discrete activity state ( ` active ' or ` inactive ' ) of protein . in our continuous description , we set the activity state of a protein to 1 if the concentration is above a certain threshold ( ) , otherwise it is 0 .the transmission function that determines the transcription or degradation of protein is given by where is the transmission delay time that comprises the time taken by processes such as translation or diffusion that cause the concentration buildup of one protein to not immediately affect other proteins .the numbers determine the effect that protein has on protein .an activating interaction is described by , inhibition by . if the presence of protein does not affect expression of protein , . if , the value of depends on whether the node is modeled as a self - degrader .self - degraders are those nodes that are down - regulated by external processes ( cln3 , cln1,2 , swi5 , cdc20/cdc14 , mcm1/sff ) .self - degrader nodes will take a value whereas the transmission function of non - self - degraders is left unchanged , i.e. the last time when determines the state at time .we now describe the time evolution of the system of genes by the following set of delay differential equations for the simple transmission function given above , this equation can be easily solved piecewise ( for every period of constant transmission function ) , leading to charging behavior of the concentration levels , ).,width=321 ] this has the effect of a low - pass filter , i.e. , a signal has to sustain for a while to affect the discrete activity state . a signal spike , on the other hand , will be filtered out. concentration buildup in our model is depicted in figure [ fcharging ] . here , the transcription factor of a protein is assumed to be present in the time span to ( upper panel ) .the production of the protein starts after the delay time ( here ) and the concentration crosses the critical level of at ( central panel ) , switching the activity state to `` on '' ( lower panel ) . in the case of very fast build - up and decay ( in eq .( [ solution ] ) ) and with the delay time set to one ( ) , we exactly recover the synchronous dynamics of . thus , our described model is a simple generalization of the synchronous case to allow for a continuous time description .we now ask the following question : is the original sequence stable under stochastic timing noise ( stochastically varying signal delay times ) or can the noise cause the system to assume different states ? as the sequence from ( reproduced in table [ tsequence ] ) runs into the stationary fixed point and an external signal is needed to trigger the starting state again , we create a repeating cycle of states ( limit cycle ) by explicitly adding the rule that cln3 production is triggered as soon as the final state in the synchronous sequence is reached. 
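The charging dynamics just described can be sketched with a simple forward-Euler integration. The delay differential equation is elided above, so the sketch assumes the relaxation form dx_i/dt = (B_i(t - tau) - x_i)/t_c, which reproduces the piecewise-exponential charging behaviour; the eleven-gene interaction table is flattened in the text and the numerical threshold is not given, so the example uses a hypothetical two-node motif, assumes a threshold of 0.5, and assumes that a self-degrader relaxes to zero when its summed input vanishes (ties for ordinary nodes are simplified to "hold the current discrete state" rather than the last non-zero input). It illustrates the low-pass behaviour only and does not reproduce the yeast network.

import numpy as np

def simulate(weights, self_degrader, x0, t_delay=1.0, t_c=0.1,
             theta=0.5, dt=0.001, t_end=20.0):
    """Forward-Euler integration of the delayed charging dynamics,
    assuming dx_i/dt = (B_i(t - t_delay) - x_i(t)) / t_c.

    weights[i, j] is the effect of protein i on protein j.
    B_j is 1 if the summed input from currently active regulators is
    positive and 0 if it is negative; for a vanishing input a
    self-degrading node relaxes to 0 while an ordinary node holds its
    current discrete state (a simplification of the rule in the text).
    """
    n = len(x0)
    steps = int(t_end / dt)
    lag = int(round(t_delay / dt))
    x = np.zeros((steps + 1, n))
    x[0] = x0
    b_hist = np.zeros((steps + 1, n))           # transmission-function history
    for k in range(steps):
        active = (x[k] >= theta).astype(float)  # discrete activity states
        drive = weights.T @ active              # summed regulatory input
        b = np.where(drive > 0, 1.0,
            np.where(drive < 0, 0.0,
            np.where(self_degrader, 0.0, active)))
        b_hist[k] = b
        b_delayed = b_hist[max(k - lag, 0)]     # constant early history
        x[k + 1] = x[k] + dt * (b_delayed - x[k]) / t_c
    return x

# hypothetical two-node motif (NOT the eleven-gene network): node 0 activates
# node 1, node 1 inhibits node 0, and node 0 is a self-degrader
w = np.array([[0.0, 1.0],
              [-1.0, 0.0]])
x = simulate(w, self_degrader=np.array([True, False]), x0=np.array([1.0, 0.0]))
print(np.round(x[::2000], 2))   # node 1 charges above theta while node 0 decays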
we will investigate whether this limit cycle is inherently stable or whether it needs the perfect synchronization of the artificial synchronous update . in this contextit is important to note that the stability of the complete cell - cycle system also depends on the behavior of all other proteins involved .however , the stability of the core genes is most important , as they regulate the other proteins .only if the regulators perform reliably , the system as a whole can be robust . to compare the time series of our simulations with the discrete time steps of the synchronous case ,we record a time step whenever the system keeps all its activity states constant for a time span of at least . with every switch of activity states ( say , at time ) we check whether the transcription of any other protein p is affected .if so , the concentration level of protein p will begin to rise at time where denotes a uniformly distributed random number between and that perturbs the delay times .our simulation time is not directly related to the actual time intervals of the biological processes involved .however , we are not so much interested in the specifics of the time course but rather in the properties of stability and for this assessment it is not important how long the actual phases take .our model captures two principles of real world gene regulatory networks : interactions occur with a characteristic time delay ( denoted by ) ; and we use continuous concentration levels and implement low pass filter behavior due to protein concentration buildup with a characteristic time .first , we check if the system reproduces the synchronous sequence under small perturbations of the delay time .thus we stay in the regime where is significantly smaller than the characteristic protein decay or buildup time . in the main simulationruns we set , and , but any numbers that fulfill give the same results .we find that the synchronous sequence of states is reliably reproduced by this stochastic dynamics .even long simulation runs of can not push the system out of the original attractor .this means that the biological sequence is absolutely stable against small perturbations . to understand this , we look at the synchronous sequence of states in table [ tsequence ] . in steps , , , , , ( marked blue in the table ) only a single protein changes its activity state . if all steps were of this kind , fluctuations of the event times would not be able to destroy the attractor at all .states marked in red denote events where multiples switches happen at the same time . to illustrate this point ,let s assume two nodes switch their states at times and ( we call this a ` phase lag ' ) .the system thus assumes an intermediate state in the time span between and .approximately at time the next switches occur and due to the intermediate state it is possible that proteins switch their states which would normally be constant in this step . because of the charging behavior of the concentration levels , these ` spikes ' will be filtered out . 
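The bounds of the uniform random number that perturbs the delay times are elided in the text; the sketch below assumes it is drawn from U(-eps, eps), so that a switch at time t schedules its downstream effects at t + tau(1 + xi). Only the jittered event scheduling is illustrated here (the continuous charging dynamics is in the previous sketch), and the propagation rule of the toy usage is a made-up two-node motif, not the yeast network.

import heapq
import random

def jittered_delay(tau, eps, rng):
    # the bounds of the uniform perturbation are not given in the text;
    # assume xi ~ U(-eps, eps), i.e. delay = tau * (1 + xi)
    return tau * (1.0 + rng.uniform(-eps, eps))

def schedule_switches(initial_events, propagate, tau=1.0, eps=0.3,
                      t_end=50.0, seed=1):
    """Event-driven bookkeeping of activity switches with noisy delays.

    initial_events : list of (time, node, new_state) tuples
    propagate(state, node) -> list of (target_node, new_state) switches
        triggered by the change of `node`, given the current state dict
    Returns the time-ordered switching history.
    """
    rng = random.Random(seed)
    state, history = {}, []
    queue = list(initial_events)
    heapq.heapify(queue)
    while queue:
        t, node, value = heapq.heappop(queue)
        if t > t_end:
            break
        if state.get(node) == value:
            continue                            # redundant switch, nothing to do
        state[node] = value
        history.append((t, node, value))
        for target, new_state in propagate(dict(state), node):
            heapq.heappush(
                queue, (t + jittered_delay(tau, eps, rng), target, new_state))
    return history

# hypothetical two-node motif whose switches simply toggle each other;
# it demonstrates only the jittered scheduling, not the yeast network
def toggle(state, node):
    other = 1 - node
    return [(other, 1 - state.get(other, 0))]

print(schedule_switches([(0.0, 0, 1)], toggle)[:6])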
the only way to destroy the attractor isthus when the phase lags add up in a series of steps .this can not happen in the yeast cycle , however , due to the states marked in blue color in the table .when only one protein changes its state in a time step , all divergence of signal times will be reset and the synchrony is restored .we therefore call these steps ` catcher states ' as they remove phase - lags from the system .now that we know that small perturbations can not drive the system out of the synchronous attractor , we want to investigate stability under stronger noise . to address this question, we have to loosen our definition of stability . up to now, we have requested the system to follow the exact sequence of states of the synchronous dynamics .it is clear that this strict stability can not be obtained if we increase the noise to be more than half of the transmission delay itself , because two nodes switching at the same synchronous time step can receive switching times that differ by more than .the intermediate step taken when only one node has switched obviously violates the stability criterion . to assess the stability of the system under strong noise, we employ a different stability criterion .we let the system run with the sole constraint that the stationary state will be assumed regularly for a time span of at least .any fluctuations occurring inbetween two incidences will be tolerated , as long as the system finds its way to the state of the cell cycle in which growth occurs until the cell size signal is triggered .although this might seem too loose a criterion for robust biological functioning , one has to remember that the cell - cycle process is also backed up by a system of checkpoints that can catch faulty system states .we investigate here the inherent stability of the system disregarding these checkpoints but at the same time allowing more variability in the sequence . .black boxes denote active states , white means inactive . on a micro - time levelthe effect of fluctuations is visible , but on a larger time scale the dynamics is very stable.,width=321 ] remarkably , with noise of the order of the delay time and largely independent of the filter used , the system reliably stays in the biological attractor .an example run with , and ran for a time of following the biological attractor sequence ( in the wider sense mentioned above ) .a typical time span of this run is shown in figure [ stablenoise ] .this is a surprising result , because in general one expects a system to be able to leave its attractor sequence under such strong noise if a series of multi - switch events ( steps 5 to 8) is involved anywhere during the sequence .our proposed criterion is not trivially fulfilled : by changes in the sequence of switching events or by delaying one of several events that occur at the same synchronous time step , a new sequence could be triggered .this could force the system to jump into one of the other six fixed points identified in without the possibility to return to the biological sequence . in figure [ strongnoise ]we show an example of a simulation run with extremely strong noise that shows that the system can jump out of the attractor . however , it is also apparent that even under such strong fluctuations the system runs quite regularly until it finally loses its attractor sequence . 
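The loose stability criterion used under strong noise can be made concrete as a small checker over a recorded event history. The numerical thresholds (the minimal holding time of the stationary state and the maximal tolerated gap between its visits) are not stated in the text, so both are left as free parameters of this sketch.

def returns_to_stationary(history, target, min_hold, max_gap, t_end):
    """Loose stability check: the stationary state `target` must be held for
    at least `min_hold` time units, and such visits must recur with gaps
    never exceeding `max_gap` (both thresholds are assumptions here).

    history : time-ordered list of (time, state) pairs; the state is assumed
              constant between consecutive events.
    """
    events = list(history) + [(t_end, None)]
    end_of_last_visit = 0.0
    for (t0, state), (t1, _next) in zip(events, events[1:]):
        if state == target and (t1 - t0) >= min_hold:
            if t0 - end_of_last_visit > max_gap:
                return False
            end_of_last_visit = t1
    return t_end - end_of_last_visit <= max_gap

# toy usage: states recorded as tuples of activity values
trace = [(0.0, (0, 0)), (2.0, (1, 0)), (3.0, (0, 0)),
         (9.0, (1, 1)), (10.0, (0, 0))]
print(returns_to_stationary(trace, target=(0, 0),
                            min_hold=1.0, max_gap=8.0, t_end=12.0))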
.after some repetitions of the biological state sequence the attractor cycle is lost and a fixed point is assumed.,width=321 ] we now quantify the stability of the biological pathway under such strong noise .how likely is it for the system to lose its biological sequence and to run into a different fixed point ? to address this question , we initialize the system at the start state again and check whether it completes the cycle . again , we use the lose criterion described above , which means we only request the system to reach the start state again . in figure [ errorsvsnoise ]we show the ratio of erroneous runs of the biological pathway plotted against the noise level . it can be clearly seen that for reasonable noise levels the ratio of sequence runs not ending in a biological fixed point is very small .in fact , even with unrealistically high noise levels of or more ( which amounts to arbitrary update times ) , only in a quarter of the runs the system jumps out of the biological state sequence .the by far dominating cause for this ( very small ) instability is the first step ( cf .table [ tsequence ] ) where both sbf and mbf are activated by cln3 .if the cln3 concentration is degraded before activating the transcription of either sbf or mbf , the system loses the biological sequence .if we explicitly force cln3 activity to sustain long enough to make sure that both sbf and mbf are produced , even this small instability vanishes and the system assumes practically complete stability for all reasonable noise levels ( erroneous runs at ) .this superstability is due to the fact that all proteins keep their activity states for an extended time .extremely strong noise is therefore needed to delay a single activity switch long enough to significantly perturb the system .we have tested all results with a wide variety of parameters . with a fixed number for the delay time , only the noise level and the characteristic protein buildup time can be adjusted .our results are completely robust against changes of , even removing the filter completely or setting it an order of magnitude larger than the delay time does not affect the robustness properties described above .as we have shown in the previous section , the yeast cell - cycle control network is astonishingly stable against fluctuations of the protein activation and degradation times .the network and the resulting dynamics exhibit a number of features that cause this stability : as was already discussed in , the basin of attraction is very large , making it unlikely that an intermediary state belongs to one of the other fixed point basins .a second remarkable property is that all node states are sustained for at least three ( synchronous ) steps , making the system less dependent on the specifics of the concentration buildup procedure .third and most important for the observed superstability under noisy transmission times , is the presence of the catcher states which prevent the system from gradually running out of synchrony .thus , we have seen that without even taking into account the biological checkpoint mechanisms that give additional stability and error - correction features , the system shows a strong inherent robustness against intrinsic fluctuations . in this example of the yeast cell - cycle dynamics , potential mechanisms that provide robustness under biological noise can be observed . 
a system without an external clock ( or any other external control ) can still run reliably if it has intrinsic features that enforce robustness : catcher states , persistence of states and an attractor landscapes that minimizes the possibilities to escape the biological sequence . to conclude, we have investigated the stability of the cell - cycle network by extending the model of li et al . to allow asynchronous updating of the activity states of the genes .we find that the system exhibits robust behavior under noisy transmission times . even without taking into account the checkpoint mechanisms that give additional stability and fallback features ,the system shows a strong inherent robustness that aids in maintaining reliable functioning .chen , k .- c . ,wang , t .- y . , tseng , h .- h . , huang , c .- y .f. , and kao , c .- y .a stochastic differential equation model for quantifying transcriptional regulatory network in saccharomyces cerevisiae .12:28832890 .hirata , h. , yoshiura , s. , ohtsuka , t. , bessho , y. , harada , t. , yoshikawa , k. , and kageyama , r. ( 2002 ) .oscillatory expression of the bhlh factor hes1 regulated by a negative feedback loop ., 298(5594):840843 .lee , t. i. , rinaldi , n. j. , robert , f. , odom , d. t. , bar - joseph , z. , gerber , g. k. , hannett , n. m. , harbison , c. t. , thompson , c. m. , simon , i. , zeitlinger , j. , jennings , e. g. , murray , h. l. , gordon , d. b. , ren , b. , wyrick , j. j. , tagne , j .- b . ,volkert , t. l. , fraenkel , e. , gifford , d. k. , and young , r. a. ( 2002 ) .transcriptional regulatory networks in saccharomyces cerevisiae . , 298(5594):799804 .spellman , p. t. , sherlock , g. , zhang , m. q. , iyer , v. r. , anders , k. , eisen , m. b. , brown , p. o. , d. , b. , and futcher , b. ( 1998 ) .comprehensive identification of cell cycle - regulated genes of the yeast saccharomyces cerevisiae by microarray hybridization . , 9:32733297 .
|
gene regulatory dynamics is governed by molecular processes and therefore exhibits an inherent stochasticity . however , for the survival of an organism it is a strict necessity that this intrinsic noise does not prevent robust functioning of the system . it is still an open question how dynamical stability is achieved in biological systems despite the omnipresent fluctuations . in this paper we investigate the cell - cycle of the budding yeast saccharomyces cerevisiae as an example of a well - studied organism . we study a genetic network model of eleven genes that coordinate the cell - cycle dynamics using a modeling framework which generalizes the concept of discrete threshold dynamics . by allowing for fluctuations in the transcription / translation times , we introduce noise in the model , accounting for the effects of biochemical stochasticity . we study the dynamical attractor of the cell cycle and find a remarkable robustness against fluctuations of this kind . we identify mechanisms that ensure reliability in spite of fluctuations : ` catcher ' states and persistence of activity levels contribute significantly to the stability of the yeast cell cycle despite the inherent stochasticity . + _ keywords _ : gene regulatory network ; yeast cell cycle ; boolean models ; computer simulations ; robustness
|
jaynes formalism ( jf ) of the maximum entropy principle applied on the boltzmann - gibbs - shannon ( bgs ) entropy reproduce correctly the exponential maximum probability distribution functions ( pdfs ) , obtained from the theory of thermodynamics . is the maximum configuration function of the bgs - statistical ensemble and are the associated configuration probabilities .this celebrated result has established the above formalism as a standard procedure for the computation of the maximum pdfs for an arbitrary generalized entropic structure , where is a set of parameters . for specific values of these parameters , ,the quantity tends to in eq .( [ bgs ] ) . in jfthe probability functional is extremized , namely , .the constants and are the lagrange multipliers associated to the normalization and mean value constraint ( , ) , respectively .for we have the ordinary mean value ( omv ) definition . in the frame of generalized thermostatistics a common definition of the mean value is , where is the generalization parameter of the respective entropy .this definition is called escort mean value ( emv ) . is the observed quantity for a system under consideration .some well known generalized entropic structures , which have been explored within jf are the one - parametric tsallis , rnyi and nonextensive gaussian ( neg ) ones , defined as where are the respective maximum configuration functions and is the deformed logarithm defined by tsallis and collaborators all of these entropies are based on the generalized logarithmic function ( [ tsallislogarithmicfunction ] ) and its inverse function }_{+}^{\frac{1}{1-q}}\qquad { \left ( \lim_{q\ra1}\exp^{{\mathrm{t}}}_{q}(x)=\exp(x ) \right)},\end{aligned}\ ] ] with }_+={\mathrm{max}}\{0,x\} ] is the -lambert function .the neg - entropy has not been studied explicitly for omv .however , it is easy to see that in this case one would obtain ordinary exponential distributions , since in a recent study one of us ( to ) has constructed two generalized multinomial coefficients ( gmc ) , and , based on different generalized factorial operators , and , from which , and may be derived . and are two sets of parameters , and .then , the relation between these parameter sets and the parameters in , and is given by and .the results about the maximum pdfs for the tsallis entropy computed from are in accordance with eq .( [ tsallisexponentialfunction1 ] ) and computed from are in accordance with eq .( [ tsallisexponentialfunction2 ] ) .furthermore , in both cases the -ranges were determined , namely ] for both gmc s ) and the nonextensive gaussian entropy is maximized for - and -exponential distributions in eqs .( [ tsallisexponentialfunction1 ] ) and ( [ tsallisexponentialfunction2 ] ) , for and , respectively .these gmc results have a consequence that eq .( [ renyitsallis ] ) is physically ( ) and mathematically ( ) incorrect .the above mentioned discrepancies in the last paragraph indicate a problem in the mathematical structure either of jf or of gmc , since they may yield different results computing the maximum pdf of an entropy definition . in the present manuscriptwe would like to explore which mathematical approach gives proper results and determine the origin of the problem in the respective approach . in section [ sec:2 ] , based on the concept of extensivity, we argue why the results of jf are in general incorrect . in section [ sec:3 ]we investigate the entropic structures on which jf is applicable . 
in the last sectionwe draw our conclusions .a general expression of a deformed entropic structure may be given in the following way where is a deformed logarithm and for specific values of its parameters , , it tends to the ordinary definition of the logarithmic function . in shake of simplicity , in what followswe shall consider for equal configuration probabilities , , without losing the generality of the results .then , we obtain we recall at this point that a very important property of an entropic form is its extensivity with respect to the variable under consideration . for instance , such a variable in thermodynamics may be the size of a statistical system .then , the entropy production of the system must be proportional to .taking into account eq .( [ genent1 ] ) , it becomes evident that the maximum configuration function has to present the inverse structure of the deformed logarithm , namely , .if the system under consideration comprises different types of elements , then takes the form under the constraints using the gmc approach , eqs .( [ maxconffun1 ] ) and ( [ constraints1 ] ) have been analytically derived in the specific case of tsallis entropy and can be shown to be valid for any entropic structure of the form ( [ genentstructure ] ) . replacing eq .( [ maxconffun1 ] ) in eq .( [ genent1 ] ) we obtain the desired extensivity with respect to : as can be seen in eq .( [ maxentfun1 ] ) , the function in thermodynamics may be related to a generalized boltzmann constant .returning to the results of gmc and jf for , and we make in the following subsections some crucial comments , which have not be considered explicitly in literature up to now .as discussed in the introduction the application of jf on tsallis entropy leads to -exponential and ( )-exponential distributions for ordinary and escort mean values , respectively .however , it is easy to verify considering , that pdfs of -exponential type makes extensive , since it is the inverse -function , while pdfs of ( )-exponential form do not . according to gmc approach, the -exponential distributions are obtained from the definition with ] .we notice that , the distribution maximizes the entropy with ordinary mean values .> from gmc we determine the range of -values for which vary between ] and is maximized for ordinary exponential distributions .these results are in agreement with the concept of extensivity . in fig .[ fig.renyi ] we present some plots of the rnyi entropy for equal probabilities and types of elements , considering the jf maximum pdfs given by and the gmc maximum pdfs given by .figrenyi.eps as can be seen , and argued above , we obtain extensivity only for . additionally , when the entropic parameter varies in the range , the values of are smaller than the ones for .thus , pdfs obtained from jf do clearly not maximize the rnyi entropy . in case of nonextensive gaussianentropy one obtains from jf with ordinary and escort mean value constraints ordinary and -exponential distributions .however , considering , we verify that both distributions do not preserve extensivity .the pdfs which maximize are -exponential distributions , as in eq .( [ tsallisexponentialfunction1 ] ) . taking into account the results obtained from the gmc , we see that pdfs of -exponential type are derived from the definition with . 
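The Tsallis-case extensivity argument can be checked numerically with the standard definitions ln_q(x) = (x^(1-q) - 1)/(1-q) and exp_q(x) = [1 + (1-q)x]_+^(1/(1-q)). For equal configuration probabilities the Tsallis entropy (with k = 1) equals ln_q(Omega), so choosing Omega(N) = exp_q(cN) gives an entropy linear in N, while Omega(N) = exp_{2-q}(cN) does not. The values q = 0.7 and c = 0.2 below are arbitrary illustrative choices (picked so that both deformed exponentials stay positive over the tested range of N).

import numpy as np

def ln_q(x, q):
    # Tsallis deformed logarithm; reduces to ln(x) as q -> 1
    if abs(q - 1.0) < 1e-12:
        return np.log(x)
    return (x**(1.0 - q) - 1.0) / (1.0 - q)

def exp_q(x, q):
    # Tsallis deformed exponential, the inverse function of ln_q
    if abs(q - 1.0) < 1e-12:
        return np.exp(x)
    return np.maximum(1.0 + (1.0 - q) * x, 0.0)**(1.0 / (1.0 - q))

def tsallis_equiprob(omega, q):
    # S_q = (1 - sum_i p_i^q)/(q - 1) with p_i = 1/omega, k = 1,
    # which equals ln_q(omega)
    return (1.0 - omega**(1.0 - q)) / (q - 1.0)

q, c = 0.7, 0.2              # arbitrary illustrative values
N = np.arange(1.0, 11.0)

# inverse relation and the q -> 1 limit
assert np.allclose(ln_q(exp_q(N, q), q), N)
assert np.allclose(ln_q(N, 1.0 + 1e-9), np.log(N), atol=1e-6)

# extensivity: S_q grows linearly in N only when Omega is the q-exponential
s_with_q_exp   = tsallis_equiprob(exp_q(c * N, q), q)          # equals c*N
s_with_2mq_exp = tsallis_equiprob(exp_q(c * N, 2.0 - q), q)    # not linear
print(np.round(np.diff(s_with_q_exp), 4))    # constant increments -> extensive
print(np.round(np.diff(s_with_2mq_exp), 4))  # growing increments -> not extensive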
in fig .[ fig.neg ] we present in analogy to fig .[ fig.renyi ] three plots of the equiprobabilized neg - entropy , based on an ordinary , - and -exponential distribution .again we see that the extensivity is preserved only for the -exponential pdf , while the values of based on the -exponential pdf are lower than the ones based on the -pdf , for the same -value .accordingly , the neg - entropy is optimized by pdfs of -exponential type .figneg.eps the above discussion implies , with regard to gmc approach , that only the definition gives proper results , and with regard to jf , that the correctness of the obtained maximum pdfs for an entropic structure is not guaranteed . in the next section we shall demonstrate the origin of the observed inconsistencies within jf .the computation of the maximum pdf for a given entropic definition and the determination of the parameter range through the respective gmc is not part of the scope of this article and will be explored separately .let us first consider the bgs - entropy in eq .( [ bgs ] ) .the inner structure of is of the form .then , the derivation with respect to the variable gives as can be seen in the right hand side of eq .( [ derivationbg ] ) , the reason of obtaining exponential pdfs in jf for the bgs - entropy , is the remaining logarithmic dependence on , . since the exponential function is the inverse logarithmic function , the extensivity of is preserved ( see eqs .( [ maxconffun1 ] ) and ( [ maxentfun1 ] ) for ) .this observation is of major importance for our further investigation .it means that jf may lead to correct results if the following two conditions are satisfied : i ) the entropy definition is of trace - form , as in eq .( [ genentstructure ] ) and ii ) the derivation of its inner structure presents the form with the constraints .we notice that the nonsingular functions and are _ independent _ from the variable .the second condition guaranties the extensivity of the entropy . > from the middle and right hand side of eq .( [ derivgenfun ] ) we can determine the structure of the deformed logarithm .substituting , and , we obtain this is a first order ordinary linear differential equation .its solution is given as follows where is the integration constant .in the limit eq .( [ solution1 ] ) should tend to . taking this constraint into account, we can easily verify that the constants and must be of the form resubstituting , we obtain in other words , jf leads to correct results only when the trace - form entropic structure is based on the deformed logarithm ( [ genlog ] ) . for the respective entropy is in eq .( [ tsallisentropy ] ) . here, it becomes evident why the results of the jf and gmc approaches coincide in the case of tsallis entropy , while for and they do not . 
in further, we shall give some more examples where jf does not gives proper results .an entropic structure emerging in the context of special relativity , is the one defined by kaniadakis as follows with the -generalized logarithm the inverse -generalized logarithmic function has the form }_+^{\frac{1}{\kappa } } \qquad{\left ( \exp_{\{\kappa\ra0\}}(x)=\exp(x ) \right)},\end{aligned}\ ] ] which maximizes , according to our previous discussion , the entropy .however , computing the inner structure of , we obtain }&=f_1(\kappa)\ln_{\kappa}(1/x)-f_2(\kappa;x),\\ f_1(\kappa)&=1-\kappa,\\ f_2(\kappa;x)&=x^{\kappa}.\end{aligned}\ ] ] as can be seen the function presents explicit dependence on and thus the variation of the functional in eq .( [ jf1 ] ) for can not reproduce pdfs of -exponential type .the application of the -logarithm ( [ tsallislogarithmicfunction ] ) on -multiplied variables gives an ordinary sum of -logarithms applied on each variable separately .an interesting mathematical question is whether one can define a deformed logarithm whose application on a -product would give a deformed sum of these logarithms .this point was explored in ref . by schwmmle and tsallis who introduced the two parametric generalized logarithm with its inverse function the respective entropic structure based on eq .( [ st - log ] ) is of the form the derivative of its inner structure leads to the following result }&=f_1(q , q';x)\ln^{{\mathrm{st}}}_{q , q'}(1/x)-f_2(q;x),\\ \label{st - deriv-2}f_{1}(q , q';x)&=1-(1-q')x^{q-1},\\ \label{st - deriv-3}f_2(q;x)&=x^{q-1}.\end{aligned}\ ] ] same as in the case of kaniadakis entropy ( [ kanent ] ) , from eqs . ( [ st - deriv-1 ] ) - ( [ st - deriv-3 ] ) it becomes obvious that the deformed exponential function ( [ st - exp ] ) can not be obtained from jf .another two - parametric trace - form entropy , , is the one proposed by borges and roditi in ref . , based on the deformed logarithm is generated by applying the -generalized derivative introduced by chakrabarti and jagannathan on the probability functional , with respect to , and then setting .( [ br - log ] ) is not analytically invertible .the ordinary derivative of the inner structure of yields the following result }&=f_1(q , q')\ln^{{\mathrm{br}}}_{q , q'}(1/x)-f_2(q;x),\\ \label{br - deriv-2}f_{1}(q , q')&=q+q',\\ \label{br - deriv-3}f_2(q , q';x)&=\frac{q'x^{q-1}-qx^{q'-1}}{q'-q}.\end{aligned}\ ] ] in contrast to schwmmle - tsallis case , the function does not depend on the variable . on the other hand, the existence of the -dependence in implies that the type of the probability distribution computed from jf is not the inverse -function and thus does not preserve extensivity of the respective entropy .we notice that for , tends to the one - parametric quantum group entropy introduced by abe .accordingly , the maximum probability distributions of abe s definition obtained from jf do not optimize quantum group entropy either . in ref . anteneodo and plastino have constructed a generalized entropy whose maximization according to jf yields pdfs of stretched exponential type . in terms of a deformed logarithmthis entropy is given as where considering eq .( [ ap - log ] ) we observe that the inverse function of the -logarithm is not a stretched exponential one .thus , a stretched exponential function does not make the entropy extensive .additionally , the -logarithm is not analytically invertible .thurner and hanel in ref . 
have estimated the difficulty of obtaining the proper maximum pdfs for an entropic form within jf created by the term in the middle hand side of eq .( [ derivgenfun ] ) . in order to solve this problem they added one more term in the bgs - entropy definition ,whose structure eliminates the aforementioned term .their entropy definition is given as }+c=-\sum_{i=1}^{\omega_{{\mathrm{max}}}^{gg}}\int_0^{p_i}dx\,\ln_{gg}(x)+c\end{aligned}\ ] ] where is a constant .indeed , in this case the application of jf on leads to the inverse function of , namely , .however , the authors did not take into account that the addition of the integral - term may break the extensivity of .let us see this situation closer .for equal probabilities tends to since is the inverse function of , the maximum configuration function has the form . assuming , that is integrable with and , and substituting , then we obtain extensivity is succeeded in when the following condition is fulfilled under specific assumptions the authors presented analytical expressions of the functions and given as follows }^2 \right ] } \qquad { \left ( \exp_{gg}(\gamma\ra0;x)=\exp(x ) \right)},\\ \label{th - log}\ln_{gg}(\gamma;x)&:=-{\left [ (2\gamma)^{-1}{\mathrm{erf}}{\left ( \gamma\sqrt{-\pi\ln(x ) } \right ) } \right]}^2 \qquad { \left ( \ln_{gg}(\gamma\ra0;x)=\ln(x ) \right)}.\end{aligned}\ ] ] one can easily verify that the generalized definitions ( [ th - exp ] ) and ( [ th - log ] ) do not satisfy the condition ( [ condition1 ] ) and thus depending on does not become extensive for pdfs of type .we would like to stress that the structure of entropy ( [ th - entropy ] ) has been first introduced in ref . based on a different context .we have shown that the application of jaynes formalism on an arbitrary generalized entropic structure may yield incorrect types of maximum entropy probability distributions .there are two necessary conditions that must be fulfilled , in order to obtain proper results from the aforementioned formalism .the first condition is related to the structure of a generalized entropy definition depending on a parameter set , which must be of trace - form , with .the second condition , given in eq .( [ derivgenfun ] ) , preserves the very important property of extensivity of the entropy . from eq .( [ derivgenfun ] ) we could determine the explicit structure of the -logarithm , presented in eq .( [ genlog ] ) .this is the only generalized logarithm under which the ordinary extremum constraints of the boltzmann - gibbs - shannon entropy do not change . for and deformed logarithm tends to the one defined by tsallis and coworkers and the respective deformed entropy is tsallis entropy .inverting the above statement , we see that the definition of the extremum constraints depends strongly on the structural choice of the generalized entropy .this implies that the generalization procedure of the boltzmann - gibbs - shannon statistics is not a pure mathematical concept but it carries physical information .the physics of a statistical ensemble is projected on the extremum constraints .thus , one should be very careful about the choice of the , since changes in the extremum constraints indicates changes in the physics of the system under consideration .it becomes evident at this point why the choice of tsallis entropy as a possible generalization of the boltzmann - gibbs entropy in statistical thermodynamics is so special . 
in the frame of tsallisgeneralized thermostatistics , based on the entropic structures and for the respective parameter ranges ] , the correct maximum entropy probability functions ( and , respectively ) , within jaynes formalism are associated to the _ ordinary mean value _ definition .furthermore , in a recent study abe showed that the escort probability distributions , which have been widely used in the literature , are not stable for specific probability perturbations .considering the above results we conclude that the introduction of the aforementioned escort probability distributions has no physical hypothesis within generalized thermostatistics .we would like to thank e. p. borges for his very fruitful comments .this work has been supported by tubitak ( turkish agency ) under the research project number 108t013 .jaynes , _ information theory and statistical mechanics _ , phys* 106 * ( 1957 ) 620 ; _ information theory and statistical mechanics .ii _ , phys . rev .* 108 * ( 1957 ) 171 .shannon c.e . , _ a mathematical theory of communication _ , bell syst .j. * 27 * ( 1948 ) 379 .frank t.d .& daffertshofer a. , _ exact time - dependent of the rnyi fokker planck equation and the fokker planck equations related to the entropies proposed by sharma and mittal _ , physica a * 285 * ( 2000 ) 351 .g. kaniadakis , m. lissia and a.m. scarfone , _ two - parameter deformations of logarithm , exponential , and entropy : a consistent framework for generalized statistical mechanics _ , phys .e * 71 * ( 2005 ) 046128 .
|
the extremization of an appropriate entropic functional may yield the probability distribution functions that maximize the respective entropic structure . this procedure is known in statistical mechanics and information theory as jaynes formalism and has up to now been a standard methodology for deriving the aforementioned distributions . however , the results of this formalism do not always coincide with the ones obtained following different approaches . in this study we analyse these inconsistencies in detail and demonstrate that jaynes formalism leads to correct results only for specific entropy definitions .
|
financial integration has increased dramatically over the past decade , especially among advanced economies .there has also been an increasing presence of foreign intermediaries in several banking systems ( including many emerging markets ) .the interconnection in the global financial system means that if one nation defaults on its sovereign debt or enters into recession thus putting some external private debt at risk , the banking systems of creditor nations face losses . for example , in october 2011 italian borrowers owed french banks $ 366 billion ( net ) . should italy be unable to finance itself , the french banking system and economy could come under significant pressure , which in turn would affect france s creditors and so on . this is referred to as financial contagion . as a result , international risk sharing and efficiency among the economies of countries have increased . from late 2009 ,fears of a sovereign debt crisis developed among fiscally conservative investors concerning some european states , with the situation becoming particularly tense in early 2010 .this included eurozone members greece , ireland and portugal and also some european union ( eu ) countries outside the area .iceland , the country which experienced the largest crisis in 2008 when its entire international banking system collapsed , has emerged less affected by the sovereign debt crisis as the government was unable to bail the banks out . in the eu , especially in countries where sovereign debt has increased sharply due to bank bailouts , a crisis of confidence has emerged with the widening of bond yield spreads and risk insurance on credit default swaps between these countries and other eu members . on the other hand ,complex networks have been able to successfully describe the topological properties and characteristics of many real - life systems . moreover , there are studies which focus on understanding the complex structure of stock markets and financially growing systems ; these topology , viscoelastic behavior anomalous diffusion , phase plots , phase transition , wavelet techniques , the le chatelier principle , non - equilibrium dynamics , networks , graph theory , quantum field theory and path integrals , uncertainty , and spin models .the properties of economic crisis have been previously studied by using correlation networks . however , to the best of our knowledge , use of the countries , network to analyze an economic crisis has only been examined in a few studies , such as .dias analyzed the topology of correlation networks among countries based on daily yield rates on 10-year government bonds for nineteen eu using the concept of an mst and ht for the 2007 - 2010 period and three sub - period , namely 2007 - 2008 , 2008 - 2009 and full year of 2010 .he performed a technique to associate the value of statistical reliability to the links of the msts and hts by using bootstrap technique .moreover , lee _ et al . _ investigated the crisis spreading dynamics by using the gdp and the international trade data of the countries in the 2002 - 2006 period . 
on the other hand , the hierarchical structure of the european countries based on debt as a percentage of gdp during the 2000 - 2011 period has not been investigated . therefore , the purpose of this work is to investigate hierarchical structures of the european countries by using debt as a percentage of gross domestic product ( gdp ) of the countries as they change over a certain period of time . we study three time - window data sets , the periods of 2000 - 2004 , 2005 - 2011 and 2000 - 2011 , and observe the temporal evolution of the european debt crisis by using the concept of the minimal spanning tree ( mst ) and hierarchical tree ( ht ) . the reason for studying sub - periods is the enlargement of the european union in 2004 . the geometrical and taxonomic information about the correlation between the elements of the set can be obtained from the mst and ht , respectively . these methods were successfully applied to analyze currency , equity and commodity markets . the notion of distance introduced in these applications is based on the pearson correlation coefficient as a function to measure the similarity between two time series . we also used the bootstrap technique to quantify the statistical reliability of hierarchical trees . finally , we applied average linkage cluster analysis ( alca ) to clearly observe the clusters of countries . the mst , ht and alca give a useful guide to define the underlying economic or regional causal connections for individual countries . mantegna , and mantegna and stanley introduced the mst and ht that have been previously used to analyze currency markets , in particular to find the clustered structure of currencies and the key currency in each cluster and to resolve contagion in a currency crisis . these trees are also applied to investigate the clustering behavior of individual stocks within a single country . the mst and the ht have also been used to study world equity markets , european equity markets and commodity markets . finally , the dynamic mst analysis has also been developed and applied to investigate the time - varying behavior of stocks . in correlation based hierarchical investigations , the bootstrap approach has been used to quantify the statistical reliability of hierarchical trees and correlation based networks . plerou _ et al . _ applied random matrix theory methods to obtain quantitative estimation of the statistical uncertainty of the correlation matrix . moreover , the bootstrap approach was used to quantify the statistical reliability of hierarchical trees and correlation based networks by tumminello et al . in addition , tumminello _ et al .
_ investigated the statistical assessment of links in bipartite complex systems .the topology of correlation networks among 34 major currencies using the concept of an mst and ht , and bootstrap replicas of data studied by keskin _moreover , kantar __ applied the bootstrap technique to investigate the topological properties of turkey s foreign trade as well as the major international and turkish companies .finally , correlation based clustering has been used to infer the hierarchical structure of a portfolio of stocks from its correlation coefficient matrix .useful examples of correlation based networks apart from the minimal spanning tree are the planar maximally filtered graph and the average linkage minimal spanning tree .the outline of the remaining part of this paper is organized as follows .section ii introduces the methodology and the sampling procedures while section iii shows the data and section iv presents empirical results .finally , section v provides some final considerations .we describe the basics of construction of the mst , ht and alca .first , we generate a network of countries by using the mst and the ht that is obtained starting from the mst .second , we examined the statistical reliability and stability of our results by using the bootstrap technique . finally , we investigated cluster structures within the average linkage cluster analysis . since the construction of a minimal spanning tree ( mst ) and hierarchical tree ( ht ) , which is also called single linkage cluster analysis ( slca ) has been described extensively in mantegna and stanley as well as in our previous papers , therefore we shall only give a brief summary here .the correlation function between a pair of countries based on the debts of european countries in order to quantify synchronization between the countries is defined as where i and j are the numerical labels to the debts of countries and the notation means an average over time . is the vector of the time series of log - returns , is the log - return and is the quarterly debt ratio of country i at quarter t. all cross - correlations range from -1 to 1 , where -1 and + 1 mean that two countries i and j are completely anti - correlated and correlated , respectively . in the case of = 0 the countries i and j are uncorrelated .the mst is based on the idea that the correlation coefficient between a pair of countries can be transformed to a distance between them by using an appropriate function as a metric .an appropriate function for this transformation is where is a distance for a pair of the rate i and the rate _j_. now , one can construct an mst for a pair of countries using the n x n matrix of . on the other hand , to construct an ht, we used the ultrametric distance , which introduced by mantegna , or the maximal between two successive countries encountered when moving from the first country _i _ to the last country _j _ over the shortest part of the mst connecting the two countries .the distance fulfills the condition , which is a stronger condition than the usual triangular inequality . the distance is called the subdominant ultrametric distance .then , one can construct an ht by using this inequality .we also use average linkage cluster analysis ( alca ) in order to observe the different clusters of countries according to their geographical location and economic growth more clearly . since the procedures to obtain alca have been presented by tumminello et al . and kantar et al . 
The bootstrap technique, which was invented by Efron, has been widely used in phylogenetic analysis since the paper by Felsenstein, as a method for evaluating phylogenetic hierarchical trees. The technique was used to quantify the statistical reliability of hierarchical trees and correlation-based networks by Tumminello et al. Kantar et al. also applied the bootstrap technique to associate a value of statistical reliability with the links in the hierarchical structures of Turkey's foreign trade and of major international and Turkish companies. In order to quantify the statistical reliability of the links of the MST and HT, the bootstrap technique is applied to the data. The number of replicas used in each period is 1600; for comparison, in phylogenetic analysis r = 1000 is usually considered a sufficient number of replicas. The numbers appearing in Fig. 1 quantify this reliability (bootstrap value): they represent the fraction of replicas preserving each link in the MST. Recently, this technique has been used to measure the reliability of the links of MSTs and HTs. We should also mention that the technique has been well explained in the references.

For the present study we chose 28 countries (the EU27 and Norway) in Europe and used the quarterly debt ratios of these countries for the years 2000-2011. We used data covering the periods 01.01.2000-12.30.2011, 01.01.2000-12.30.2004 and 01.01.2005-12.30.2011; the countries and their corresponding symbols are listed in Table 1. The quarterly debt ratios were downloaded from the Eurostat database, available online. In the next section we construct the MSTs, including the bootstrap values, and the HTs from this data set and analyse their clustering structures.
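Before turning to the results, the pipeline described above (log-returns of the quarterly debt ratios, Pearson correlations, the metric $d_{ij}=\sqrt{2(1-c_{ij})}$, the MST, the single- and average-linkage trees, and the bootstrap link frequencies) can be summarised in a short Python sketch. It is illustrative only: the placeholder array of debt ratios, the number of replicas and the SciPy routines used here are choices made for the example, not the code behind the figures.

....
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import squareform
from scipy.cluster.hierarchy import linkage

# 'debt' is a placeholder for the quarterly debt-to-GDP ratios,
# shape (n_quarters, n_countries); random numbers stand in for the Eurostat data.
rng = np.random.default_rng(0)
debt = rng.uniform(20.0, 120.0, size=(48, 28))

R = np.diff(np.log(debt), axis=0)                 # log-returns R_i(t) = ln Y_i(t) - ln Y_i(t-1)
C = np.corrcoef(R, rowvar=False)                  # Pearson correlations c_ij
D = np.sqrt(2.0 * np.clip(1.0 - C, 0.0, None))    # metric distances d_ij = sqrt(2(1 - c_ij))
np.fill_diagonal(D, 0.0)

mst = minimum_spanning_tree(D).toarray()          # Kruskal/Prim-type MST on the complete graph
mst_edges = {tuple(sorted(e)) for e in np.argwhere(mst > 0)}

# Hierarchical trees: single linkage (the HT/SLCA) and average linkage (the ALCA).
ht = linkage(squareform(D, checks=False), method='single')
alca = linkage(squareform(D, checks=False), method='average')

# Bootstrap: resample the quarters with replacement and count how often
# each link of the original MST reappears in the replica MSTs.
n_replicas = 1000                                 # placeholder; the study uses 1600 replicas
counts = {e: 0 for e in mst_edges}
for _ in range(n_replicas):
    idx = rng.integers(0, R.shape[0], size=R.shape[0])
    Cb = np.corrcoef(R[idx], rowvar=False)
    Db = np.sqrt(2.0 * np.clip(1.0 - Cb, 0.0, None))
    mb = minimum_spanning_tree(Db).toarray()
    edges_b = {tuple(sorted(e)) for e in np.argwhere(mb > 0)}
    for e in counts:
        counts[e] += e in edges_b

bootstrap_values = {e: c / n_replicas for e, c in counts.items()}   # link reliabilities
....

The arrays ht and alca could then be rendered with scipy.cluster.hierarchy.dendrogram to obtain trees analogous to Figs. 2 and 3, while bootstrap_values would correspond to the link labels reported in Fig. 1.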
In this section we present the MSTs, including the bootstrap values, and the HTs of the 28 countries based on the debts of the European countries over the period 2000-2011 and over the two sub-periods 2000-2004 and 2005-2011, together with the cluster structures obtained with the clustering linkage procedure. The MSTs were constructed with Kruskal's algorithm from the distance-metric matrix for the country debts. The links that persist from one node (country) to another correspond to the relationships among the countries in Europe. The bootstrap technique was applied to associate a value of statistical reliability with the links of the MST; if these values are close to one, the statistical reliability of the link is very high. The cluster structure of the hierarchical trees was resolved more clearly with the ALCA.

Figures 1a-1c show the MSTs, obtained by applying the method of Mantegna and of Mantegna and Stanley to the distance-metric matrix of the country debts, for the periods 2000-2011, 2000-2004 and 2005-2011, respectively. Fig. 1a is obtained by using the debt as a percentage of gross domestic product (GDP) of the European countries in the 2000-2011 period. In Fig. 1a we observe two different clusters of countries according to their level of debt and their economic ties. The first cluster, whose members exceed the 60% Maastricht debt-to-GDP threshold, consists of Germany, the United Kingdom, France, Spain, Italy, Greece, Portugal, Austria, Ireland, Belgium, Malta and Cyprus. In this cluster there is a strong relationship between Austria-Belgium, Spain-France and Belgium-Italy, as can be seen from the bootstrap values of the corresponding links, which are equal to 0.90, 0.66 and 0.61 on a scale from zero to one, respectively. In contrast, the bootstrap value of the link between Spain and Germany is very low. The countries in the second cluster have debt ratios below 60% of GDP. The bootstrap value of the link between Latvia and Romania is equal to 0.98; hence Latvia and Romania are strongly connected with each other. Similar results were also observed in ref. . The MST shown in Fig. 1b was obtained for the 2000-2004 sub-period. At the end of this sub-period some countries, such as Germany, France, Italy, Greece, Austria, Belgium, Malta and Cyprus, had debt ratios above 60% of GDP. In this MST, one cluster is composed of Germany, France, Italy, Austria, Belgium and Cyprus, all with debt ratios above 60% of GDP, together with Poland and Ireland, which are below the threshold; hence it is a heterogeneous cluster. The bootstrap value of the link between Austria and Belgium is very high, namely 1.00. The clustering behavior in Fig. 1c is similar to that in Fig. 1a. Moreover, in Fig. 1c there is a strong relationship between Spain-France, Austria-Belgium and Belgium-Ireland, for which the bootstrap values of the links are equal to 0.75, 0.70 and 0.60, respectively. Finally, we notice that although Ireland and Poland had debt ratios of only 29.3% and 45.7% of GDP at the end of 2004, respectively, they appear in the heterogeneous cluster of Germany, France, Italy, Austria, Belgium and Cyprus seen in Fig. 1b. This result indicates that these two countries were likely to be affected by the crisis: Ireland and Poland had debt ratios of 102.3% and 56% of GDP, respectively, at the end of 2011.

The HTs of the subdominant ultrametric space associated with the MSTs are shown in Fig. 2. Two countries (lines) are linked when a horizontal line is drawn between two vertical lines, and the height of the horizontal line indicates the ultrametric distance at which the two countries are joined. In Fig. 2a one can observe two clusters. The first cluster consists of the main countries affected by the crisis and is separated into two sub-groups. The first sub-group contains Italy, Belgium, Austria and Ireland; the distance between Italy and Belgium is the smallest of the sample, indicating the strong relationship between these two countries, and the bootstrap value of the corresponding link is 0.61, as seen in Fig. 1a. The second sub-group consists of the United Kingdom and Spain, which are weakly connected to each other, with a bootstrap value of 0.39, as seen in Fig. 1a. The second cluster consists of Luxembourg, Latvia, Denmark and the Netherlands, countries that were less affected by the crisis in the European area except for the Netherlands; hence it is a heterogeneous cluster. The HT presented in Fig. 2b was obtained for the 2000-2004 sub-period.
in fig .2b , we obtained two clusters without a sub - group .the first cluster contains italy , belgium and austria .the distance between austria and belgium is the smallest .the second cluster consists of the united kingdom , luxembourg and finland , which has less than 60 debt as a percentage of gdp of the debt ratio .moreover , the behavior of fig . 2c for the 2005 - 2011 sub - period is resembling with fig .2a , except the second cluster is a heterogeneous cluster because of netherland and united kingdom are observed in this cluster .in addition to these two countries , luxembourg , romania , latvia , lithuania and denmark are seen in this cluster .we also obtained more clearly cluster structure by using average linkage cluster analysis ( alca ) for all of the above periods .the obtained results were presented in figs .3a-3c . in all periods, we obtained two clusters , namely more than 60 maastricht and less than 60 maastricht . in these figures, we also observe that the austria , belgium and italy are strongly correlated with each other and formed a group in all periods .moreover , ireland joined to this group in the 2005 - 2011 period .similarly , we found that the portugal , sweden and norway also constituted a different group in all periods .we presented the hierarchical structures of countries based on the debts of european countries by using the concept of the minimal spanning tree ( mst ) , including the bootstrap values , and the hierarchical tree ( ht ) for the 2000 - 2011 period and two sub - periods , namely 2000 - 2004 and 2005 - 2011 .we obtained the clustered structures of the trees and identified different clusters of countries according to their level of debts and economic ties . from the topological structure of these trees, we found that by the debt crisis , the less and most affected eurozone s economies are formed as a cluster with each other in the trees .moreover , similar results were obtained in study using the concept of an mst and ht for nineteen eu countries by dias .we performed the bootstrap technique to associate a value of statistical reliability of the links of the mst and ht to obtain information about the statistical reliability of each link of the trees . from the results of the bootstrap technique, we can see that , in general , the bootstrap values in the mst are lowly consistent with each other .furthermore , we found more clearly the cluster structures by using average linkage cluster analysis ( alca ) for the 2000 - 2011 period and two sub - periods .finally , we hope that the present paper will help to give a better understanding of the overall structure of european debts , and also provide a valuable platform for theoretical modeling and further analysis .we are very grateful to yusuf kocakaplan for useful discussions .0 feaster s.w . , schwartz n.d . andkuntz t. `` nyt - it s all connected - a spectators guide to the euro crisis '' .new york times ( nytimes.com ) . retrieved 2012 - 05 - 14 .xaquin g.v . and mclean a. , tse archie ( 2011 - 10 - 22 ) .`` nyt - it s all connected - an overview of the euro crisis - october 2011 '' .new york times ( europe ; germany ; greece ; france ; ireland ; spain ; portugal ; italy ; great britain ; united states ; japan : nytimes.com ) . retrieved 2012 - 05 - 14 .haidar j. i. , `` sovereign credit risk in the eurozone , '' world economics , world economics , 13 , 123 - 136 , march .matlock g. ( 16 february 2010 ) .`` peripheral eurozone government bond spreads widen '' .reuters . retrieved 28 april ( 2010 ) . 
ray c. , ruffini g. , pallars j.m . ,fuentemilla l. and grau c. , epl * 79 * 3 ( 2007 ) 38004 .garas a. and argyrakis p. , epl * 84 * ( 2008 ) 68005 .wang w .- x ., huang l. and lai y .- c . , epl * 87 * ( 2009 ) 18006 .kantar e. , deviren b. and keskin m. , physica a * 390 * ( 2011 ) 3454 .bonanno g. , caldarelli g. , lillo f. and mantegna r.n .e * 68 * ( 2003 ) 046130 .gndz g. and gndz y. , physica a * 389 * ( 2010 ) 5776 .meerschaert m.m . and scalas e. , physica a * 370 * ( 2006 ) 114 .chiarella c. , he x.z . and hommes c. , physica a * 370 * ( 2006 ) 12 .serrano m.a ., j. stat .( 2007 ) l01002 .caetano m.a.l . andyoneyama t. , physica a * 383 * ( 2007 ) 519 .lady g.m . and quirk j.p . ,physica a * 381 * ( 2007 ) 351 .zlale . and zcan k.m . , physica a * 381 * ( 2007 ) 329 .albert r. and barabasi a.l .modern phys .* 74 * ( 2002 ) 47 .aste t. and matteo t.d . , physica a * 370 * ( 2006 ) 156 .baaquie b.e . , physica a * 370 * ( 2006 ) 98 .decamps m. , schepper a.d . and goovaerts m. , physica a * 363 * ( 2006 ) 404 .schinckus c. , physica a * 388 * ( 2009 ) 4415 .yang j.s . , chae s. , jung w.s . and moon h.t ., physica a * 363 * ( 2006 ) 377 .naylor m.j . , rose l.c . and moyle b.j . ,physica a * 382 * ( 2007 ) 199 .schiavo s. , reyes j. , fagiolo g. , quant .finance * 10 * ( 2010 ) 389 .yu l. , wang s.y . , lai k.k . andwen f.h . , neurocomputing , * 73 * ( 2010 ) 716 .lisewski a.m. and lichtarge o. , physica a * 389 * ( 2010 ) 3250 .aste t. , shaw w. and matteo t.d ., new journal of physics * 12 * ( 2010 ) 085009 .dias j. , physica a * 391 * ( 2012 ) 2046 .lee k.m . , yang j.s . , kim g. , lee j. , goh k.i . andkim i.m . , plos one , * 6 * ( 2011 ) e18443 .mantegna r. n. , eur .j. b * 11 * ( 1999 ) 193 .mantegna r. n. and stanley h. e. , _ an introduction to econophysics - correlation and complexity in finance _ cambridge university press , cambridge , ( 2000 ) .feng x. and wang x. , int . j. mod .c * 21 * ( 2010 ) 471 .keskin m. , deviren b. and kocakaplan y. , physica a * 390 * ( 2011 ) 719 .mizuno t. , takayasu h. and takayasu m. , physica a * 364 * ( 2006 ) 336 .brida j. g. , gmez d. m. and risso w. a. , expert syst .* 36 * ( 2009 ) 7721 .bonanno g. , vandewalle n. and mantegna r. n. , phys .e * 62 * ( 2000 ) r7615 .onnela j. p. , chakraborti a. , kaski k. and kertesz j , phys .e * 68 * ( 2003 ) 056110 .onnela j. p. , chakraborti a. , kaski k. and kertesz j. ,physica a * 324 * ( 2003 ) 247 .jung w. s. , chae s. , yang j. s. and moon h. t. , physica a * 361 * ( 2006 ) 263 .jung w. s. , kwon o. , wang f. , kaizoji t. , moon h. t. and stanley h.e . , physica a * 387*(2008)537 .feng x. and wang x. , int .c 21 ( 2007 ) 471 .bonanno g. , caldarelli g. , lillo f. , micciche s. , vandewalle n. and mantegna r. n. , eur .j. b * 38 * ( 2004 ) 363 .brida j. g. and risso w. a. , comput .* 35 * ( 2010 ) 85 .brida j. g. and risso w. a. , expert syst .* 37 * ( 2010 ) 3846 .coelho r. , gilmore c. g. , lucey b. m. , richmond p. and hutzler s. , physica a * 376 * ( 2007 ) 455 .gilmore c. g. , lucey b. m. and boscia m. , physica a * 387 * ( 2008 ) 6319 .sieczka p. and j. a. hoyst ,physica a * 388 * ( 2009 ) 1621 .tabak b. m. , serra t. r. and cajueiro d. o. , eur .j. b * 74 * ( 2010 ) 243 .onnela j. p. , chakraborti a. , kaski k. and kertesz j. , eur .j. b * 30 * ( 2002 ) 285 .micciche s. , bonanno g. , lillo f. and mantegna r. n. , physica a * 324 * ( 2003 ) 66 .laloux l. , cizeau p. , bouchaud j. p. and potters m. , phys .* 83 * ( 1999 ) 1467 .plerou v. , gopikrishnan p. , rosenow b. , amaral l. 
a. n. and stanley h. e. , phys .* 83 * ( 1999 ) 1471 .tumminello m. , lillo f. and mantegna r. n. , epl * 78 * ( 2007 ) 30006 .tumminello m. , lillo f. and mantegna r. n. , j. econ .behav . organ .* 75 * ( 2010 ) 40 .tumminello m. , coronnello c. , lillo f. , miccich s. and mantegna r.n .j. bifurcat .chaos * 17 * ( 2007 ) 2319 .tumminello m. , micciche s. , lillo f. , piilo j. and mantegna r. n. , plos one * 6 * ( 2011 ) e17994 .kantar e. , deviren b. and keskin m. , eur .j. b * 84 * ( 2011 ) 339 .bonanno g. , lillo f. and mantegna r. n. , quant .finance * 1 * ( 2001 ) 96 .tumminello m. , aste t. , matteo t.d . andmantegna r. n. , proceedings of the national academy of sciences of the united states of america * 102 * ( 2005 ) 10421 .west d. b. , _ introduction to graph theory _ , prentice - hall , englewood cliffs , nj , ( 1996 ) .kruskal j. b. , proc .* 7 * ( 1956 ) 48 .cormen t. h. , leiserson c. e. and rivest r. l. , introduction to algorithms .mit press , cambiridge , ma ( 1990 ) .prim r. c. , bell system techn . j. * 36 * ( 1957 ) 1389 .everitt b. s. , cluster analysis , _ heinemann educational books _ , london , ( 1974 ) .rammal r. , toulouse g. and virasoro m.a .* 58 * ( 1986 ) 765 .benzecri j. p. , lanalyse des donnes 1 , _la taxinomie _ ,dunod , paris , ( 1984 ) .situngkir , h. nlin.ps/0405005 .efron b. , ann .* 7 * ( 1979 ) 1 .felsenstein j. , evolution * 39 * ( 1985 ) 783 .efron b. , halloran e. and holmes s. , proc .usa * 93 * ( 1996 ) 13429 . *( color online ) minimal spanning tree associated to quarterly data of the 28 countries in europe during the 2000 - 2011 period . * ( color online ) same as fig .1(a ) , but for debts as a percentage of gdp during the 2000 - 2004 period . *( color online ) same as fig .1(a ) , but for debts as a percentage of gdp during the 2005 - 2011 period . *( color online ) the ht associated with quarterly data of the 28 countries in europe during the 2000 - 2011 period . * ( color online ) same as fig .2(a ) , but for debts as a percentage of gdp during the 2000 - 2004 period . *( color online ) same as fig .2(a ) , but for debts as a percentage of gdp during the 2005 - 2011 period . *( color online ) average linkage cluster analysis .hierarchical tree associated to a system of the 28 countries in europe during the 2000 - 2011 period . *( color online ) same as fig .3(a ) , but for debts as a percentage of gdp during the 2000 - 2004 period . *( color online ) same as fig .3(a ) , but for debts as a percentage of gdp during the 2005 - 2011 period .
|
We investigate the hierarchical structure of the European countries by using the debt of each country as a percentage of its gross domestic product (GDP) as it changes over a certain period of time. We obtain the topological properties among the countries, based on debt as a percentage of GDP over the period 2000-2011, by using hierarchical structure methods (the minimal spanning tree, MST, and the hierarchical tree, HT). This period is also divided into two sub-periods related to the 2004 enlargement of the European Union, namely 2000-2004 and 2005-2011, in order to test various time windows and observe the temporal evolution. The bootstrap technique is applied to assign a value of statistical reliability to the links of the MSTs and HTs. A clustering linkage procedure is also used to observe the cluster structure more clearly. From the structural topologies of these trees we identify different clusters of countries according to their level of debt and their economic ties. Our results show that, through the debt crisis, the least and the most affected eurozone economies each form clusters in the MSTs and hierarchical trees.
|
the possibility of creating physical systems with identical properties is crucial for any physical theory that is verifiable by experiments .comparison of preparators a procedure of determining whether they prepare the same objects or not is one of the basic experiments we would like to do when testing a theory , because it allows us to operationally define equivalence of such devices for their further use . in the framework of classical physics , we can in principle measure and determine the state of the system perfectly without disturbing it .thus , to compare states of two systems it suffices to measure each system separately .however , in quantum theory , due to its statistical nature , we can not make deterministic conclusions / predictions even for the simplest experimental situations .therefore , the comparison of quantum states is different compared to the classical situation .imagine we are given two independently prepared quantum systems of the same physical nature ( e.g. , two photons or two electrons ) .we would like to determine unambiguously whether the ( internal ) states of these two systems are the same or not .if we have just a single copy of each of the states and we possess no further information about the preparation then a measurement performed on each system separately can not determine the states precisely enough to allow an error - free comparison . in this case , also all other strategies would fail , because our knowledge about the states is insufficient , e.g. , if each of the systems can be in an arbitrary mixed state , then it is impossible to unambiguously test whether the states are equal or not . however , there are often situations in which we have some additional _ a priori _ information on the states we want to compare .for example , we might know that each system has been prepared in a pure state .this kind of scenario has been considered in ref . for two qudits and in ref . for the comparison of a larger number of systems .thereafter , the comparison of coherent states and its application to quantum cryptography has been addressed in ref .et al . _ analyzed the comparison with more copies of the two systems and proposed an optimal comparator for coherent states , which , on this subset , outperforms the optimal universal comparator working for all pure states . in the present paperwe analyze the unambiguous quantum state comparison ( usc ) of two unknown squeezed vacuum states , that is , we would like to unambiguously determine whether two unknown squeezed - vacuum states are the same or not .the conclusion has to be drawn from a procedure using only a single copy of the states . at the end of the procedure , using only the outcome of the measurement we have to decide whether the two states given to us have been the same , different , or that we do nt know which of the former conclusions is true .we strive to find an optimal procedure , i.e. , one maximizing the probability of correctly judging the equivalence of the compared squeezed states .our proposal relies on the interference of two squeezed states at a beam splitter and on the subsequent measurement of the difference between the number of detected photons at the two output ports . in ref . , unambiguous comparison of coherent states has been considered in detail and a short remark is devoted to the comparison of squeezed vacua . in the setup of ref . 
, after interference at a beam splitter , one needs to measure the parity of the detected number of photons : a detection of an odd number of photons indicates the difference between the inputs . as a consequence , the quantum efficiency of the detectors is a critical parameter and plays a crucial role in the robustness of the scheme . as we will show , this problem is less relevant in our case , since our setup requires the measurement of the difference of the detected number of photons .our configuration also allows us to prove optimality of our setup .the plan of the paper is as follows . in section [s : setup ] we introduce our scheme to compare two squeezed vacuum states , whereas the proof of the optimality of the setup is given in section [ s : opt ] .the performances of our scheme , also in the presence of imperfections at the detection stage , are investigated in section [ s : perform ] , together with its reliability in the presence of noise .section [ s : concl ] closes the paper with concluding remarks .our goal is the comparison of two squeezed vacuum states and , where ] are the matrix elements of the squeezing operator , whose analytical expressions are given e.g. , in ref. .if and , then and , as one can see from eq .( [ twb : fock ] ) .thus , the probability , for , can be non - zero only if , that is only if the input states are different . in the ideal case ( unit quantum efficiency of the detectors )the measurement apparatus we want to use gives two possible outcomes : zero or non - zero photon - counting difference .thus , the povm performed is defined by the effects and , corresponding to the `` zero '' and `` non - zero '' photon - counting events , respectively , given by : the occurrence of the `` '' event implies that the incident squeezed - vacuum states could not have been identical [ see eqs .( [ eq : psioutreal ] ) and ( [ final : dpc ] ) ] .the occurrence of the `` '' event , on the other hand , implies nothing , as each possible pair of squeezed - vacuum states leads to a non - zero overlap with any of the states .thus , event `` '' unambiguously indicates the difference of the compared squeezed states , whereas `` '' is an inconclusive outcome .in this section we prove optimality of the proposed setup for two situations : ( i ) the restricted scenario in which the squeezing phases of the compared states are unknown but equal for both states ( ii ) for the general situation , when no assumption on the squeezing phases is taken .we first tackle the former scenario considered in most of the paper and at the end of the proof we comment on the differences in proving ( ii ) .let us denote by the set of squeezed states from which we randomly chose the states to be compared .we also define the sets , , composed by pairs of identical and different three outcomes ( `` same '' , `` different '' and `` do nt know '' ) described by the povm and we optimize the overall probability : where and are the a priori probability of being different or the same , , are probability densities of choosing from , , respectively .we also impose the no - error constraints : [ noerrorcond1 ] which guarantee the unambiguity of the results . from the mathematical point of view , the constraints ( [ noerrorcond1 ] ) restrict the support of the operators and .the fact that the possible states in form a continuous subset of pure states , is responsible for the impossibility to unambiguously confirm that the compared states are identical. 
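For orientation, it may help to recall the standard Fock expansion behind this argument; the expression below is quoted in the usual textbook convention and may differ from the notation used above by inessential phase factors. When the two inputs carry the same squeezing, $r_1=r_2=r$ with equal phases $\theta$, the state leaving the interferometer is a twin-beam (two-mode squeezed vacuum) state,

$$ |\mathrm{TWB}(r)\rangle=\sqrt{1-\tanh^{2}r}\,\sum_{n=0}^{\infty}\bigl(e^{i\theta}\tanh r\bigr)^{n}\,|n,n\rangle , $$

so both output modes always carry exactly the same number of photons. For identical inputs the photon-number difference registered by the two detectors (call it D) is therefore strictly zero, and any event with D different from zero can only originate from unequal squeezing parameters, which is what makes the "different" outcome unambiguous. Conversely, no outcome can certify that the two states are identical: squeezing parameters that are arbitrarily close but unequal produce statistics arbitrarily close to those of identical inputs, so only the "different" conclusion can be drawn without error.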
the proof of this statement can be found in appendix [ app : nosame ] and essentially states that , due to the no - error conditions ( [ noerrorcond1 ] ) , we must have .thus , the measurement actually has only two outcomes , the effective povm is given by , and it is clear that increasing the eigenvalues of without changing its support increases the figure of merit and leaves the no - error conditions satisfied .this is true independently of the distribution and thus the optimal measurement is formed by being a projector onto the biggest support allowed by the no - error condition ( [ noerrorcond1 ] ) and being a projector onto the orthocomplement .moreover , the quantity that completely characterizes the behavior of the squeezed - states comparator is , i.e. , the conditional probability of obtaining the outcome if different squeezed states ( ) are sent to the comparator .it is worth to note that in what follows one does not need to know the actual value of .summarizing , in order to find an optimal comparator of squeezed states from we need to refine the definition of the largest allowed support of hidden in the no - error condition ( [ noerrorcond1:b ] ) . to do this we equivalently rewrite eq .( [ noerrorcond1:b ] ) as : which , by denoting and choosing to be the unitary transformation performed by the proposed setup from fig .[ f : schemes ] ( a ) , becomes : the optimality of the proposed setup is proved by showing that the biggest support allowed by the previous condition coincides with the support of the projective measurement we use , see eq .( [ povm : comp ] ) . from the expression of , eq .( [ twb : fock ] ) , it is clear that for any operator with the support orthogonal to the span of , with , the unambiguous no - error condition ( [ noerrorcond2 ] ) holds .hence , if any such operator is a part of a povm , then the emergence of the outcome related to it unambiguously indicates the difference of the squeezing parameters .we now proceed to show that the support of such can not be further enlarged .now let us assume that a vector that a vector with at least one non - zero coefficient is in the support of . as a consequence of the required no - error condition ( [ noerrorcond2 ] ) the overlap has to be vanishing for all values of .( [ overlap1 ] ) is vanishing if and only if vanishes for all .the sum on the right - hand side of eq .( [ overlapmultiplied ] ) can be seen as a polynomial in and should vanish for all possible values of , i.e. for all .polynomials of this type on a finite interval form a vector space with linearly independent basis vectors , with .thus the sum in eq .( [ overlapmultiplied ] ) vanishes only if , .this is in contradiction with our assumption about the vector and therefore the largest support an operator , unambiguously indicating the difference of the squeezing parameters , can have is the orthocomplement of the span of vectors , with .this concludes the proof . 
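As a complement to the formal argument, the interference mechanism itself can be checked numerically in a truncated Fock space. The short Python sketch below is illustrative only: it assumes that the phase shifter of the scheme is equivalent to flipping the sign of the second squeezing parameter, it uses a 50:50 beam splitter with a real mixing angle, and the truncation dimension and squeezing values are arbitrary choices.

....
import numpy as np
from scipy.linalg import expm

N = 25                                     # Fock-space truncation (assumed large enough)
a = np.diag(np.sqrt(np.arange(1, N)), 1)   # truncated annihilation operator
I = np.eye(N)

def squeeze(r):
    # single-mode squeezing S(r) = exp[ r (a^2 - a^dag^2) / 2 ], real r
    return expm(0.5 * r * (a @ a - a.T @ a.T))

A, B = np.kron(a, I), np.kron(I, a)        # mode operators on the two-mode space
# balanced (50:50) beam splitter with a real mixing angle of pi/4
U_bs = expm((np.pi / 4.0) * (A @ B.T - A.T @ B))

vac = np.zeros(N); vac[0] = 1.0

def p_reveal(r1, r2):
    """P(photon-number difference != 0) at the output.

    The phase shifter of the scheme is modelled, as an assumption about
    conventions, by squeezing the second mode with -r2 instead of +r2.
    """
    psi = U_bs @ np.kron(squeeze(r1) @ vac, squeeze(-r2) @ vac)
    joint = np.abs(psi.reshape(N, N)) ** 2      # joint photon-number distribution
    return 1.0 - np.trace(joint)                # 1 - P(n1 == n2)

print(p_reveal(0.5, 0.5))   # ~0: equal inputs never trigger the 'different' outcome
print(p_reveal(0.5, 0.8))   # > 0: unequal inputs are revealed with this probability
....

For equal inputs the printed value vanishes up to truncation error, consistent with the claim that the "different" outcome never fires for identical states; for unequal inputs it equals the conditional probability of revealing the difference.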
in the case( ii ) of compared states with completely arbitrary phases of the complex squeezing parameters , the proof can be done in the same way as before , up to defining accordingly the set of pairs of same / different states .in this section we give a thorough analysis of the statistics of our setup also in the presence of non - unit quantum efficiency at the detection stage in order to assess its reliability in section [ s : reliab ] .the conditional probability of revealing the difference of compared states with [ but , though unknown ] , that is the probability to obtain a outcome , reads : with : \sum_{n , m=0}^{\infty}[\lambda(r_{+})]^{n}[\lambda^{*}(r_{+})]^{m } \nonumber\\ & \hspace{0.5cm}\times \sum_{k=0}^{\infty } \big\{[s(r_{-})]_{kn}\big\}^2\ , \big\{[s^{\dag}(r_{-})]_{mk}\big\}^2,\end{aligned}\ ] ] where is given in eq .( [ eq : psioutreal ] ) .for we correctly obtain . by noting that : {hk } \propto \left\{\begin{array}{ll } \exp\{i(\frac{h - k}{2})\theta\ } & \hbox{for } h , k \hbox { odd or even},\\ 0 & \hbox{otherwise } , \end{array}\right.\end{aligned}\ ] ] where , it is straightforward to see that eq .( [ p : inc ] ) does not depend on the ( equal ) phase of and .thus , in order to investigate the performances of the optimal squeezed - states comparator , we may set and let and , with , without loss of generality .furthermore , it is possible to show by numerical means that the probability does not depend on the sum of the squeezing parameter , but only on the difference latexmath:[ ] are the matrix elements of the squeezing operator as in eq .( [ final : dpc ] ) . because of eq .( [ sq : elem : phase ] ) , the probabilities ( [ p : d : eta ] ) and ( [ p : inc : eta ] ) are still independent of the unknown value of , thus , from now on , we set and put and , with , without loss of generality .( solid ) and ( dot - dashed lines ) as functions of for different values of the efficiency ; from top to bottom ( solid ) and from bottom to top ( dot - dashed lines ) : ( red ) , ( green ) , ( blue ) , ( magenta ) .[ f : p10eta ] ] in fig .[ f : p10eta ] we plot and for different values of .if , then eq . ( [ p : xi : xi ] ) can be expanded up to the second order in , obtaining : in order to assess the reliability of our setup , we address the scenario in which only two squeezing parameters for each of the squeezed vacua are possible . in such caseone knows that the two squeezing parameters are either or with the same prior probability .our squeezed - states comparator may not be optimal in this case .however , as one can see in fig .[ f : optcomp ] , the performance of our setup is nearly as good as if it was optimized also for this restricted scenario . in particular , the dashed line in fig .[ f : optcomp ] refers to the optimal measurement , unambiguously detecting the difference in the case of only two possible squeezing parameters , in formula : ( reliability ) as a function of for fixed and different values of the efficiency .bottom : reliability as a function of for difference and different values of the efficiency . in both plots , from top to bottom : ( red ) , ( green ) , ( blue ) , ( magenta ) .[ f : rd],title="fig : " ] ( reliability ) as a function of for fixed and different values of the efficiency .bottom : reliability as a function of for difference and different values of the efficiency . 
in both plots , from top to bottom : ( red ) , ( green ) , ( blue ) , ( magenta ) .[ f : rd],title="fig : " ] we define the reliability of the scheme in revealing the difference of the squeezing parameters and as the conditional probability of the two squeezed vacuum states being different if the outcome is found , i.e. , ( we assume equal prior probabilities ) : in the ideal case , i.e. , , we have and , thus , , which is guaranteed by the construction of the setup . on the other hand ,if , then and , consequently , the conclusion based on the outcome is not unambiguous anymore .the actual value of can be numerically calculated starting from eq.s ( [ p : d : eta ] ) and ( [ p : inc : eta ] ) .the reliability is plotted in the upper panel of fig . [f : rd ] as a function of .note that differently from the case , for the probability depends not only on the difference but also on the sum .the dependence on is shown in the the lower panel of fig .[ f : rd ] , where we plot as a function of for fixed difference .in this paper we have addressed the comparison of two squeezed vacuum states of which we have a single copy available .we have suggested an optical setup based on a beam splitter , a phase shifter and two photodetectors which is feasible with the current technology .even though we analyzed the scenario with an equal , though unknown , phase of the compared states , our setup is able to operate unambiguously with ideal detectors irrespective of the squeezing phases , and without the knowledge of the relative phases of the squeezed states . we have proved the optimality of our scheme for arbitrary phases and ideal detectors and we analyzed its performance and reliability also in the presence of non - unit quantum efficiency at the detection stage in the case of equal phases .as one may expect , the detection efficiency strongly affects the reliability ; nevertheless we have shown that , for small energies and not too low quantum efficiency , the setup is still robust . our scheme may be employed not only for the comparison of two squeezed vacua , but for a more general scenario in which the input states and are known to be transformed by two _ fixed known _ local unitaries and , respectively ( namely , ) or by any _ fixed known _ global unitary transformation ( ) : now it is enough to apply the inverse of the transformation before processing the state with our setup .fruitful discussions with m. ziman are acknowledged .this work has been supported by the project inquest apvv sk - it-0007 - 08 within the `` executive programme of scientific and technological co - operation between italy and slovakia '' , by the european union projects q - essence 248095 , hip 221889 , and partially supported by the cnr - cnism agreement .in this appendix we show the equivalence between the schemes in fig .[ f : schemes ] ( a ) and [ f : schemes ] ( b ) . since the squeezed states are gaussian states and all operations involved in the schemes ( phase shift and beam splitter mixing ) preserve the gaussian character , we use the phase - space description of the system evolution . for the sake of simplicity we focus on the case of real squeezing parameters , i.e. , and , with .the symplectic transformation associated with the squeezing operator is : while the symplectic transformation associated with the balanced beam splitter operator is : where is a identity matrix . 
the covariance matrix of the outgoing gaussian state in the scheme fig .[ f : schemes ] ( a ) [ for the sake of simplicity we used and we do not write explicitly the symplectic transformation of the phase shift ] : is , thus , given by : where , represents the two local squeezing operations .the explicit form of ( [ sigma : ev ] ) reads : where : note that by setting one obtains the covariance matrix of the twb in eq .( [ twb ] ) .it is now straightforward to verify that the same result of the evolution as in fig .[ f : schemes ] ( a ) , corresponding to the covariance matrix in eq .( [ sigma : out ] ) , may be obtained considering the setup displayed in fig .[ f : schemes ] ( b ) . heretwo input states with same squeezing parameter amplitude are mixed after a phase shift at the bs and the outgoing modes undergo two local squeezing operations with amplitude ; in formula : where is the symplectic transformation associated with defined in eq . ( [ twb ] ) . by performing the calculation one finds , and , since gaussian states are completely characterized by their covariance matrix ( and first moments ), one can conclude that the final states are the same .in this appendix we show that the no - error condition given in eq .( [ noerrorcond1:b ] ) , together with continuity of the involved mappings , imply that we can not unambiguously detect the sameness of two states .let us consider a state with .the no - error condition ( [ noerrorcond1:b ] ) demand that : let us now take the limit .thanks to continuity of the trace and the chosen parameterization of the set of states , we conclude that : it follows that eq . ( [ app : noerror1 ] )has to hold for arbitrary and .since is a positive operator , it should be zero on the relevant part of the hilbert space spanned by , i.e. , all the possible pairs of the compared states .hence , without loss of generality , we can choose on the whole hilbert space .30 i. jex , e. andersson and a. chefles , j. mod . opt . * 51 * , 505 ( 2004 ) .s. m. barnett , a. chefles and i. jex , phys .a * 307 * 189 , ( 2003 ) .a. chefles , e. andersson and i. jex , j. phys .a : math . gen . *37 * , 7315 ( 2004 ) .e. andersson , m. curty and i. jex , phys .a * 74 * , 022304 ( 2006 ) .m. sedlk , m. ziman , v. buek and m. hillery , phys .a * 77 * , 042304 ( 2008 ) .r. loudon and p. l. knight , j. mod . opt . * 34 * , 709 ( 1987 ) .m. g. a. paris , phys .a * 225 * , 28 ( 1997 ) .p. marian , phys .a * 44 * , 3325 ( 1991 ) .p. l. kelley , and w. h. kleiner , phys .136 * , 316 ( 1964 ) .w. h. louisell , _ quantum statistical properties of radiation _ , ( wiley , 1973 ) .a. allevi , a. andreoni , m. bondani , m. g. genoni and s. olivares , phys .a * 82 * , 013816 ( 2010 ) .s. croke , e. andersson , s. m. barnett , c. r. gilson , and j. jeffers , phys .* 96 * , 070401 ( 2006 ) . c. s. hamilton , h. lavika , e. andersson , j. jeffers , and i. jex , phys .a * 79 * , 023808 ( 2009 ) .m. kleinmann , h. kampermann , d. bruss , phys .rev . a * 72 * , 032308 ( 2005 ) .a. ferraro , s. olivares and m. g. a. paris , _ gaussian states in quantum information _( bibliopolis , napoli , 2005 ) , e - print quant - ph/0503237 .
|
we propose a scheme for unambiguous state comparison ( usc ) of two unknown squeezed vacuum states of an electromagnetic field . our setup is based on linear optical elements and photon - number detectors , and achieves optimal usc in an ideal case of unit quantum efficiency . in realistic conditions , i.e. , for non - unit quantum efficiency of photodetectors , we evaluate the probability of getting an ambiguous result as well as the reliability of the scheme , thus showing its robustness in comparison to previous proposals .
|
the fukushima dai - ichi nuclear power plant accident , which began in march 2011 , released a significant amount of radioactive substances , contaminating fukushima and surrounding prefectures .it is therefore essential to clarify the extent of this fallout and to assess its impact on the environment , foodstuffs , and on the residents in the affected areas . in fukushima prefecture ,various studies of external as well as internal exposures have been conducted since 2011 . particularly important in assessing the effect of radiation on the residentsis to conduct personal dosimetry : one of the earliest reports was by yoshida et al . , who measured the individual doses of the medical staff dispatched from nagasaki to fukushima city from march to july 2011 .they reported that the personal dose equivalent (10 ) ranged from 0.08 to 1.63 / h , significantly lower than the ambient dose equivalent rate (10 ) recorded by a monitoring station in fukushima city which ranged from 0.86 to 12.34 / h . large - scale individual dose monitorings have been conducted by most municipalities in fukushima prefecture since 2011 .for example , fukushima city started to distribute radio - photoluminescence glass dosimeters ( glass badge^^ ) to school children and pregnant women in the fall of 2011 , and the monitorings have been repeated every year .the percentage of the subjects whose measured `` additional '' dose was below 1 msv / y was 51% in 2011 , 89% in 2012 , and 93% in 2013 . in 2014 , 95.57% of the 46,436 subjects were found to be below 1 msv / y , having a half life of 2 years , and also to the decontamination efforts . ] .such individual dose monitoring using passive dosimeters report a cumulative dose over a period of time , typically three months , to the participant ; it is not possible to tell when and where the major contribution to the cumulative dose was received . in the present study, we therefore used active ( solid - state ) personal dosimeters called `` d - shuttle '' , which can record the integrated dose for each hour ( hourly dose ) .the d - shuttles had already been used successfully in some studies .for example , hayano et al . demonstrated the effectiveness of using d - shuttles to communicate the exposure situation to residents , and naito et al . used d - shuttles together with global - positioning system ( gps ) receivers to compare individual versus ambient dose equivalent rates . in the present study ,216 high - school students and teachers wore d - shuttles and kept journals of their behaviour for two weeks in 2014 , and the external individual doses thus obtained were compared across the regions .this study was motivated and initiated by the high - school students living in fukushima who wished to compare their own individual doses with those of people living in other parts of japan , and also in other countries .the measurements were carried out by high - school students and teachers from twelve japanese high schools ( six in fukushima prefecture , see fig .[ fig : fukushima ] , and six outside of fukushima , see fig .[ fig : outoffukushima ] ) , four high schools ( three regions ) in france ( fig .[ fig : france_background ] ) , eight high schools ( seven regions ) in poland ( fig . [ fig : poland_background ] ) and two high schools in belarus ( fig .[ fig : belarus ] ) .the total number of participants was 216 , and the measurement period was two weeks during the school term in each country . 
.the six japanese schools outside fukushima prefecture were chosen by consulting the `` geological map of japan '' ( fig . [ fig : outoffukushima ] , a natural radiation level map published by the geological society of japan ) .fukuyama ( labelled 1 . in fig .[ fig : outoffukushima ] ) , tajimi ( 4 . ) and ena ( 5 . )are in the region where the natural terrestrial background radiation level is relatively high , while nada ( 2 . ) , nara ( 3 . ) and kanagawa ( 6 . )are in the low - background region .the natural radiation level map of japan ( in ngy / h ) calculated from the chemical analyses of the soil samples by adding contributions from uranium , thorium and potassium-40 .the map was adopted from ref .note that the colour coding schemes are different between this figure and that in fig [ fig : fukushima ] . ]the locations of the participating schools in france , and their nearby air dose rates ( obtained from the irsn ambient dose monitor ) ] the locations of the participating schools in poland , together with the average air dose rate of the county in which each school is located ( obtained from ref . ) ] the ambient dose equivalent rates in belarus ( april 2015 ) , obtained from ref . , together with the locations of the participating schools . ] in fukushima prefecture , based on the airborne dose - rate monitoring map ( fig .[ fig : fukushima ] ) , schools in major cities , fukushima , nihonmatsu , koriyama , iwaki , aizu were selected .note that some 100,000 people were forced to evacuate from the restricted zone ( indicated by the white border in fig .[ fig : fukushima ] ) , who , after four years of the accident , can not yet return to their homes . as such , the present study does * not * include high schools in the restricted zones .when choosing the participants from fukushima prefecture high schools , care were taken so as to choose students living in various areas , house types ( wooden vs concrete ) , and in their extracurricular activities . in france and poland ,the high schools involved in the study participated on a voluntary basis without specific selection . in france , the four high schools are located in three different regions characterized by a range of natural terrestrial radiation background level , the lowest level being observed in boulogne ( closed to paris ) while higher value is observed in corsica ( see figure [ fig : france_background ] ) . in poland, the location of the schools is also ranging from lower values for high schools in the region of warszawa to the highest in zabrze ( see figure [ fig : poland_background ] ) .two high schools from belarus were involved due to their location in the gomel region , impacted by the fallout of the chernobyl accident .the first high school is located in gomel city while the second one is located close to the exclusion zone ( in bragin district ) and thus characterized by a higher ambient radiation dose rate ( see figure [ fig : belarus ] ) .the individual dose - meter , called `` d - shuttle '' ( fig .[ fig : dshuttle ] ) , developed jointly by the national institute of advanced industrial science and technology ( aist ) and chiyoda technol corporation is a light ( 23 g ) and compact ( 68 mm ( h ) 32 mm ( w ) 14 mm ( d ) ) device , based on a mm silicon sensor , and is capable of logging the integrated dose every hour in an internal memory with time stamps .the memory can be later read out by using a computer interface . 
by comparing the data with the activity journal kept by the participant, we can analyse the relationship between the personal dose and behaviour ( when , where , and what ) of the participant .each d - shuttle was calibrated with a calibration source for (10 ) , in accordance with the international organization for standardization ( iso 4037 - 3 ) .the relative response is 30% from 60 kev to 1.25 mev , and its least detectable value was 0.01 / h .the (10 ) measured with a personal dosimeter such as d - shuttle is known not to underestimate the effective dose ( i.e. , ) even in cases of lateral or isotropic radiation incidence on the body , as in the present study .although the d - shuttle is well shielded against external electromagnetic noise and is protected against mechanical vibration , occasional spurious `` hits '' are unavoidable .these typically show up as an isolated large `` spike '' in the readout data . in such cases, we checked the activity journal and queried the participant to determine whether or not the `` spike '' was likely caused by radiation , as will be discussed in detail in section [ sec : outliers ] .participants were instructed to always wear the d - shuttle on his / her chest , except during sleep when the unit was left near the bedside . in table [ tab : rawdata ] , typical data read out from the d - shuttle are shown together with a part of the activity journal .the number of data points per participant was 24 h d = 336 . as there were some participants who could not take part in the measurements for the full 14 days , the total number of data points for the 216 people were 70,879 .the d - shuttle records both natural radiation and the radiation due to the accident ( ) ; the latter contribution is negligible except in fukushima prefecture and in belarus . when comparing the individual doses across the regions , we used the recorded doses by the d - shuttle including both doses from natural background radiation and radiation from radiocaesiums , since these two contribute inseparably and additively to the individual doses .in this way , in addition to using the same device and standardising the measurement protocol , it is possible to compare the individual doses across all the participating regions ..[tab : rawdata ] a typical example of a d - shuttle data . [ cols="^,^,^",options="header " , ]show that the personal doses for the high school students from fukushima prefecture were not significantly higher than in other regions and countries .this may seem surprising since there are still contributions from radioactive caesium in fukushima , as airborne monitoring and other measurements clearly show . in order to better understand the situation , we show in fig .[ fig : estimateddose ] the estimated air kerma for the 12 participating schools in japan , using the database of chemical analysis of soil samples .we selected the soil - sample data within 5 km of participating schools from the database ( when there were multiple sampling points , we took their average ) , and used the equation to estimate the air kerma ( ngy / h ) , with being the concentration ( in % ) , being the uranium concentration ( in ppm ) and being the thorium concentration ( in ppm ) . the relation between the air kerma and the effective dose varies depending on the irradiation angle to the body ( c.f . , icrp publication 74 ( 1996) , figure 9 ) . 
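The numerical conversion from the soil concentrations of K, U and Th to air kerma is not spelled out above, so the following small Python helper is offered purely as an illustration of the type of relation involved. The coefficients are the widely quoted UNSCEAR-type values and are an assumption on our part; they may differ from those actually used with the geochemical database, and the function name and example composition are hypothetical.

....
def terrestrial_air_kerma_rate(k_percent, u_ppm, th_ppm):
    """Approximate outdoor air kerma rate (nGy/h) from soil concentrations.

    The conversion factors are commonly quoted UNSCEAR-type values
    (assumed here, not taken from this study): roughly 13.1 nGy/h per % K,
    5.7 nGy/h per ppm U and 2.5 nGy/h per ppm Th, for terrestrial gamma
    rays about 1 m above the ground.
    """
    return 13.1 * k_percent + 5.7 * u_ppm + 2.5 * th_ppm

# hypothetical soil composition: 1.5 % K, 2 ppm U, 8 ppm Th
print(terrestrial_air_kerma_rate(1.5, 2.0, 8.0))   # about 51 nGy/h
....

The subsequent step, converting air kerma to effective dose, is the relation discussed in the surrounding text.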
as people almost evenly receive terrestrial background radiation from all sides ,rotational irradiation geometry is adequate for the relation .although effective dose per unit air kerma ( sv / gy ) at rotational geometry is around 0.9 in the energy region of terrestrial background radiation ( c.f . , ibid .two - dot chain line of figure 8) , it can be treated as unity without losing the rationality of our argument , since the difference lies within the uncertainty of the measurement .as shown , outside of fukushima , the estimated effective dose rates from the terrestrial radiation air kerma rates and measured individual dose rates are correlated and have similar values .however , in fukushima , the individual dose rates are higher than the estimated effective dose rates of terrestrial radiation .in fact , the terrestrial radiation background is low in fukushima ; the radiation due to the distributed radio - caesium was added on top of the terrestrial radiation , but that increment is not high as might be expected .thus , although the dose rate in most of fukushima prefecture was elevated by the nuclear accident , the total external individual dose rates observed for the fukushima high school students involved in this study are not significantly different from those in other regions .the natural radiation levels vary from region to region . in japan, the nation - wide average of the terrestrial gamma - ray contribution to the effective dose is evaluated to be 0.33 msv / y , lower than the world average of 0.48 msv / y . in france , the average value is 0.47 msv / y , similar to the world average but with variation from a factor 5 according the regions , ranging from about 0.2 to 1 msv / y . in the present study ,the d - shuttle measured the sum of the natural radiation dose and the additional dose due to the nuclear accident , if any was detectable .nevertheless , in fukushima as well as in belarus , the individual annual dose _ including natural radiation _ was below 1 msv / y for most of the participating high - school students .it is interesting to mention that icrp stated in publication 111 that `` past experience has demonstrated that a typical value used for constraining the optimisation process in long - term post - accident situations is 1 msv / year '' .twelve high schools in japan ( of which six are in fukushima prefecture ) , four in france , eight in poland and two in belarus cooperated in the measurement and comparison of individual external doses in 2014 . 
in total 216 high - school students and teachers participated in the study .each participant wore an electronic personal dosimeter `` d - shuttle '' for two weeks , and kept a journal of his / her whereabouts and activities .the median annual doses were estimated to be 0.63 - 0.97 msv / y in fukushima prefecture , 0.55 - 0.87 msv / y outside of fukushima in japan , 0.51 - 1.10 msv / y in europe ( 0.09 in belarus ) , thus demonstrating that the individual external doses currently received by participants in fukushima and belarus are well within the terrestrial background radiation levels of other regions / countries .the present study also demonstrated that the measurement of individual dose rates together with the use of activity journals is a powerful tool for understanding the causes of external exposures , and can be useful and clearly understandable tool for risk communication for people living in contaminated areas .the stacked bars show the estimated natural radiation level ( ngy / h ) around the participating schools , based on the soil - sample chemical analysis database ( n.b .since those soil samples were collected from riverbeds , the estimated radiation levels may not necessarily coincide with the typical values in residential areas ) .the individual hourly dose ( nsv / h ) distribution measured at each school is indicated by the box diagrams ( same as in fig .[ fig : comparison1 ] ) . ]this study was conceived by students of fukushima high school , and was designed by hara t. , niwa o. , miyazaki m. , tada j. , schneider t. , charron s. , and hayano r. data were analysed by onodera h. , kiya m. , suzuki k. , suzuki r. , and saito m. onodera h. , suzuki r. , saito m. , anzai s. , and fujiwara y. wrote the manuscript in japanese , and hayano r. , niwa o. , schneider t. , and tada j. finalised the english version .technical details regarding the d - shuttle were provided by ohguchi h. all the students and teachers in the author list participated in the measurement , and contributed comments on the data analysis and the manuscript .this work was in part supported by the super science high school ( ssh ) programme of japan science and technology agency ( jst ) .hayano , m. tsubokura , m. miyazaki , h. satou , k. sato , s. masaki , and y. sakuma , internal radiocesium contamination of adults and children in fukushima 7 to 20 months after the fukushima npp accident as measured by extensive whole - body- counter surveys ,proceedings of the japan academy , series b 89 , 157 - 163 ( 2013 ) .kohji yoshida , kanami hashiguchi , yasuyuki taira , naoki matsuda , shunichi yamashita , and noboru takamura , `` importance of personal dose equivalent evaluation in fukushima in overcoming social panic '' , radiation protection dosimetry 151 ( 2012 ) , 10.1093/rpd / ncr466 .hayano r s and miyazaki m 2014 internal and external doses in fukushima : measuring and communicating personal doses fbnews no.447 . 15 ( in japanese ) http://www.c-technol.co.jp/cms/wp-content/uploads/2014/04/447fbn.pdf w. naito , m. uesaka , c. yamada , and h. 
ishii , `` evaluation of dose from external irradiation for individuals living in areas affected by the fukushima daiichi nuclear plant accident '' , radiation protection dosimetry 163 ( 2015 ) , 10.1093/rpd / ncu201 .isajenko k , piotrowska b , fujak m and karda m 2011 radiation atlas of poland , central laboratory for radiological protection http://www.gios.gov.pl/images/dokumenty/pms/monitoring_promieniowania_jonizujscego/atlas_radiologiczny_polski_2011.pdf .iso 4037 - 3 : `` x and gamma reference radiation for calibrating dosemeters and doserate meters and for determining their response as a function of photon energy - part3 : calibration of area and personal dosemeters and the measurement of their response as a function of energy and angle of incidence '' .icrp , 2009 .application of the commission s recommendations to the protection of people living in long - term contaminated areas after a nuclear accident or a radiation emergency .icrp publication 111 .icrp 39 ( 3 ) .
|
* abstract * twelve high schools in japan ( of which six are in fukushima prefecture ) , four in france , eight in poland and two in belarus cooperated in the measurement and comparison of individual external doses in 2014 . in total 216 high - school students and teachers participated in the study . each participant wore an electronic personal dosimeter `` d - shuttle '' for two weeks , and kept a journal of his / her whereabouts and activities . the distributions of annual external doses estimated for each region overlap with each other , demonstrating that the personal external individual doses in locations where residence is currently allowed in fukushima prefecture and in belarus are well within the range of estimated annual doses due to the terrestrial background radiation level of other regions / countries .
|
Apache Lucene is a high-performance and full-featured text search engine library written entirely in Java. It is a technology suitable for nearly any application that requires full-text search. Lucene is scalable, offers high-performance indexing, and has become one of the most widely used search engine libraries in both academia and industry. The Lucene ranking function, the core of any search engine, applied to determine how relevant a document is to a given query, is built on a combination of the vector space model (VSM) and the Boolean model of information retrieval. The main idea behind the Lucene approach is that the more times a query term appears in a document, relative to the number of times the term appears in the whole collection, the more relevant that document will be to the query. Lucene also uses the Boolean model to first narrow down the documents that need to be scored, based on the use of Boolean logic in the query specification. In this paper, the implementation of the BM25 probabilistic model and of its extension for semi-structured IR, BM25F, is described in detail. One of the main constraints that keeps Lucene from being widely used by the IR community is the lack of implementations of different retrieval models. Our goal with this work is to offer the IR community a more advanced ranking model which can be compared with other IR software, like Terrier, Lemur, Clairlib or Xapian.

There exist previous implementations of alternative information retrieval models for Lucene. The most representative case is the language model implementation from the Intelligent Systems Lab Amsterdam. Another example is the work where Lucene is compared with the Juru system; in this case the Lucene document length normalisation is changed in order to improve the performance of the Lucene ranking function. BM25 has been widely used by IR researchers and engineers to improve search engine relevance, so from our point of view a BM25/BM25F implementation for Lucene becomes necessary to make Lucene more popular in the IR community.

The developed models are based on the information that can be found in the work of Robertson and Zaragoza. More specifically, the implemented BM25 ranking function is

$$ R(q,d)=\sum_{t\in q}\frac{\mathrm{tf}(t,d)}{k_{1}\,\bigl((1-b)+b\,\tfrac{dl}{avdl}\bigr)+\mathrm{tf}(t,d)}\cdot \mathrm{idf}(t), $$

where $\mathrm{tf}(t,d)$ is the term frequency of $t$ in $d$; $dl$ is the document length; $avdl$ is the average document length over the collection; $k_{1}$ is a free parameter, usually chosen as 2; $0<b<1$ controls the document length normalisation; and $\mathrm{idf}(t)$ is the inverse document-frequency weight of $t$. (BM25F replaces the raw term frequency by a weighted combination of per-field frequencies, each normalised by its own field length, before applying the same saturation.) Given a TopDocs object named top returned by the searcher for a BM25 query, the scored results can be iterated and printed as usual:

....
ScoreDoc[] docs = top.scoreDocs;
for (int i = 0; i < top.scoreDocs.length; i++) {
    System.out.println(docs[i].doc + " : " + docs[i].score);
}
....

The authors want to thank Hugo Zaragoza for his review and comments.

http://lucene.apache.org/java/docs/ . Stephen Robertson and Hugo Zaragoza, _The Probabilistic Relevance Model: BM25 and Beyond_, The 30th Annual International ACM SIGIR Conference, 23-27 July 2007, Amsterdam. _Integrating BM25 & BM25F into Lucene_, website, 2008, http://nlp.uned.es/~jperezi/lucene-bm25/jar/models.jar . _Integrating BM25 & BM25F into Lucene - Javadoc_, website, 2008, http://nlp.uned.es/~jperezi/lucene-bm25/javadoc . Doron Cohen, Einat Amitay and David Carmel, _Lucene and Juru at TREC 2007: 1-Million Queries Track_, TREC 2007, http://trec.nist.gov/pubs/trec16/papers/ibm-haifa.mq.final.pdf .
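As a language-neutral supplement to the formula given above, the following plain-Python sketch implements BM25 scoring together with a BM25F-style weighted term frequency. It is deliberately independent of the Lucene classes discussed in this paper: the function names, the Robertson-Sparck Jones form of the idf, and the default parameter values (k1 = 2, b = 0.75) are illustrative choices, not the API of the downloadable package.

....
import math

def idf(n_docs, doc_freq):
    # Robertson-Sparck Jones style inverse document frequency
    return math.log((n_docs - doc_freq + 0.5) / (doc_freq + 0.5))

def bm25_score(query_terms, tf, doc_len, avg_doc_len, n_docs, df, k1=2.0, b=0.75):
    """BM25 score of one document for a bag-of-words query.

    tf : dict term -> raw frequency in the document
    df : dict term -> number of documents containing the term
    """
    norm = k1 * ((1.0 - b) + b * doc_len / avg_doc_len)
    score = 0.0
    for t in query_terms:
        f = tf.get(t, 0)
        if f > 0 and t in df:
            score += (f / (norm + f)) * idf(n_docs, df[t])
    return score

def bm25f_weighted_tf(field_tf, field_len, avg_field_len, weights, b_field):
    # BM25F-style pseudo-frequency: per-field frequencies are length-normalised
    # and boosted, then summed; the result replaces tf in a k1 saturation as above.
    total = 0.0
    for field, f in field_tf.items():
        denom = (1.0 - b_field[field]) + b_field[field] * field_len[field] / avg_field_len[field]
        total += weights[field] * f / denom
    return total

# toy example
print(bm25_score(["lucene", "bm25"], {"lucene": 3, "bm25": 1},
                 doc_len=120, avg_doc_len=100.0, n_docs=10000,
                 df={"lucene": 50, "bm25": 15}))
....

The toy call prints the score of a single hypothetical document; in practice the same statistics would be read from the index rather than passed in by hand.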
|
this document describes the bm25 and bm25f implementation using the lucene java framework . the implementation described here can be downloaded from . both models have stood out at trec by their performance and are considered as state - of - the - art in the ir community . bm25 is applied to retrieval on plain text documents , that is for documents that do not contain fields , while bm25f is applied to documents with structure .
|
population - wide information cascades are rare events , initially triggered by a single seed or a small number of initiators , in which rumors , fads or political positions are adopted by a large fraction of an informed community . in recent years, some theoretical approaches have explored the topological conditions under which system - wide avalanches are possible ; whereas others have proposed threshold , rumor- or epidemic - like dynamics to model such phenomena . beyond these efforts , digitally - mediated communication in the era of the web 2.0 has enabled researchers to peek into actual information cascades arising in a variety of platforms blogs and online social networks ( osns ) mainly , but not exclusively .notably , these latter empirical works deal with a wide variety of situations .first , the online platforms under analyses are not the same .indeed , we find research focused on distinct social networks such as facebook , twitter , flickr , digg or the blogospehere which build in several types of user - user interactions to satisfy the need for different levels of engagement between users . as a consequence , although scholars make use of a mostly common terminology ( `` seed '' , `` diffusion tree '' , `` adopter '' , etc . ) and most analyses are based on similar descriptors ( size distributions , identification of influential nodes , etc ) , their operationalization of a cascade i.e ., how a cascade is defined largely varies .this fact is perfectly coherent , because how information flows differs from one context to another .furthermore , even _ within _ the same osn different definitions may be found ( compare for instance and ) .such myriad of possibilities is not necessarily controversial : it merely reflects a rich , complex phenomenology . andyet it places weighty constraints when it comes to generalize some results .the study of information cascades easily evokes that of influence diffusion patterns , which in turn has obvious practical relevance in terms of enhancing the reach of a message ( i.e. marketing ) or for prevention and preparedness . in these applicationsa unique definition would be highly desirable , as proposed in classical communication theory .on the other hand , the profusion of descriptions and the plurality of collective attention patterns hinder some further work aimed to confirm , extend and seek commonalities among previous findings . in this work we capitalize on a type of cascade definition which pivots on time constraints rather than `` content chains '' .despite the aforementioned heterogeneity , all but one empirical works on cascades revolve exclusively around information forwarding : the basic criterion to include a node in a diffusion tree is to guarantee that ( a ) the node sends out a piece of information at time ; ( b ) such piece of information was received from a friend who had previously sent it out , at time ; and finally ( c ) and became friends at , before received the piece of information ( the notion of `` friend '' changes from osn to osn , and must be understood broadly here ) .note that no strict time restriction exists besides the fact that , the emphasis is placed on whether the _ same _ content is flowing .this work instead turns to topic - specific data in which it is safely assumed that content is similar , and the inclusion in a cascade depends not on the repetition of a message but rather on the engagement in a `` conversation '' about a matter . 
beyond our conceptualization of a cascade , this work seeks first to test the robustness of previous findings in different social contexts , and then moves on towards a better understanding of how deep and fast do cascades grow .the former implies reproducing some general outcomes regarding cascade size distributions , and how such cascades scale as a function of the initial node s position in the network .the latter aims at digging into cascades , to obtain information about their temporal and topological hidden patterns .this effort includes questions such as the duration and depth of cascades , or the relation between community structure and cascade s outreach .our methodology allows to prove the existence of a subtle class of reputed nodes , which we identify as `` hidden influentials '' after , who have a major role when it comes to spawn system - wide phenomena .our data comprises a set of messages publicly exchanged through _ www.twitter.com _ from the of march , 2011 , to the of march , 2012 .the whole sample of messages was filtered by the spanish start - up company _ cierzo development _ , restricting them to those that contained at least one of 20 preselected hashtags ( see table 1 ) .these hashtags correspond to distinct topics , thus we obtained different subsets to which we assign a generic tag .we present the results for two of these subsets .one sample consists of 1,188,946 tweets and is related to the spanish grassroots movement popularly known as `` 15 m '' , after the events on the 15th of may , 2011 .this movement has however endured over time , and in this work we will refer to it as _grassroots_. messages were generated by 115,459 unique users .on the other hand , 606,645 filtered tweets referred to the topic `` spanish elections '' , which were celebrated on the third week of november , 2011 .this sample was generated by 84,386 unique users . using the twitter api we queried for the list of followers for each of the users , discarding those who did not show outgoing activity during the period under consideration . in this way , for each data set , we obtain an unweighted directed network in which each node represents an active user ( regarding a particular topic ) . a link from user to user established if follows . therefore , out - degree ( ) represents the number of followers a node has , whereas in - degree ( ) stands for its number of friends , i.e. 
, the number of users it follows .the link direction reflects the fact that a tweet posted by is ( instantaneously ) received by , indicating the direction in which information flows .although the set of links may vary in the scale of months we take the network structure as completely static , considering the topology at the moment of the scrap .twitter is most often _ exclusively _ defined as a microblogging service , emphasizing its broadcasting nature .such definition overlooks however other facets , such as the use of twitter to interact with others , in terms of _ conversations _ or _collaboration _ , for instance connecting groups of people in critical situations ._ addressivity _ accentuates these alternative features .moreover , observed patterns of link ( follower relation ) reciprocity ( see table 2 ) hint further the use of twitter as an instant messaging system , in which different pieces of information around a topic may be circulating ( typically over short time spans ) in many - to - many interactions , along direct or indirect information pathways .it is precisely in this type of interactions where the definition of a time - constrained cascade is a useful tool to uncover how and how often users get involved in sequential message interchange , in which the strict repetition of contents is not necessary ( possibly not even frequent ) .a time - constrained cascade , starting at a _ seed _ at time , occurs whenever some of those who `` hear '' the piece of information react to it including replying or forwarding it within a prescribed time frame $ ] , thereby becoming _spreaders_. the cascade can live further if , in turn , reactions show up in , , and so on . since messages in twitter are instantly broadcasted to the set of users following the source , we define listener cascades as those including both active ( spreader ) and passive participants . in consideringso we account for the upper bound of awareness over a certain conversation in the whole population ( see figure [ fig1 ] for illustration ) .admittedly , our conceptualization does not control for exogenous factors which may be occurring at the onset and during cascades .this being so , ours is a comprehensive account of information cascades . to that follows , as any tweet posted by is automatically received by .red nodes are those who posted a new message at the corresponding time , whereas gray nodes only _ listened _ to their friends .in this particular example , user acts as the initial seed , emitting a message at time which is instantaneously sent to its nearest neighbors , laying on the first dashed circle , who are counted as part of the cascade . some of them ( nodes , , and ) decide to participate at the following time step , , posting a new message and becoming intermediate spreaders of the cascade . if any of their followers show activity at the process continues and the cascade grows in size as new users listen to the message .the process finally ends when no additional users showed activity ( as it happens in the cases of users and ) , or when an intermediate spreader does not have any followers ( users , and ) . ]we apply the latter definition to explore the occurrence of listener cascades in the `` grassroots '' and `` elections '' data . 
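as a concrete companion to this definition, the sketch below builds one time - constrained listener cascade in java. it assumes the follower graph is available as a map from each user to the set of accounts that follow him or her, and that each user's posting times are known; these data structures and all names are ours, and the fragment is only meant to make the definition operational, not to reproduce the actual analysis code.
....
import java.util.*;

// minimal sketch of a time-constrained "listener" cascade, following the
// definition in the text: listeners are all followers of an active user,
// and a listener becomes a spreader if it posts within deltaT of hearing.
public class ListenerCascade {

    // followers.get(u) = users who follow u (i.e. who instantly receive u's posts)
    // activity.get(u)  = sorted posting times of u
    static Set<String> build(String seed, long t0, long deltaT,
                             Map<String, Set<String>> followers,
                             Map<String, TreeSet<Long>> activity,
                             Set<String> alreadyAssigned) {
        Set<String> cascade = new HashSet<>();          // listeners + spreaders
        Deque<String> spreaders = new ArrayDeque<>();
        Deque<Long> postTimes = new ArrayDeque<>();
        cascade.add(seed);
        alreadyAssigned.add(seed);
        spreaders.add(seed);
        postTimes.add(t0);

        while (!spreaders.isEmpty()) {
            String u = spreaders.poll();
            long t = postTimes.poll();
            for (String f : followers.getOrDefault(u, Collections.emptySet())) {
                if (alreadyAssigned.contains(f)) continue;  // one cascade per node
                cascade.add(f);                             // f at least listens
                alreadyAssigned.add(f);
                // did f react within (t, t + deltaT]?
                Long reaction = activity.getOrDefault(f, new TreeSet<>()).higher(t);
                if (reaction != null && reaction <= t + deltaT) {
                    spreaders.add(f);        // f becomes an intermediate spreader
                    postTimes.add(reaction); // the window restarts from its post
                }
            }
        }
        return cascade;
    }

    public static void main(String[] args) {
        Map<String, Set<String>> fol = new HashMap<>();
        fol.put("seed", new HashSet<>(Arrays.asList("a", "b")));
        fol.put("a", new HashSet<>(Collections.singletonList("c")));
        Map<String, TreeSet<Long>> act = new HashMap<>();
        act.put("a", new TreeSet<>(Collections.singletonList(5L)));
        // prints [seed, a, b, c]: "a" reacted in time, so its follower "c" listens too
        System.out.println(build("seed", 0L, 24L, fol, act, new HashSet<>()));
    }
}
....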
in practice , we take a seed message posted by a given user at time $t_0$ and include all of that user s followers in the diffusion tree hanging from it . we then check whether any of these listeners showed some activity within the window $[t_0 , t_0 + \Delta t]$ , increasing the depth of the tree . this is done recursively ; the tree s growth ends when no other follower shows activity . passive listeners constitute the set of leaves in the tree . in our scheme , a node can only belong to one cascade ( but could participate in it multiple times ) ; this restriction may introduce measurement biases . namely , two nodes sharing a follower may show simultaneous activity , but their follower can only be counted in one or the other cascade ( with possible consequences regarding cascade size distributions or depth in the diffusion tree ) . to minimize this degeneration , we perform calculations for many possible cascade configurations , randomizing the way we process the data . in the next sections we report some results for the aforementioned data subsets ( `` grassroots '' , `` elections '' ) considering their whole time span ( over one year ) . our results have been obtained for a window $\Delta t$ of 24 hours . previous works acknowledge the robustness of cascade statistics across a range of choices of this window ; also , a 24-hour window may be regarded as an inclusive bound of the popularity of a piece of information over time on different osns , including twitter . the identification of modules in complex networks has attracted much attention from the scientific community in the last years , and social networks are a prominent example . a modular view of a network offers a coarse - grained perspective in which nodes are classified in subsets on the basis of their topological position and , in particular , the density of connections between and within groups . in osns , this classification usually overlaps with node attribute data , like gender , geographical proximity or ideology . to detect statistically significant clusters we rely on the concept of modularity : $q = \frac{1}{2L} \sum_{i,j} \left( a_{ij} - \frac{k_i k_j}{2L} \right) \delta(c_i , c_j)$ where $L$ is the number of links in the network ; $a_{ij}$ is 1 if there is a link from node $i$ to $j$ and 0 otherwise ; $k_i$ is the connectivity ( degree ) of node $i$ ; and finally the kronecker delta function $\delta(c_i , c_j)$ takes the value 1 if nodes $i$ and $j$ are classified in the same community and 0 otherwise . summarizing , $q$ quantifies how far a certain partition is from a random counterpart ( null model ) . from the definition of $q$ , ever faster algorithms and heuristics to optimize modularity have appeared , with an increasing degree of accuracy . all these efforts have led to considerable success regarding the quality of the detected community structure in networks , and thus a more complete topological knowledge at this level has been attained . in this work we present results for communities detected with the walktrap method , in which a fair balance between accuracy and efficiency is sought . the algorithm exploits random walk dynamics ; the basic idea behind it is that a random walker tends to get trapped in densely connected parts of the graph , which correspond to communities . pons and latapy s proposal is particularly efficient because , as modularity is increasingly optimized , vertices are merged into a coarse - grained structure , reducing the computational cost of the dynamics . the resulting clusters at each stage of the algorithm are aggregated , and the process is repeated iteratively . one of the most useful applications of community analyses is a better understanding of the position of a node .
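the modularity of a given partition can be computed directly from the definition above; the following java sketch does so for an undirected, unweighted network described by an edge list and a community label per node, using the standard equivalent per - community form of $q$. it is of course not the walktrap implementation itself, only the quantity used to score candidate partitions, and all names are ours.
....
import java.util.*;

// computes the modularity q of a partition of an undirected, unweighted graph,
// using the equivalent per-community form
//   q = sum over communities c of [ l_c / L - (d_c / (2L))^2 ]
// where L is the total number of links, l_c the number of intra-community links
// and d_c the total degree of the nodes in community c.
public class Modularity {

    // edges: each int[2] is an undirected link {u, v}; community.get(u) = label of u
    static double q(List<int[]> edges, Map<Integer, Integer> community) {
        double L = edges.size();
        Map<Integer, Double> intra = new HashMap<>();   // l_c
        Map<Integer, Double> degree = new HashMap<>();  // d_c
        for (int[] e : edges) {
            int cu = community.get(e[0]);
            int cv = community.get(e[1]);
            degree.merge(cu, 1.0, Double::sum);
            degree.merge(cv, 1.0, Double::sum);
            if (cu == cv) intra.merge(cu, 1.0, Double::sum);
        }
        double q = 0.0;
        for (int c : degree.keySet()) {
            double lc = intra.getOrDefault(c, 0.0);
            double dc = degree.get(c);
            q += lc / L - Math.pow(dc / (2.0 * L), 2);
        }
        return q;
    }

    public static void main(String[] args) {
        // two triangles joined by a single link: a strongly modular toy graph.
        List<int[]> edges = Arrays.asList(
            new int[]{0, 1}, new int[]{1, 2}, new int[]{0, 2},
            new int[]{3, 4}, new int[]{4, 5}, new int[]{3, 5},
            new int[]{2, 3});
        Map<Integer, Integer> community = new HashMap<>();
        for (int i = 0; i < 6; i++) community.put(i, i < 3 ? 0 : 1);
        System.out.println(q(edges, community));  // roughly 0.357
    }
}
....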
in terms of information diffusion , and much like in , we explore whether community structure ( and in particular , the relation of a seed node with the module it belongs to ) has an impact on a cascade s success . to do so we adopt the node descriptors proposed by guimerà _ et al . _ in : the z - score of the internal degree of each node in its module , and the participation coefficient of a node ( $p_i$ ) , which captures how the node is positioned in its own module and with respect to other modules . the _ within - module degree _ and the _ participation coefficient _ are easily computed once the modules of a network are known . if $\kappa_i$ is the number of links of node $i$ to other nodes in its module $s_i$ , $\bar{\kappa}_{s_i}$ is the average of $\kappa$ over all the nodes in $s_i$ , and $\sigma_{\kappa_{s_i}}$ is the standard deviation of $\kappa$ in $s_i$ , then $z_i = ( \kappa_i - \bar{\kappa}_{s_i} ) / \sigma_{\kappa_{s_i}}$ is the so - called _ z - score _ . the participation coefficient $p_i$ of node $i$ is defined as : $p_i = 1 - \sum_{s} \left( \frac{\kappa_{is}}{k_i} \right)^2$ where $\kappa_{is}$ is the number of links of node $i$ to nodes in module $s$ , and $k_i$ is the total degree of node $i$ . note that the participation coefficient approaches its maximum when the node s links are uniformly distributed among all the modules , while it is 0 if all its links are within its own module . those nodes that deviate largely from the average internal connectivity are local hubs , whereas large values of $p_i$ stand for connector nodes bridging together different modules . as a starting point , we test the robustness of the results partially presented in , and further explored in . results shown in figure [ fig2 ] confirm these findings . the upper panels show that the sizes of time - constrained cascades are distributed in a highly heterogeneous manner , with only a small fraction of all cascades reaching system - wide proportions . this is in good agreement with most preceding works , which have also found that large cascades occur only rarely . on the other hand , when cascades are grouped together such that the reported size corresponds to an average over topological classes , we find that both the degree ( middle panels ) and the coreness ( $k$-core , lower panels ) of nodes correlate positively with cascade sizes . some theoretical approaches predict similar behavior . [ lower panels : spreading capability grouped by the $k$-core of the initial spreader . for the sake of comparison , degree and $k$-core have been rescaled by their corresponding maxima . ]
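the two descriptors just defined can be computed with a few lines of code once the partition is known. the sketch below does so for the same edge - list representation used in the modularity sketch above; again, names and data layout are ours.
....
import java.util.*;

// within-module degree z-score and participation coefficient for every node
// of an undirected graph, given a community label per node.
public class RoleDescriptors {

    static void compute(List<int[]> edges, Map<Integer, Integer> community,
                        Map<Integer, Double> zOut, Map<Integer, Double> pOut) {
        // linksTo.get(u).get(s) = number of links of node u towards module s
        Map<Integer, Map<Integer, Double>> linksTo = new HashMap<>();
        for (int[] e : edges) {
            linksTo.computeIfAbsent(e[0], x -> new HashMap<>())
                   .merge(community.get(e[1]), 1.0, Double::sum);
            linksTo.computeIfAbsent(e[1], x -> new HashMap<>())
                   .merge(community.get(e[0]), 1.0, Double::sum);
        }
        // collect internal degrees per module, to get their mean and deviation
        Map<Integer, List<Double>> internalByModule = new HashMap<>();
        for (int u : community.keySet()) {
            double internal = linksTo.getOrDefault(u, Collections.emptyMap())
                                     .getOrDefault(community.get(u), 0.0);
            internalByModule.computeIfAbsent(community.get(u), x -> new ArrayList<>())
                            .add(internal);
        }
        for (int u : community.keySet()) {
            Map<Integer, Double> links = linksTo.getOrDefault(u, Collections.emptyMap());
            double k = 0.0;
            for (double v : links.values()) k += v;
            // participation coefficient: p = 1 - sum_s (k_us / k)^2
            double p = 0.0;
            if (k > 0) {
                p = 1.0;
                for (double ks : links.values()) p -= (ks / k) * (ks / k);
            }
            pOut.put(u, p);
            // z-score of the internal degree within u's own module
            List<Double> sample = internalByModule.get(community.get(u));
            double mean = 0.0, var = 0.0;
            for (double x : sample) mean += x;
            mean /= sample.size();
            for (double x : sample) var += (x - mean) * (x - mean);
            double sd = Math.sqrt(var / sample.size());
            double internal = links.getOrDefault(community.get(u), 0.0);
            zOut.put(u, sd > 0 ? (internal - mean) / sd : 0.0);
        }
    }
}
....
applied to the two - triangle toy graph of the modularity sketch , the two end points of the bridge obtain $p = 4/9$ while every other node obtains $p = 0$ , and all z - scores come out as 0 because internal degrees are uniform within each module ( the standard deviation is zero , which the sketch maps to $z = 0$ ) .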
in this most frequent case ,the cascade of listeners simply accounts for the out - degree of the seed node .additionally , the bulk of cascades penetrates up to or , both for `` grassroots '' and `` elections '' , which is in the range of the average path length , but fairly below the upper bound , which is set by the network s diameter ( 10 and 9 respectively ; see table 2 ) .interestingly , as shown in the figure , when a cascade moves beyond the average path length between the initial node and any node on the network , namely , to nodes distant , a large fraction of the population will likely be engaged in a cascade that will reach system - wide sizes with high probability .is the largest shortest path length between the initial seed and any node involved in the same cascade , where as refers to the cascades lifetimes .middle panels : box - plots for topological penetration .lower panels : box - plots for temporal penetration .cascades spreading success grows with time , and some exceptional conversations can last for months ( note the broken axis ) . ]temporal patterns , as given by the lifetime of a cascade , follow similar trends : most cascades die out after 24 hours , which closely resembles previously reported results . however , in figure [ fig3 ] ( upper panels ) we observe a richer distribution ( compared to topological penetration ) such that cascades may last over 100 days , suggesting that the survival of a conversation does not exhibit an obvious pattern .again , this result confirms from a different point of view empirical results published elsewhere .finally , temporal penetration illustrates the fact that a node may participate multiple times in a single cascade although it is counted just once .this is implicit in the definition of a time - constrained cascade , placing it closer to neuronal dynamics and spike - trains which comprehend self - sustained activity and deviating it from classical modeling approaches such as rumor spreading dynamics , where multiple exposures to the rumor end up in ceasing its dissemination . in any case ,figure [ fig3 ] ( lower panels ) illustrates that survival can not guarantee system - wide cascades , although an increasing pattern is observed as survival time grows . up to now we have related a cascade s size to certain features of the seed node .although we observe a clear positively correlated pattern ( the larger the seed s descriptor , the larger the resulting cascade ) , one might fairly argue that a wide range of values below the maximum produces a similar outcome .so , for instance , seeds in the range ( figure [ fig2 ] ) can sometimes trigger large cascades ; the same can be said for .this finding prompts us to hypothesize that the success of an activity cascade might greatly depend on intermediate spreaders characteristics , and not only on the properties of the seed nodes .that being so , a large seed ( i.e. its follower set ) may be a sufficient but not a necessary condition for the generation of large - scale cascades . in this sectionwe explore how the average connectivity of the train of spreaders involved in a cascade affects its final size . 
.non - instantaneous cascades are displayed , where the initial seed and its inactive listeners have been removed in order to dismiss the effect of the initial seed on the cascade size .there is a clear correlation between both magnitudes , although some unexpected behavior shows up : the existence of cascades containing `` hidden spreaders '' , users who are capable of generating large cascades despite not having hub - like connectivity . in both panelsthe function is drawn as a reference . ] to study the role of intermediate spreaders we split our results , distinguishing instantaneous cascades ( those with a unique spreader ) from those with multiple spreaders .the former merely underlines the fact that the seed s suffices to observe large cascades .the latter , more interestingly , unveils a new character in the play : _ hidden influentials _, relatively smaller ( in terms of connectivity ) nodes who , on the aggregate , can make chain reactions turn into global cascades .figure [ fig4 ] reveals these special users : note that the largest effects are obtained for those spreaders who , on average , have to neighbors ( both for `` grassroots '' and `` elections '' ) .these nodes do not occupy key topological positions that would _ a priori _ identify them as influential , and yet they play a major role promoting system - wide events . therefore , getting these nodes involved has a multiplicative impact on the size of the cascades . to quantify such effect, we introduce the _ multiplicative number _ of a given node , ( in analogy with the basic reproductive number in disease spreading ) , which is the quotient of the number of listeners reached one time step after showed activity , , and the number of s nearest listeners , i.e. , those who instantaneously received its message , ( which is given by the number of followers of that are involved in the cascade ) .thus , the ratio measures the multiplicative capacity of a node : indicates that a user has been able to increase the number of listeners who received the message beyond its immediate followers .figure [ fig5 ] shows how is distributed as a function of .top panels represent the proportion of nodes with and per degree class . in this case, normalization takes into account all possible and all ( above and below 1 ) counts , so as to evidence that in most cases cascades become progressively shrunk as they advance .the fact that the area corresponding to the region is much smaller than that for tells us that most cascades are small , which is consistent with the reported cascades size distribution .on the other hand , bottom panels in figure [ fig5 ] focus on the same quantity , but in this case we represent the probability ( ) that a node of out - degree has ( does not have ) a multiplicative effect .as before , the results indicate that , in both datasets , the most - efficient spreaders ( those with a multiplicative number larger than one ) can be found most often in the degree classes ranging from to , i.e. , significantly below ( see table 2 ) .these nodes are the actual responsible that cascades go global and must be engaged if one would like to increase the likelihood of generating system - wide cascades . for both datasets as indicated . 
as one can see , most of the time , which reflects the rare occurrence of large cascades .the bottom panels instead represent the probabilities , ( above 0 baseline ) and ( below it ) that a node of out - degree has or does not have a multiplicative effect , respectively .see the text for further details . ]the previous features of hidden influentials poses some doubts about what is the actual role of hubs in cascades that are not initiated by them .interestingly , we next provide quantitative evidences that , in contrast to what is commonly assumed , hubs often act as cascade firewalls rather than spawners . to this endwe have measured ( average nearest neighbors degree ) with respect to seed nodes . each point in figure [ fig6 ] represents the relationship between cascade size and .the initial trend is clear and expected : the larger is the average degree of the seed s neighbors , the deeper the tree grows .however , at some point this pattern changes and indicate that cascades may die out when they encounter a hub , more often than not .if this were not the case , one would observe a monotonically increasing dependence with .this counterintuitive hub - effect is mirrored in classical rumor dynamics and can be explained scrutinizing the typically low activity patterns of these ( topologically ) special nodes .( average nearest neighbors degree ) : remarkably , nodes with the highest connectivity do not enhance , but rather diminish , cascades growth . as in the previous figure ,largest cascades are obtained when second spreaders ( the seed s neighbors ) have , on average , . ]it is generally accepted that cohesive sub - structures play an important role for the functioning of complex systems , because topologically dense clusters impose restrictions to dynamical processes running on top of the structure . for example , in the context of osns , detected communities in _ _ twitter networks were found to encode both geographical and political information , suggesting that a large fraction of interactions take place locally , but lots of them also correspond to global modules for instance , users rely on mass media accounts to amplify their opinion . focusing on information diffusion , inter- and intra - modular connections in osns have already been explored regarding the nature of user - user ties .we instead investigate other questions , such as : ( i ) are modules actual bottlenecks for information diffusion ? ; ( ii ) is the spreading of information more successful for `` kinless '' nodes ( those who have links in many communities besides their own one ) ? or( iii ) do local hubs those with larger - than - expected intra - modular connectivity have higher chances to trigger system - wide cascades ? ) in a cascade that unfolds in the same community of the initial seed and the size of the cascade itself ( ) .proportions have been normalized column - wise , i.e. by the total number of cascades with same size .note that cascades affecting up to nodes mostly lie close to the diagonal , i.e. a vast majority of the cascade occurs within the community where it began . at a certain point ( beyond )cascades spill over the module where they began . 
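the multiplicative number defined above is also easy to obtain once, for every spreader in a cascade, one records which followers were counted as its immediate listeners and which of those went on to post themselves. the sketch below assumes exactly that bookkeeping (two illustrative maps that could be filled in while the cascade of the earlier sketch is built); it is our own reading of the quotient described in the text, not code from the original study.
....
import java.util.*;

// multiplicative number r_u of a spreader u inside one cascade:
// (listeners reached one step after u posted) / (u's own immediate listeners).
// r_u > 1 means u amplified the audience beyond its direct followers.
public class MultiplicativeNumber {

    // immediateListeners.get(u) = followers attributed to u when it posted in the
    // cascade; spreaders = the subset of cascade members that posted themselves.
    static Map<String, Double> compute(Map<String, Set<String>> immediateListeners,
                                       Set<String> spreaders) {
        Map<String, Double> r = new HashMap<>();
        for (String u : spreaders) {
            Set<String> own = immediateListeners.getOrDefault(u, Collections.emptySet());
            if (own.isEmpty()) continue;   // no followers in the cascade, r undefined
            double nextStep = 0;
            for (String v : own) {
                if (spreaders.contains(v)) {
                    nextStep += immediateListeners
                                    .getOrDefault(v, Collections.emptySet()).size();
                }
            }
            r.put(u, nextStep / own.size());
        }
        return r;
    }

    public static void main(String[] args) {
        Map<String, Set<String>> listeners = new HashMap<>();
        listeners.put("seed", new HashSet<>(Arrays.asList("a", "b")));
        listeners.put("a", new HashSet<>(Arrays.asList("c", "d", "e")));
        Set<String> spreaders = new HashSet<>(Arrays.asList("seed", "a"));
        // the seed reaches 2 listeners directly and 3 more through "a": r_seed = 1.5,
        // while "a" reaches nobody beyond its own followers: r_a = 0.
        System.out.println(compute(listeners, spreaders));
    }
}
....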
]we apply the community analyses described in section 3.2 and obtain a network partition in and modules , for the `` grassroots '' and `` elections '' data sets respectively , with optimized values and maximum module size given in table 2 .next , for each cascade we compute how many nodes in the resulting diffusion tree belong to the same cluster of the seed ( ) .this allows to get , as shown in figure [ fig7 ] , how often a cascade spills over the module where it began .interestingly , small to medium - sized cascades ( ) mainly diffuse within the same community where they were prompted , which hints at the fact that influence occurs within specialized topics .note however that our approach to community analysis is blind to contents and relies solely on the underlying topology , thus we can only make an educated guess regarding whether modules cluster users around a certain topic ( i.e. assuming _ homophily _remarkably , our results match qualitatively at least the predicted behavior in regarding cascades in correlated and modular networks . turning to the individual level , the results depicted in the plane of figure [ fig8 ] confirm the importance of connectivity in this case , within - module leadership to succeed when a cascade is triggered . indeed ,most nodes for which elicit large cascades in both samples .however , and most interestingly , it suggests that connector or kinless ( ) nodes can perform better than expected at precipitating system - wide cascades just by paying attention to internal connectivity . as shown in the figure , nodes with a z - score between 0 and 1 acting as connectors are still able to generate system - wide cascades because they compensate their relative lack of connectivity by bridging different modules .this feature is specially noticeable in the case of the `` election '' dataset ( right panel ) .all in all , our results establish that topological modules represent indeed dynamical bottlenecks , which need to be bypassed through high but also low connectivity users to let a cascade go global .planes to assess whether the modular structure of the following / friend network places dynamical constraints in the growth of cascades .a first clear result is that when local leaders ( ) precipitate information cascades , these tend to be more successful .more interestingly , connector nodes ( ) also succeed quite often , suggesting that a node s position in the mesoscale can sometimes play a more important role than a rich connectivity . 
]in just one decade social networking sites have revolutionized human communication routines , placing solid foundations to the advent of the web 2.0 .the academia has not ignored such eruption , some researchers foreseeing a myriad of applications ranging from e - commerce to cooperative platforms ; while others soon intuited that osns could represent a unique opportunity to bring empirical evidence at large into open sociological problems .information cascades fall somewhere in between , both attracting the interest of viral marketing experts who worry about optimal outreach and costs and collective social action and political scientists concerned about grassroots movements , opinion contagion , etc .however , the diversity of osns which constrains the format and the way information flows between users and the complexity of human communication patterns heterogeneous activity , different classes of collective attention have resulted in a multiplicity of empirical approaches to cascading phenomena let alone theoretical works .while all of them highlight different interesting aspects of information dissemination , little has been done to confirm results testing its robustness across different social platforms and social contexts . in this regard, the present work capitalizes on previous research to collect , in new large datasets , the statistics of time - constrained information cascades .this scheme exploits the concept of spike train from neuroscience , i.e. , a series of discrete action potentials from a neuron . in the brain ,two regions are classified as functionally related if they show activity within the same time window .consequently , message chains are reconstructed assuming that conversation - like activity is contagious if it takes place in relatively short time windows .the main preceding observed trends are here reproduced successfully .furthermore , we extend the study to uncover other internal facets of these cascades .first , we have discussed how long in time and how deep in the topology cascades go , to realize that , as in neuronal activity , time - constrained cascades can exhibit self - sustained activity .we have then paid attention to those nodes who , beyond the seeds initial onset , actively participate in the cascade .our main results point to two counterintuitive facts , by which hubs can short - circuit information pathways and average users hidden influentials spawn system - wide events .we have found that for a cascade to be successful in terms of the number of users involved in it , key nodes should be engaged .these nodes are not the hubs , which more than often behave as firewalls , but a middle class that either have a high multiplicative capacity or act as bridges between the modules that make up the system .presumably , modular topologies abundant in the real world entail the presence of information bottlenecks ( poor inter - modular connectivity ) which place constraints to efficient diffusion dynamics . 
indeed , we find that medium - sized and small cascades ( the most frequent ones ) happen mainly within the community where a cascade sprung .furthermore , those seed nodes which happen to be poorly classified ( they participate in many modules besides their own ) are more successful at triggering large cascades .a better understanding of time - constrained cascading behavior in complex systems leads to new questions .first , it seems clear that the bulk of theoretical work devoted to information spreading is not meant to model this conversation - type dynamics it is rather focused on rumor and epidemic models .other approaches need to be sought to fill such gap .also , time - constrained cascades have always been studied in the context of political discussion and mobilization . as such ,this is a fairly limited view of what happens in a service with ( as of late 2012 ) over 200 million active users .results like the ones obtained here will anyhow provide new hints for a better understanding of social phenomena that are mediated by new communication platforms and for the development of novel manmade algorithms for effective and costless dissemination ( viral ) dynamics .we thank dr . a rivero for helping us to collect and process the data used in this paper .this work has been partially supported by mineco through grant fis2011 - 25167 ; comunidad de aragn ( spain ) through a grant to the group fenol and by the ec fet - proactive project plexmath ( grant 317614 ) .the authors declare that they have no competing interests .rab , jbh and ym conceived the experiments .rab and jbh performed the analysis . all authors wrote and approved the final version of the manuscript .both `` grassroots '' and `` elections '' data sets were collected filtering twitter traffic according to related keywords , which are listed in this table . for each keywordwe display the number of hashtags found ( keywords preceded by # ) , the number of mentions ( keywords preceded by @ ) and the number of words ( keywords with no preceding symbol ) . number of vertices , and number of edges .wcc stands for the size of the weakly connected component ; scc is the size of the strongly connected component .next we report the maximum degree and core values for the undirected network ( ) , network of friends ( ) , and network of followers ( ) .average shortest path and diameter ( the largest shortest path in the network ) provide some hints about how deep in the structure can a cascade travel .reciprocity is a type of correlation expressing the tendency of vertex pairs to form mutual connections .notably , results for the datasets in this works are higher than those for social networks in , and are actually comparable to reciprocity in neural networks . in our context , it reinforces the idea that twitter may be used _ both _ as a microblogging system and a message interchange service . community detection parameters . louvain algorithm ( l ) and radatools ( rt ) with an extremal optimization heuristic ( e ) and fast - algorithm ( f ) have been used for comparison . stands for the best modularity found and for the number of communities detected .the quotient of the largest community s size and the network size , , is also shown .
|
in a diversified context with multiple social networking sites , heterogeneous activity patterns and different user - user relations , the concept of `` information cascade '' is all but univocal . despite the fact that such information cascades can be defined in different ways , it is important to check whether some of the observed patterns are common to diverse contagion processes that take place on modern social media . here , we explore one type of information cascades , namely , those that are time - constrained , related to two kinds of socially - rooted topics on twitter . specifically , we show that in both cases cascades sizes distribute following a fat tailed distribution and that whether or not a cascade reaches system - wide proportions is mainly given by the presence of so - called hidden influentials . these latter nodes are not the hubs , which on the contrary , often act as firewalls for information spreading . our results are important for a better understanding of the dynamics of complex contagion and , from a practical side , for the identification of efficient spreaders in viral phenomena . [ 1995/12/01 ]
|
one of the prevailing dilemmas operators are facing today is how to persuade residential users to take up faster broadband offers in areas where a satisfactory speed ( for example , of the order of fifty to a hundred megabits per second ) is already available at a competitive price . blame is typically given to the lack of marketable applications requiring speeds of hundreds of megabits per second . we argue that this perception derives from an incorrect approach to the problem , and that it is the cause of an un - converged vision of the network , which is considered as a chain of separate sections inter - connected only at the ip protocol level . in this paper we envision a unified view of the network , achieved by scaling up network convergence to a point where the network can provide personalised connectivity to the application using it , independently of the underlying technology , geographical location and infrastructure ownership . while a number of research projects have emphasised the economic benefit of the physical convergence of access and metro networks , we argue that this new vision requires dealing with network convergence at a multi - dimensional level , and we focus on the following three dimensions : the spatial dimension , the networking dimension and the ownership dimension . we also come to the conclusion that current business models based on broadband offers of peak bit rates do not have a place in the future , as the value for the end users lies in the correct delivery of the application , which should become the starting point of the value chain . if we look at the past two decades we see that network convergence has driven the reduction in cost of ownership by moving voice services from synchronous tdm transmission systems ( e.g. , sonet and sdh ) to the packet switched architecture used to transport internet data , through the adoption of voice over ip ( voip ) technology . this technological convergence also gave operators the opportunity to offer bundled broadband services , such as triple play ( e.g.
, voice , internet and tv ) and quadruple play ( adding mobile phone to the mix ) , as a means to reduce their cost and to benefit from economies of scales associated to service consolidation .this trend has evolved over the years , moving today its focus on the convergence of access and metro networks , which revolves around two key trends of infrastructure integration .the first is the consolidation of the number of central offices ( cos ) , which is typically achieved by adopting fibre access architectures with longer reach , to bypass some of the current network nodes .the second is the convergence of wireless and wireline networks , which typically focuses on the transport of data from mobile stations over shared optical access links .this paper argues that while this access - metro infrastructure integration is a step in the right direction , it only represents part of the contribution that network convergence can provide in support of the application - centric network vision necessary to deliver future 5 g services .the paper is structured as follows .the next section provides an overview of the requirement of future 5 g networks , based on the early work of industry fora .section iii gives a brief insight on the work carried out by some of the main standardisation bodies on 5 g .while this short overview is far from being comprehensive , due to the large number of ongoing standardisation activities , it provides an insight on the main technologies being considered on wireless , optical and higher network layer technology .after this , we investigate a number of research activities on converged network architectures that aim at providing an integrated framework to bring together different 5 g technologies within a unifying and coherent network ecosystem .we report these activities under three distinct categories of convergence .the convergence in the space dimension , in section v , explores the use of long - reach access technology to enable central office consolidation .the convergence in the networking dimension , in section vi , discusses trends and options for end - to - end integration of different network types , focusing on fixed / mobile convergence and proposing a vision where agile data centres play a pivotal role in the virtualisation of network functions .the convergence in the ownership domain , in section vii , describes current work in the area of multi - tenancy for access network .section viii provides final remarks and discussions giving some insight on the role that sdn will play towards converged 5 g networks and proposing the use of application - driven business models as a foundation of the 5 g vision . finally , section ix concludes the paper summarising the main key points and providing some insight on future research areas .one of the main targets for operators in the design of next generation network architectures is to plan for an infrastructure that if 5g - ready , i.e. , capable to support the next generation of applications and services .the next generation mobile networks ( ngmn ) alliance was one of the first fora to come up with a set of use cases , business models , technology and architecture proposition for 5 g .the ngmn alliance envisages the existence of three large application groups : 1 .enhanced mobile broadband ( embb ) , which aims at scaling up broadband capacity to deliver next generation of ultra high definition and fidelity video service with augmented reality . 
from a networking perspectivethe target is to provide every active user with a least 50 mbps everywhere , with enhanced targets of 300 mbps in dense areas and up to 1 gbps indoor .massive machine type communications ( mmtc ) , aiming at scaling up the network to support tens of billions connected devices , with densities of up to 200,000 units per square km . the target is also to simplify the devices compared to 3 g and 4 g to enable ultra - low cost and low power consumption .mmtc is seen as a major enabler for the internet of things ( iot ) , which is the end - to - end ecosystem running on top of the machine - to - machine communication service .3 . ultra reliable and low latency communication ( urllc ) , aiming at decreasing end - to - end latency to below 1 ms and reliability levels above five nines .it is envisaged that such requirements will be crucial for applications such as automotive ( e.g. , inter - vehicle communication systems for accident avoidance ) and medical ( e.g. , remote surgery ) , but could be extended to other types of tactile internet applications , which require ultra low latency user feedback mechanism .figure [ fig : key - capabilities ] ( derived form ) shows a summary of the increase in key capabilities of the network as the technology moves from imt - advanced ( in red shade ) towards imt-2020 ( in green shades ) .it should be noticed that although , collectively , imt-2020 is expected to reach the most stringent requirements shown in the figure , no one application is expected to require them all simultaneously .indeed the figure shows that such requirement can be further categorised according to the three application categories above .for example it is envisaged that embb will benefit from most of the enhancements , but it is not expected to require sub - ms latency ( such as urllc applications ) or density of devices above per ( unlike mmtc applications requiring a density of up to per ) .embb and erllc applications are in particular those that are expected to generate novel revenue streams for operators , especially from vertical markets , as it is believed that 5 g network infrastructures will enable the digitalisation of society and economy , leading to the fourth industrial revolution , impacting multiple sectors , especially the automotive , transportation , healthcare , energy , manufacturing and media and entertainment sectors . 
finally , an aspect of 5 g networks of increasing importance is the ubiquity of broadband connection , as it is expected that the user experience continuity is not confined to urban districts but also extended to rural areas .the digital divide has become a major social and political issue worldwide , as fast broadband connectivity is now a commodity , and like drinkable water a right of every citizen .the european commission has clearly stated broadband speed targets for the year 2020 of 100% population coverage with at least 30mbps , and 50% with at least 100 mbps .these requirements put additional pressure on developing access broadband architectures that are capable of reducing connectivity costs in rural areas , and are compliant with open - access models , which are necessary to operate in remote areas that require state intervention .while much of the 5 g architecture and technologies are still undefined and under discussion , the diverse range of capabilities expected , often with conflicting goals , makes it clear that * _ 5 g is not simply the next - generation of 4 g _ * , or confined to the development of new radio interfaces , * _ but rather encompasses the development of an end - to - end system including multiple network domains , both fixed and mobile_*. indeed we ll see in the next section that there are different technologies and standardisation bodies involved in the definition of the 5 g ecosystem . in order to provide a level of insight on how to design a 5g - ready network ,while not providing specific guidelines , the ngmn alliance has attempted a definition of two general design principles : * provide expanded network capabilities : with the idea of pushing the network performance and capabilities over multiple directions , as exemplified in figure [ fig : key - capabilities ] .* design intelligent `` poly - morphic '' systems : provide a malleable system that can be tailored to the application required .it is envisaged that virtualisation of network functions , control and data plane will play a pivotal role in enabling network flexibility .the envisaged flexible network design implies flexibility in assigning network resources to applications , with diverse quality of service , reliability and availability requirements and will be reflected in updated business and charging models to drive the economics of the 5 g ecosystem . a confirmation of this necessity is given by the difficulty current operators have to convince end users to buy ultra - high speed fibre access broadband where there are already other lower cost options offering satisfactory broadband speed .it is indeed increasingly difficult for end user to understand the practical benefit of faster access speeds , and justify its higher cost , when this does not assure the correct end - to - end delivery of services . 
in the current model the user pays for network connectivity , typically a flat rate , in addition to charges for the use of some applications , which however do not have the ability to view or influence the underlying end - to - end network performance .this model does not reflect the fact that the real value for the user is in the applications , which have diverse capacity and latency requirements , and whose * _ value per bit _ * can vary substantially ( for example if we compare a video on demand to a voice call or a medical device remotely sending body monitoring values to a medical centre ) .clearly this model needs to shift towards one that instead is linked to the application used , and makes the end user completely oblivious of the details of the underlying network performance . a practical ,although basic , example of the implementation of this model is a recent deal where netflix has agreed to pay comcast for an improved delivery of its service over their network .is the reliable service delivery through end - to - end quality of service ( qos ) assurance , not the isolated increase in access data rate , that will generate new forms of revenue .if the operators can not deliver this type of service , it is likely that otts and industry verticals will continue to build their own dedicated network , which will lead to a fragmented suboptimal network development .although the basic mechanism for qos have been standardised many years ago through integrated services ( intserv ) and differentiated services ( diffserv ) , attempts at using them in practice for end - to - end quality assurance have failed in the past .this was due to the complexity of implementing it across different domains at the granularity of the individual application and over - provisioning was used instead as a means to provide an average acceptable service .we argue that today the situation has changed for the following reasons and that qos assurance will play a fundamental role in future networks .firstly , operators have now started to look for higher efficiency in their network as their profit have constantly declined , and massive overprovisioning ceases to be a valuable option .secondly , the internet architecture has migrated over the years from a hierarchical model , where end - to - end connectivity was provided by passing through multiple tiers , and crossing several network domains , to a model where many peering connections provide direct links between providers .this reduces substantially the number of domains crossed by the data flows which represented one of the main obstacles to qos delivery .the reduction in average number of ip hops was also achieved through widespread installation of new data centres and the use of content delivery networks ( cdn ) to reduce the distance between traffic source and destination .indeed an analysis of current metro vs. 
long - haul traffic shows that the latter is in constant decrease ( see fig .[ fig : metro_lh ] ) , suggesting that most end - to - end connections are confined within the metro area .we argue that this change in circumstances will enable the success of large - scale end - to - end qos delivery .this will be facilitated by the development of open and programmable networks , based on the software defined networking ( sdn ) paradigm , which will automate most of the qos configuration hiding its complexity to network administrators .indeed many operators are already investigating the use of sdn in their network , with some making it their main short - term goal , and discussions have already started among industry fora for delivery of broadband assured services .while the requirements and use cases are still work in progress and are progressively refined as discussions on 5 g carry on , standardisation bodies have started building up a roadmap to evolve the current technology towards 5 g .this section reports on some of the most relevant standardisation activities , considering both fixed and mobile networks , that contribute at different levels towards the realisation of the overall 5 g vision . herewe report some highlights of the standardisation roadmap for major bodies like the itu , ieee , etsi and onf , which are summarised in figure [ fig : standard_roadmap ] .having standardised most of the previous generations of mobile communications , one of the most active bodies in the standardisation of radio aspects of 5 g is the itu radiocommunication sector ( itu - r ) , through their international mobile telecommunication imt-2020 programme .releases 13 and 14 ( recognised as part of the lte - advanced framework ) are setting up the basis towards the required 5 g enhancements , considering technologies such as full - dimension multiple input multiple output systems ( fd - mimo ) with 2d arrays of up to 64 antennas , licensed - assisted access ( laa ) enabling the joint usage of unlicensed and licensed spectrum , and enhanced carried aggregation to increase the number of carries from 5 to 32 .however it is expected that the full imt-2020 specification will be provided with release 16 ( with an expected release date around the year 2020 ) , where new air interface for operations above 6ghz will be developed in addition to evolutions that are backward compatible with lte ( i.e. on radio frequencies below 6ghz ) .imt-2020 is also targeting the vehicle - to - x ( v2x ) type of use cases , focusing specifically on vehicle - to - vehicle ( v2v ) , vehicle - to - pedestrian ( v2p ) and vehicle - to - infrastructure / network ( v2i / n ) .these efforts include studies on enhancements to resource allocation mechanisms , to improve robustness , latency , overhead and capacity of the mobile communication system . from an optical access network perspective , while not explicitly considered a 5 g evolution , itu - t has recently standardised the xgs - pon , for cost - effective delivery of symmetric 10 gbps to the residential market in the shorter term , and ng - pon2 , offering up to 80 gbps symmetric rates over 8 10 g wavelength channels .the itu optical access group is currently investigating what other technologies to consider beyond ng - pon2 , with proposals spanning from amplified optical distribution network ( odn ) for longer reach to coherent transmission optical access . 
considering yearly traffic growth rates of about 30% ( for most developed countries ) , it is expected that ng - pon2 , although only exploiting a small portion of the available optical spectrum , will be able to deliver the capacity that residential users might need for the foreseeable future , and that future generation of standards should focus on aspects of cost reduction and network flexibility , for example to support mobile transport , business and residential types of services in the same pon infrastructure .another body considering 5 g adaptations to its standards is the ieee , with its 802.11 group working on two main activities .the 802.11ax , labeled high efficiency wi - fi , enhances the 802.11ac by aiming at throughputs of the order of the 10gb / s per access point , through a combination of the mimo and orthogonal frequency division multiplexing ( ofdm ) technologies and working on frequency bands between 1 and 6 ghz . the next generation 60ghz ( ng60 )aims instead at evolving the 802.11ad air interface to support data rates above 30 gb / s .the high frequency limits its operation to line of sight transmission , targeting applications such as wireless cable replacement , wireless backhaul and indoor short - distance communications . from an optical access perspective, ieee is also updating the 10gb / s pon interface with the new 802.3ca standardisation effort , investigating possible physical layer data rates of 25 , 50 and 100 gb / s , expected to be fully released by the end of 2019 .it is also worth mentioning the ieee effort on standard for packet - based fronthaul transport networks ( p1914.1 ) .this provides a practical solution to the fixed - mobile convergence problem , aiming to deliver an ethernet - based transport , switching and aggregation system to deliver fronthaul services .in addition to physical layer standardisation , the 5 g ecosystem requires standardisation also of higher layers .in particular network function virtualisation ( nfv ) is identified as a target for 5 g networks , to increase network programmability and reduce cost of ownership by moving network functions from proprietary hardware to software running on general purpose servers ( a concept also known as `` softwarisation '' ) .etsi has been active on nfv from its emergence , publishing the first release of its specification in december 2014 , where it provided an infrastructure overview , an architectural framework , and descriptions of the compute , hypervisor and network domains of the infrastructure .the architecture has three main constituent elements : the network function virtualisation infrastructure ( nfvi ) , which provides the commodity hardware , additional accelerator components and the software layer that abstracts and virtualises the underlying hardware ; the virtualised network function ( vnf ) , which is the software implementation of the required network function and runs on top of the nfvi ; and the nfv management and orchestration entity ( m&o ) taking care of the lifecycle management of the physical and software resources and providing an interface to external operartion support systems ( oss ) and business support systems ( bss ) for integration with existing network management frameworks . 
the second nvf release , covering the working period from november 2014 to mid 2016 , provided nfvi hypervisor requirements , functional requirement of management and orchestration , hardware and software acceleration mechanism for the virtual interfaces and virtual switch .it also expanded the architectural framework with the further specification of the management and orchestration entity ( dubbed mano ) with the definition of : the virtualised infrastructure managers ( vim ) , which performs orchestration and management functions of nfvi resources within a domain ; the nfv orchestrator ( nfvo ) , performing orchestration of nfvi resources across multiple domains ; and the vnf manager ( vnfm ) carrying out orchestration and management functions of the vnfs .etsi s work towards the third nfv release has only recently started and is expected to target topics such as charging , billing and accounting , policy management , vnf lifecycle management and more .the open networking forum is the reference consortium for the standardisation of the software defined networking paradigm . after the release of the openflow ( of ) v1.0 in december 2009 ,their work has progressed to define a plethora of updates , with new releases every few months ( for this reason the onf roadmap is not reported in figure [ fig : standard_roadmap ] ) .while the of v1.0 specification has evolved to v1.3.5 , other two releases have progressed in parallel to allow the development of more unconventional versions of the protocol , that was not seen as essential for all vendors .for example v1.4 allowed for a more extensible protocol definition and introduced optical port properties .version 1.5 introduced the concept of layer 4 to layer 7 processing through deep header parsing and the use of egress tables to allow processing to be done in the context of the output port . while the of specification targets the control plane functions , the management and configuration specification where carried out through the of - config protocol releases , currently at version v.1.2 .additional specification were also released to target transport networks , covering aspects of multi - layer and multi - domain sdn control , together with many technical recommendations on sdn architectural aspects .the previous two sections have provided an insight on expected 5 g network requirements and given a brief description of the roadmap pursued by some of the most relevant standardisation bodies .a first set of activities , at the lower network layers , focus on the enhancement of network performance , addressing higher cell density , higher peak rate and energy efficiency , as well as scalability , latency reduction and higher reliability .this is reflected for example by the work carried out by itu - r , itu - t , and ieee cited above .a second set of activities , at the higher network layers , are targeting software - driven approaches to resource virtualisation and control , through network virtualisation , nfv and sdn control layers , which is strongly driven by onf and etsi .this is believed to be a distinctive feature of 5 g networks , to satisfy the poly - morphic design principle and provide the flexibility and automation required to accommodate the envisaged diversity of requirements . 
in addition , it is envisaged that a third set of activities should focus on new business and network ownership models that will have to emerge to make the integration of all the various components profitable for the market players . the ngmn alliance for example recognises that the creation of a valid and all - encompassing business case for 5 g is pivotal for the sustainability of the entire 5 g ecosystem . while standards are taking care of some of the potential emerging technologies , they only operate within specific technologies and typically do not provide the end - to - end view that is required for the 5 g vision . in this tutorial we complement the analysis of requirements and standardisation activities on 5 g with an overview of a number of recent research projects and activities on network architectures aiming towards a unified end - to - end view of the network . we propose to categorise such activities across three complementary dimensions of network convergence , summarised in figure [ fig : convergence_all ] , which can contribute to the overall architectural framework needed to bring together different 5 g technologies within a unifying and coherent network ecosystem . the convergence in the space domain signifies the consolidation of many of the current network nodes : the top left of figure [ fig : convergence_all ] shows a node consolidation analysis for the uk that could reduce the number of total nodes from over 5,000 to less than 100 ; the convergence in the networking dimension , at the bottom of the figure , designates the integration of heterogeneous infrastructure across different network segments to allow end - to - end control of networking resources ; the convergence in the ownership domain , in the top right hand side of the figure , denotes the concept of multi - tenancy across network resources , allowing multiple operators to share physical network infrastructure . overall , while the network can benefit from all three types of convergence , some of them have contrasting requirements , for example considering the trade - off between node consolidation , requiring longer links with higher latency , and support for some next generation mobile services , which today present very tight latency constraints . we anticipate that some of these issues have not yet been resolved and should be addressed in future research projects . the next three sections of this paper provide an in - depth analysis of these network convergence topics . at the end of each section we summarise how the research activities described contribute towards the realisation of the 5 g vision , and provide an overall outline in figure [ fig : req_mapping ] . the first category of convergence we discuss is that in the space dimension , which is achieved by integrating the physical architecture of access and metro networks to enable node consolidation , i.e. a significant reduction in the number of central offices . the aim of this convergence is manifold , providing capital and operational cost savings through a massive reduction in the number of electronic port terminations ( i.e. , optical - electrical - optical ( oeo ) interfaces ) , in addition to simplification of network architecture and management . the long - reach passive optical network architecture , originally introduced in , and further developed by the european discus consortium , targets exactly this idea .
by extending the maximum pon reach to distances above 100 km through the use of in - line optical amplifiers in the optical distribution network , lr - pon can transparently bypass the current metro transmission network , directly linking the access fibre to a small number of metro - core ( mc ) nodes , which serve both network access and core . this evolution is shown in figure [ fig : arch_evolution ] , where , starting from the traditional architecture that separates access , metro and core in figure [ fig : arch_today ] , the access - metro convergence operated by lr - pon can remove at least two levels of oeo interfaces in the network , leading to the scenario in figure [ fig : arch_lr ] . studies carried out for a number of european countries show that typically the number of central offices can be reduced by two orders of magnitude . an additional benefit of lr - pon networks is that by bringing the total number of nodes ( e.g. , within a national network ) below a given threshold ( typically in the order of a hundred ) , it enables a fully flat core network , moving to the architecture represented in figure [ fig : arch_lr_flat ] . the flat core interconnects the metro - core nodes through a full mesh of wavelength channels , using transparent optical switching to bypass intermediate nodes , thus eliminating the inner core oeo interface . with this architecture the only oeo interface is in the mc node , as packets are only processed electronically at the source and destination nodes . studies carried out in have shown that for values of sustained user traffic above a certain threshold ( about 7mb / s for their scenario ) the flat core becomes more economical than the current hierarchical model based on outer and inner cores . the large reduction of oeo ports can achieve large capital and operational savings , as shown in , and a 10 times reduction in overall power consumption compared to `` business as usual '' scenarios . it should be stressed that this architecture is capable of achieving such advantages because it considers the network from a * converged , end - to - end perspective * , rather than optimising access , metro and core separately . for example , one of the crucial parameters in long - reach access is the optimal value for the maximum optical reach , which can not be determined if the access - metro part is considered independently from the core . the end - to - end perspective , which brings the core into the picture , provides a convincing argument for identifying such a parameter , which is the value required to reduce the number of mc nodes to a level that enables their interconnection through a full mesh of wavelengths ( i.e. , a flat core ) .
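as a back - of - the - envelope illustration of why node consolidation and the flat core go hand in hand , the short python sketch below counts the wavelength - meshed node pairs needed for a full mesh at different levels of consolidation . it is not taken from the studies cited above ; the node counts are simply the orders of magnitude quoted in the text ( from over 5,000 central offices down to about 100 metro - core nodes ) .

```python
# Illustrative sketch (not from the paper): why node consolidation makes a
# fully flat, meshed core practical.  The node counts are assumptions chosen
# only to mirror the orders of magnitude quoted in the text.

def full_mesh_pairs(n_nodes: int) -> int:
    """Number of node pairs that must be interconnected by at least one
    wavelength channel in a full mesh (transparent flat core)."""
    return n_nodes * (n_nodes - 1) // 2

for n_nodes in (5000, 1000, 100):   # from today's central offices down to consolidated MC nodes
    print(f"{n_nodes:>5} nodes -> {full_mesh_pairs(n_nodes):>10,} wavelength-meshed node pairs")

# A full mesh over thousands of offices would need millions of wavelength paths,
# whereas ~100 metro-core nodes need only a few thousand, which is why LR-PON
# consolidation is a precondition for the flat-core architecture described above.
```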
under such circumstances it was shown in that the deployment of the long - reach architecture can be even more cost effective than fibre to the cabinet , as the cost reduction in the core network can subsidise the cost of deploying fibre infrastructure in the access . in addition , the long - reach access is beneficial for lowering the cost of fibre deployment in sparse rural areas , a primary target for every government that wants to reduce the extent of its financial intervention . the studies mentioned above on the benefits of implementing node consolidation reveal that it can contribute to improving the business case for 5 g , by reducing the overall network cost and energy consumption , thus providing the foundation for an architecture capable of delivering broadband fibre connections to a larger number of users , creating a ubiquitous optical access network ecosystem . the envisaged high - capacity and dynamic multi - wavelength pon architecture also allows flexibility in the type of service that can be offered , as each user end point can avail , on demand , of different types of services ( e.g. , from dedicated point - to - point to shared wavelength ) . however , while this architecture can provide a basis towards the design of future networks , it lacks flexibility on the optical distribution network side , as every user signal needs to be sent to the main central office for further processing . based on this idea , in the architectural concepts were further developed to provide additional flexibility with the inclusion of branching nodes ( bn ) positioned in place of some of the remote nodes in the lr - pon architecture , providing wavelength routing and signal monitoring , in addition to optical amplification . this would allow , for example , the add / drop of optical signals close to the end point for local processing , when and where required , thus significantly reducing transmission distance to satisfy the requirements of those applications with strict latency constraints . the addition of a flexible bn in the network architecture further contributes to the 5 g poly - morphic design principle . the second category of convergence we discuss is in the networking dimension , aiming towards the integration of different types of networks , mobile , fixed and data centres , to enable end - to - end resource management . among the three convergence dimensions , this is the one that most impacts the 5 g vision , as it involves integrated development with the wireless access . as mentioned in the introduction , an example of large - scale network convergence was the migration of voice services from circuit - switched to packet - switched ip networks , generating substantial savings in cost of network ownership for operators .
extending the concept to the access network , a strong integration between mobile and fixed technologies within a ubiquitous fibre deployment provides the resources to serve a wide range of users and services for several years to come , as any point of access can in principle deliver several terabits per second of capacity and bring such capacity from one end of the network to the other . the idea is to build a flexible network architecture where multiple technologies can converge and provide a pool of diverse resources that can be virtualised , sliced and managed to provide the required end - to - end connectivity to applications . an example of the converged architecture vision is shown in figure [ fig : pon_multi - service ] . in addition , configurability and openness to upcoming technologies is a key factor when considering the unpredictability of technology adoption and financial return on investment . the current debate on gpon upgrade paths is an example . it was previously believed that most operators would skip the xg - pon standard ( defining a single channel operating at 10gbps downstream and 2.5gbps upstream ) and transition from gpon directly to ng - pon2 . however , due to the high initial cost of ng - pon2 equipment and uncertainty on the additional revenue that could be generated , it is likely that residential users will be upgraded to xgs - pon ( one single channel operating with a symmetric 10 g rate ) , while , simultaneously , ng - pon2 will be adopted for offering enhanced services and higher flexibility for business users and mobile cell interconnection ( e.g. , through standard backhaul , fronthaul or midhaul ) . indeed it is envisaged that such higher flexibility will be required in next generation 5 g networks , where a ubiquitous and flexible fibre access network will provide connectivity to services with different capacity , service type and reliability requirements at different end points , for example offering capacities from tens of mb / s to hundreds of gb / s , with quality ranging from best effort to assured capacity , and with reliability spanning from no protection to dedicated 1 + 1 end - to - end protection , to diverse types of customers such as residential users , mobile stations , business and governmental organisations , data centres and local caches , etc . finally , among the available technologies in the access , we should consider that copper will still play an important role in the near future , as access fibre deployment is usually characterised by a long return on investment , leading some operators to use new high - speed copper access technologies as intermediate steps towards fttp . for example , g.fast is currently being considered to deliver a few hundred mbps over distances up to about 300 m. in many cases g.fast will make use of a pon as backhaul , but use the existing copper pair for the last drop .
even faster rates , up to 10gbps , can be achieved over copper with xg.fast technology , which can terminate the xgs - pon or ng - pon2 fibre and use the existing twisted copper pair over a few tens of meters to reach the household . this could be used for example in cases where the fibre deployment over the very last tens of meters of the drop is particularly expensive or inconvenient . other users might be connected through a cable system with the new docsis 3.1 standard capable of reaching 10gbps downstream : these can already be backhauled through gpon , while new proposals are currently being developed to guarantee compatibility with ng - pon2 . one interesting scenario of network convergence is the use of pons to provide connectivity to mobile base stations . wireless network capacity has increased over the past 45 years by one million times , a trend also known as cooper s law of spectral efficiency . if we look at how this was accomplished we see that a factor of 25 increase was enabled by higher efficiency of transmission technology , an additional factor of 25 by the use of more spectrum , but the vast majority , accounting for a factor of 1600 , was given by densification of cells , i.e. enabling spatial reuse of frequency channels . looking at the next five years , a major goal of some of the network vendors working towards 5 g is to provide a further capacity increase of up to a thousand times . although it is still uncertain whether this is achievable , any such increase in capacity is likely to follow a similar split between improvement of transmission technology , spectrum resources and densification . besides the challenges with delivering such a high density of capacity on the radio interface ( e.g. , due to interference and frequency reuse issues ) there is a comparable challenge in linking an ever increasing number of small cells to the rest of the network in a way that is cost effective . a solution to this problem that is gaining traction is to use pon networks : even though pons were initially deployed as a means to bring ultra - fast broadband to residential users , the high capacity the fibre provides and the high degree of reconfigurability enabled by the simple power split architecture make it an ideal candidate also for connecting base stations . while with 3 g systems , base stations were typically connected to the network through backhauling , i.e.
, at layer 2 or 3 , massive densification of next generation mobile networks brings new challenges that can not be solved through simple backhauling . deploying a large number of cells can be prohibitively expensive when accounting for the costs of the base stations and the rental of the space where all their processing equipment is located . a solution that is becoming increasingly popular is to use fronthaul . the idea stems from the extension of a transmission protocol called common public radio interface ( cpri ) that was used to link the antenna at the top of a base station mast with the baseband processing unit ( bbu ) equipment at ground level through optical fibre . this eliminated the need for the radiofrequency ( rf ) interface between antenna and bbu , thus reducing complexity , space requirements , heat dissipation and ultimately costs . the concept was extended to bbu hoteling , where the link between the antenna mast and the bbu is increased to a few kilometres to place bbus from different masts into one building in order to reduce deployment costs and enhance security , reliability and ease of management . it should be stressed however that while adopting smaller cells has the benefit of offering higher data rates per user , as the total cell capacity is shared among a smaller number of users , it also reduces cell utilisation , as the lower number of connected users reduces the statistical multiplexing gains compared to larger cells . thus the need for additional savings has led to the concept of bbu pooling , where multiple remote radio heads ( rrh ) are multiplexed into a smaller number of bbus . besides improving bbu utilisation , by operating statistical multiplexing also within the bbu units , this mechanism allows centralised control of multiple rrhs , enabling the use of advanced lte - a techniques such as coordinated multi - point ( comp ) , coordinated beamforming , and inter - cell interference cancellation ( icic ) . finally , the third step of this convergence is to run the bbu as software on a virtual machine ( software implementations of the lte stack are already commercially available ) in public data centres , a concept known as cloud radio access networks ( c - ran ) . this is fully in line with the 5 g vision of nfv . from a technical perspective , the issue with fronthauling is that it operates by transmitting i / q samples , which increases the transmission rate over fibre by a factor of 30 compared to backhauling .
in addition , since sampling occurs whether or not an actual signal transmission is in place over the radio interface , this rate is fixed , independently of the amount of data used in uplink or downlink by mobile users . thus a large cell providing an aggregate rate of about 10gbps , for example using an 8x8 mimo array , 3 sectors and 5 x 20mhz channels , will require an approximate fronthaul transmission rate of about 150gbps , while a small cell with simpler 2x2 mimo , 1 sector on a single 20mhz channel , would still require 2.5gb / s , while offering a radio data rate below 150mbps . in addition , fronthaul imposes an upper bound on the latency between the remote radio head and the bbu , due to a maximum round trip time ( rtt ) of 3 ms between the handset and the hybrid automatic repeat request ( harq ) processing block in the bbu . considering the latency introduced by the different mobile system processing blocks , typically a budget of up to is allowed for the optical transmission system , imposing a maximum distance between rrh and bbu of about 40 km . data compression techniques have been proposed in the literature in order to reduce the large capacity requirements of fronthaul . another concept , known as mid - haul and originally introduced in , that is becoming increasingly popular is to move the physical split between rrh and bbu to a different point of the lte stack . this can reduce the optical transmission rate by a factor of 5 to 10 compared to fronthaul and restore the dependency on the mobile data traffic volume , thus bringing back the possibility of operating statistical multiplexing of a number of mid - haul transmission systems . however , there is a trade - off with some of the lte - a functionalities , as , for example , comp performs better for solutions placing the split closer to the rrh . for all cases above , latency remains an open issue : since the harq processing block is located in the mac , a solution that would position the harq in the rrh would be almost equivalent to backhauling . a number of research projects have worked to provide architectural solutions to the fixed - mobile convergence issue . the fp7 combo project has proposed a number of solutions based on pons to bridge the capacity gap between the base station and the central office in a cost - effective manner . the idea is to use different technologies in the odn : shared twdm pon channels can be used to multiplex backhaul signals from base stations with other users , while fronthaul channels are transmitted through wdm point - to - point channels , in order to meet the strict latency requirements of fronthaul . however , using dedicated wavelengths for fronthaul is not always a cost - effective solution , especially when the remote radio head is constituted by a small cell that does not require a 10 g channel capacity . the 5g - ppp project 5g - crosshaul is looking at architectures for the transmission of both fronthaul and backhaul traffic over a common packet switched network . the development targets a unified data plane protocol and switch architecture that meet the strict latency and jitter demands .
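the fronthaul figures quoted above can be reproduced with a simple cpri - style calculation . the sketch below is only illustrative : the sample rate , 15 - bit sample width , control - word overhead and 8b/10b line coding are typical cpri assumptions rather than values given in this paper , but they recover the roughly 150gbps and 2.5gb / s transport rates mentioned in the text .

```python
# A rough CPRI-style fronthaul rate estimate that reproduces the figures quoted
# above.  The sample width, control-word and line-coding overheads are typical
# CPRI assumptions, not values taken from this paper.

def fronthaul_rate_gbps(antenna_ports, sectors, carriers,
                        sample_rate_msps=30.72,   # one 20 MHz LTE carrier
                        sample_bits=15,           # per I and per Q sample
                        control_overhead=16/15,   # CPRI control words
                        line_coding=10/8):        # 8b/10b line coding
    per_antenna_carrier = (sample_rate_msps * 1e6 * 2 * sample_bits
                           * control_overhead * line_coding)
    return antenna_ports * sectors * carriers * per_antenna_carrier / 1e9

print(f"large cell (8x8, 3 sectors, 5x20 MHz): {fronthaul_rate_gbps(8, 3, 5):6.1f} Gb/s")
print(f"small cell (2x2, 1 sector, 1x20 MHz): {fronthaul_rate_gbps(2, 1, 1):6.1f} Gb/s")
# -> roughly 147 Gb/s and 2.5 Gb/s, matching the ~150 Gb/s and 2.5 Gb/s quoted above.
```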
from a standardisation perspective , as previously mentioned , the ieee is currently working on similar topics with the p1914.1 standard for packet - based fronthaul transport networks , targeting architecture and technology to allow the transport of next generation fronthaul systems over ethernet . another 5g - ppp project working on fixed / mobile convergence is 5g - xhaul , which enhances the convergence by introducing point - to - multipoint millimeter - wave systems to provide wireless fronthaul to remote radio heads from a macro - cell location . the macro cell is then connected to the bbu pool through a cpri interface across the metro network . the optical network transport is carried out through a time - shared optical network ( tson ) system capable of providing sub - wavelength switching granularity to optimise resource usage . these examples show how the integration of fixed and mobile networks has become a critical factor affecting the success of next generation mobile services . in addition , as nfv is progressively moving network functions from dedicated telecommunications vendor equipment to virtual machines , including data centres in the big picture becomes important . the next section provides additional insights on the integration of data centres in the network convergence . before concluding this section , we would like to focus the reader s attention on the trade - off between node consolidation , which is based on the use of longer access reach technologies , and services like fronthaul , which , due to their tight latency constraint , require a shorter optical reach . we believe that 5 g network design should take both aspects into consideration , introducing in the odn the flexibility to redirect latency - bound signals to local processing nodes , while allowing other signals to benefit from the network equipment consolidation enabled by the use of longer - reach connections . however , this trade - off has not yet been properly investigated and will require further study , which should be addressed by future research projects . considering that much of the 5 g framework revolves around virtualisation of networks and functions , integration of access - metro and data centre networks becomes essential for delivering the end - to - end vision . taking as an example a cloud ran system , where the processing chain goes from the radio interface to the server where the network processing functions are carried out , strict latency requirements for the service can only be met if the network and processing resources can be controlled across the entire chain . as nfv moves network processing towards general purpose servers , effective scale - up of processing power will require a re - design of the central office architecture , which will progressively migrate towards a typical data centre network , a view shared by the central office re - architected as a datacentre ( cord ) collaborative project between at&t and on.lab . integration of virtual resources from the wireless and optical domains with the dc was also one of the main research themes of the european content project , which introduced the idea of using the above mentioned tson in the metro node for convergence of wireless , optical and dc resources . the content approach shows that it is possible to satisfy future content distribution without the use of cloudlets , thus reducing overall energy consumption and increasing resource usage efficiency , while only paying a small penalty in overall latency . it should be noticed that
these studies predate the 5g - xhaul project and focused on lte backhaul , thus latency requirements were not an issue as they are for fronthaul . when considering x - haul of next generation 5 g systems , as anticipated in figure [ fig : pon_multi - service ] , it is expected that a 5 g flexible architecture should be capable of offering data processing elements in different parts of the network , and of assigning them to the application depending on its latency and capacity requirements . it should be expected that some network functions will be processed near the access point , some in the odn near a branching or remote node , some at the central office and others in larger data centres . due to the increasing role data centres are playing in the converged network vision , the optical networking community has dedicated substantial effort to exploring novel technologies and architectures for faster interconnection within the data centre ( intra - dc ) and among data centres ( inter - dc ) . the former has received particular attention in projects targeting next generation exascale computing systems , through network architectures that make increasing use of optical switching technology . this includes the study of hybrid electronic - optical switching architectures , and circuit switching technologies ranging from 3d micro electro mechanical systems ( mems ) and fibre steering , capable of 10 - 20 ms switching times , to faster 2d mems capable of a few microseconds switching times , to the combination of fast laser wavelength tuning and arrayed waveguide gratings ( awg ) with nanosecond switching times . as the intra - dc network achieves faster optical switching , it becomes interesting to investigate the use of this technology outside the data centre , not only for connections among dcs but between dcs and any other 5 g infrastructure , which we dub * dc - to-5 g communication * .
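to make the switching - time figures above a little more concrete , the following sketch matches each technology class to an assumed reconfiguration deadline . the deadlines themselves ( e.g. , about 1 ms to track an lte - like scheduling period ) are illustrative assumptions , not requirements stated in this section .

```python
# Small sketch matching the optical switching technologies quoted above to an
# assumed reconfiguration deadline.  The deadline values are illustrative
# assumptions; the switching times are the order-of-magnitude figures from the text.

SWITCH_TIMES_S = {
    "3D MEMS / fibre steering": 15e-3,   # 10-20 ms
    "2D MEMS": 5e-6,                     # a few microseconds
    "tunable laser + AWG": 5e-9,         # nanoseconds
}

def fast_enough(deadline_s):
    """Technologies whose switching time fits within the given deadline."""
    return [name for name, t in SWITCH_TIMES_S.items() if t <= deadline_s]

for label, deadline in [("management-plane reconfiguration (~1 s)", 1.0),
                        ("per-scheduling-period switching (~1 ms)", 1e-3),
                        ("packet/burst switching (~1 us)", 1e-6)]:
    print(f"{label}: {fast_enough(deadline)}")
```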
work in has demonstrated the sdn - driven inter - dc switching of transparent end - to - end optical connections across multiple metro nodes , using an architecture similar to that shown in figure [ fig : dc - metro - arch ] , with commercial reconfigurable add drop multiplexers ( roadms ) for routing wavelengths across the nodes .the authors showed an overall path setup time of about 88ms , mostly dominated ( 72 ms ) by the reconfiguration time of the roadms and optical switch , showing the potential for fast inter - dc optical circuit switching for applications such as virtual machine migration .the ultimate vision is that of an agile access - metro optical network capable of transparently interconnecting end points , for example a rrh to the server or rack in the dc where the bbu is located , avoiding any intermediate packet processing , thus eliminating any further transport delay .a use case is shown in figure [ fig : dc - metro - arch ] , operating over an architecture on demand type of node , where an optical space switch is used to enable dynamic reconfiguration of optical links , where wavelengths can be routed to the desired destination , at an access pon , at a dc , or else towards the network core ( after undergoing any required aggregation into larger data streams , protocol conversion or other type of signal processing ) .the use case merges the concepts of c - ran and nfv through the virtualisation of the pon optical line terminal ( olt ) .it considers a number of small cells in a given area using mid - haul as transport mechanism ( traffic is only shown in the upstream direction for simplicity ) : the traffic they generate on the transport link is three to six times higher than the traffic in the radio access side and is proportional to it .when the traffic from those cells is too low to justify the use of dedicated wavelengths for each cell , their mid - haul traffic is aggregated over a tdm pon .the idea is to operate transparent switching of the pon wavelength channel at the access - metro node , terminating it directly into a rack or server in the dc ( the violet link in figure [ fig : dc - metro - arch ] ) , which implements both olt and bbu ( purple line terminating on purple olt and bbu boxes ) .the colocation of olt and bbu will minimise end - to - end latency , allowing for example synchronisation of olt and bbu scheduling mechanisms or advanced phy functions such as comp and coordinated beamforming among the bbus .a sudden increase in the small cells traffic however will likely create congestion in the pon , as for example a cell offering a peak rate of 1 gbps could increase the load in the pon by 3 gbps .the associated mid - haul traffic should thus be quickly allocated more capacity , by moving the transmission wavelength of the rrh to a dedicated channel ( e.g. , a 10 g point - to - point channel ) , which gets transparently switched in the access - metro node towards the same bbu server ( yellow line terminating on yellow bbu box ) , or if needed to a different server or even dc ( red line ) , although this would require ultra - fast virtual machine migration .we believe the switching time should be at least comparable with that of the lte scheduling time , i.e. 
, of the order of the millisecond , in order to achieve a seamless transition . this requires the optical switches in the access - metro node and the dc to operate in the tens to hundreds of microseconds , suggesting the use of 2d - mems type devices , and the implementation of an agile control plane capable of supporting this fast switching ( switching control times of 10 microseconds were demonstrated for intra - dc environments in ) . another issue worth mentioning is that mid - haul systems , similarly to fronthaul ones , require low transport latency ( a few hundred microseconds ) , which can not be achieved with the dynamic bandwidth assignment ( dba ) mechanisms available in today s pons ( operating on the order of a few milliseconds ) . however , studies in have shown the possibility of reducing such values to a few tens of microseconds by using dba algorithms synchronised with the bbu scheduler . finally , while this section has focused on a c - ran case study , the concept of access - metro and agile data centre convergence can support other scenarios . figure [ fig : conn - type ] provides further examples , showing the potential mapping between services and transmission channels ( i.e. , shared twdm pon channels , dedicated wavelength channels and dedicated fibre links ) . in addition , while it is recognised that sub - millisecond switching might still be a few years away in access - metro nodes , in the shorter term the ability to provide dynamic optical switching at lower speed will still prove useful to operate dynamic resource allocation at the network management level . in conclusion , this section has shown how the seamless integration of wireless , optical and data centre networking technologies will be instrumental in providing the necessary network agility to satisfy dynamic end - to - end allocation of connections with assured data rate and latency requirements . the increasing use of transparent optical switching in the metro architecture will increase the ability to adopt new technologies as they become available . in addition , the increasing use of virtualisation to provide independence among network resource slices and end - to - end control of the leased resources will pave the way to application - oriented and multi - tenant solutions , further described in the next section . the last aspect of the convergence we want to discuss in this paper is in the ownership dimension , i.e. the ability of the network to support multi - tenancy . a simplified though common division of network ownership domains is in three layers : * the passive network infrastructure : ducts , sub - ducts , fibre with any passive splitter element , and copper on some legacy networks ; * the active network infrastructure : transmission and switching equipment ; * the service layer : the provisioning of services to the end user . multi - tenancy in access and metro networks has been extensively studied in the literature , and figure [ fig : open - access - models ] shows different possible ownership models , ranging from total vertical integration , typical of incumbent operators that own all three layers , to complete separation where each layer is owned by a different entity .
while it is out of the scope of this paper to further investigate the pros and cons of these models , we argue that open access , at least of the active network infrastructure , is required in order to share infrastructure costs and open up the market to better competition . access network sharing has been implemented in the past , though at a basic level , as telecommunications regulators have enforced local loop unbundling ( operating at the level of the passive infrastructure ) and bitstream services ( at the active infrastructure level ) . active network sharing has gained popularity as it allows service providers ( sps ) to sell bandwidth services without owning physical network infrastructure , and the bitstream service has further evolved to virtual local loop unbundling ( vula ) and next generation access ( nga ) bitstream . however , although they give service providers more control , by adding some quality of service differentiation , they do not offer the ability to fully customise services to provide highly differentiated products to the users , and rely on the infrastructure owner for functions such as performance and fault monitoring , which are essential for serving the business sector . work is currently underway by standardisation bodies such as the broadband forum , under the fixed access network sharing study group , to increase the control of sps over the access network through virtualisation . the vision is that of an ownership model differentiating between : * an infrastructure provider ( ip ) that owns and maintains the physical networking infrastructure ( for example the passive and active network infrastructure ) , enables physical resource virtualisation and carries out the network virtualisation , provides virtual resource control apis to the virtual network operator ( vno ) and gets its revenue by leasing resources to the vno ; * a virtual network operator that operates , controls and manages its assigned virtual network instance , is able to run and re - design customised protocols in its own virtual networks , provides specific and customised services through its own virtual networks , receives revenue from the end users and pays the ip for the usage of network resources , while saving on network infrastructure deployment costs . the proposed roadmap is through three sequential steps , as suggested in : 1 . the first step is to reuse existing network equipment controlled by the infrastructure provider through their management system : virtual network operators could get access to the network through a standard interface which can provide raw access to the management layer with optional monitoring and diagnostic functionalities . 2 . the second step is to deploy new hardware in the access node capable of resource virtualisation , so that the virtual network operators could be assigned a virtual network slice and get full control of the equipment . for this step , however , the interface remains , similarly to step 1 , to the network operator management system . 3 . the third and final step is the full sdn integration , where the virtual operators access their network slice through a flexible sdn framework with standardised apis .
in conclusion , multi - tenancy enables network infrastructure sharing , reducing the cost of ownership and opening the market to a plurality of new entities that can provide the diversity that is necessary to empower the 5 g vision . although we do not suggest that the three dimensions of convergence described in this paper constitute a complete set for satisfying all foreseeable requirements of 5 g networks , we argue that they can contribute to its realisation . a summary of the mapping between the described convergence dimensions and the 5 g requirements , grouped into three activity areas as outlined in section iv , is reported in figure [ fig : req_mapping ] . after reporting on a number of research activities on network convergence and showing how they can contribute to meeting future 5 g requirements , we finalise the paper by briefly considering two further aspects . the first is a contemplation of the role that sdn could play in this future convergence and the second is a vision on future end - to - end virtualisation and associated business models . we believe sdn will play a pivotal role in the multi - dimensional convergence discussed above , both as the mechanism to orchestrate the interaction among the different network domains and technologies , and as the system to integrate and automate many of the control and management operations that are today still carried out through proprietary and closed interfaces . while sdn has experienced hype in the past few years ( which has also caused disagreement over its capabilities ) , there is now firm interest from industry , with concepts moving from research to production environments , and with many related working groups being established in different standardisation bodies , including etsi , onf , bbf and ieee . the number of enterprises , including many startups , working profitably in the sdn industry is constantly increasing , with an estimated overall value of 105 billion by 2020 . besides reducing the cost of network ownership , by making more efficient use of the network capacity , reducing energy consumption , and enabling the convergence of multiple services ( business and residential ) onto the same physical infrastructure , sdn will generate new forms of revenue , by allowing operators to quickly deploy new services in their existing infrastructure . one of these will come from enabling end - to - end virtualisation of the network , which will provide the opportunity for new business models for network sharing . this process , started in the u.s . with the physical unbundling enforced by the telecommunications act of 1996 , is now evolving to virtualised bitstream concepts offering higher flexibility and product differentiation , and will continue to evolve towards full end - to - end network virtualisation . the idea is to break down the network into a pool of virtual resources whose lease is negotiated to provide dedicated end - to - end connections , across multiple network domains and technologies . for example , a customer could submit a service request through a user portal that is forwarded to a network orchestrator acting as a resource broker . a digital auction is then carried out to secure a chain of virtual resources to provide an ephemeral end - to - end connection that only lasts for the time required by the service , releasing all resources immediately after the service terminates .
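a minimal sketch of the brokering step described above is given below , purely as an illustration : the per - segment offers , prices , latency contributions and the greedy cheapest - feasible selection are all hypothetical , and a real orchestrator would run a proper auction or optimisation across providers .

```python
# Hypothetical sketch of the resource-brokering step described above: an
# orchestrator collects per-segment offers from infrastructure providers and
# assembles the cheapest chain that satisfies an end-to-end latency bound.
# All offer values and the greedy selection are illustrative assumptions.

OFFERS = {   # segment -> list of (provider, price per hour, latency contribution in ms)
    "access": [("ip-1", 4.0, 0.5), ("ip-2", 3.0, 1.5)],
    "metro":  [("ip-3", 6.0, 1.0), ("ip-4", 5.0, 2.5)],
    "dc":     [("dc-a", 8.0, 0.5), ("dc-b", 6.0, 1.0)],
}

def broker(latency_budget_ms):
    chain, price, latency = [], 0.0, 0.0
    for segment, offers in OFFERS.items():
        # among offers that keep the running latency within budget, take the cheapest
        feasible = [o for o in offers if latency + o[2] <= latency_budget_ms]
        if not feasible:
            return None
        best = min(feasible, key=lambda o: o[1])
        chain.append((segment, best[0]))
        price += best[1]
        latency += best[2]
    return chain, price, latency

print(broker(latency_budget_ms=5.0))   # e.g. an ephemeral low-latency connection request
```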
in principle this could be carried out with high granularity , targeting the network connection performance of individual flows , although more work is required to determine its scalability . the practical implementation of such a concept will likely require a shift towards business models that better reflect the value chain of the service offered . from an end user perspective , the value is in the service delivered and its expected performance . acceptable performance values such as data rate , latency and packet loss vary among services and thus need to be assured on a per - service basis and not on average across all services delivered to an end point . for example , the value per bit of a medical application monitoring and transmitting vital body parameters to a medical centre will be much higher than that of a video streaming service . it thus makes sense for the end user to pay for the service rather than for the network connectivity . the network should be paid by the entity providing the service , which would select the most appropriate quality of service that assures reliable service delivery ( thus paying different price levels for personalised ephemeral connections as mentioned above ) . this will make network operations completely transparent to the end users , who only interact with the services and applications of interest . some basic forms of this idea have already been implemented by industry . for example , users of the amazon kindle 3 g pay for the purchase of their books but not for the network delivering them ; facebook zero and google free zone are similar examples where the user is able to access a small set of services without paying for the underlying network . even if an agreement on potential business models is still far ahead , the idea of providing the option of better quality of service for targeted applications is gathering interest among network operators , as many believe sdn can now provide the level of programmability and network automation required to practically implement this idea on a large scale . finally , while it is still an open question whether sdn will be capable of enabling the full network convergence and end - to - end virtualisation envisaged in future 5 g networks , we would like to justify this optimism by pointing out the successes it has already enabled . a pragmatic analysis reveals that sdn has so far boosted research in several areas of networking , by producing a framework where the testing of new algorithms , protocols and architectures can be quickly moved from simulation / emulation environments ( e.g. , using the mininet platform ) to real testbed scenarios , enabling large - scale interoperability tests among different research groups and physical laboratories . the main contribution of this paper was to present a novel perspective on network convergence , proposing a multi - dimensional , end - to - end approach to network design to enable future 5 g services .
while providing a tutorial of the technologies , architectures and concepts revolving around network convergence , we have presented a number of high level concepts that we believe are of paramount importance in the 5 g vision .one of the key messages is that an un - converged view of the network based on the piece - wise development of the individual segments is suboptimal and can lead to uneconomical decisions .a typical example is that viewing the wireless domain only as a client of the optical domain has lead to issues in performance and deployment cost .network architectures should be designed with an end - to - end vision in mind , as the real value for the end user comes only from a consistent support of the application from its source to its destination .this is becoming increasingly clear among the networking community , as for example the european commission is promoting an increasing number of ict funding calls targeting converged fixed and wireless architectures towards 5 g .finally , we have identified many open areas of research , which we believe will become increasingly relevant in the coming years .we conclude the paper outlining a few : * aggregation of mobile mid - haul links over twdm pons ; * end - to - end optimisation of dynamic resource allocation , including wireless spectrum and antenna resources , optical transport and processing resources in the data centre ; * architectural solutions capable of merging the benefits of node consolidations with latency - bound requirements of some of the 5 g services and applications ; * joint development of protocols and scheduling algorithms across the fixed and wireless domains ; * convergence of access - metro and dc through faster optical switching enabling dc - to-5 g communication ; * scalable , sdn - controlled , end - to - end qos assurance ; * new application - centric business models .the author would like to thank prof .david b. payne for the fruitful discussions on the long - reach passive optical network architecture and prof .linda doyle for the insightful conversations over 5 g networks and services .1 s. chatzi , et al . , a quantitative techno - economic comparison of current and next generation metro / access converged optical networks .paper we.8.b.2 , ecoc 2010 .m. ruffini , et al . ,discus : an end - to - end solution for ubiquitous broadband optical access .ieee com . mag .2 , february 2014 .s. gosselin , et al ., fixed and mobile convergence : which role for optical networks ?ieee / osa jocn , vol . 7 , no . 11 , november 2015 .m. ruffini , metro - access network convergence , ofc 2016 , invited tutorial th4b.1 .ngmn alliance 5 g white paper , february 2015 .available at : https://www.ngmn.org/uploads/media/ngmn_5g_white_paper_v1_0.pdf imt vision - framework and overall objectives of the future development of imt for 2020 and beyond .itu - r recommendation m.2083 - 0 , september 2015 .5 g empowering vertical industries .5g - ppp report available at : https://5g-ppp.eu/wp-content/uploads/2016/02/brochure_5ppp_bat2_pl.pdf european commission , a digital agenda for europe , august 2010 .ngmn alliance white paper , 5 g prospects .key capabilities to unlock digital opportunities .july 2016 , available at : https://www.ngmn.org/uploads/media/160701_ngmn_bpg_capabilities_whitepaper_v1_1.pdf b. cornaglia , et al ., fixed access network sharing .elsevier oft , vol .26 , part a , dec . 2015 .integrated services in the internet architecture : an overview , ietf rfc 1633 , june 1994 . 
an architecture for differentiated services , ietf rfc 2475 , dec . 1998 .at&t vision alignment challenge technology survey , at&t domain 2.0 vision white paper , november 2013 , available at : https://www.att.com/common/about_us/pdf/at&t%20domain%202.0%20vision%20white%20paper.pdf m. fishburn , broadband assured ip services framework , sd-377 draft revision 14 , april 2016 .cisco vni index http://www.cisco.com/c/en/us/solutions/service-provider/visual-networking-index-vni/index.html etsi , requirements for further advancements for evolved universal terrestrial radio access ( e - utra ) ( lte - advanced ) , release 13 , dec . 2015 .j. lee , et al ., lte - advanced in 3gppp rel-13/14 : an evolution toward 5 g .ieee comms . mag . -communications standard supplement , march 2016 .5 g automotive vision .5g - ppp white paper , october 2015 . itu - t 10-gigabit - capable symmetric passive optical network ( xgs - pon ) , g.9807.1 , feb . 2016 .j.s.wei , et al ., physical layer aspects of ng - pon2 standards .part 1 : optical link design .ieee / osa jocn , vol . 8 , no .1 , jan 2016 .d. nesset , the pon roadmap , ofc 2016 , paper w4c.1 etsi network functions virtualisation ( nfv ) ; architectural framework . oct .2013 , available at : http://www.etsi.org/deliver/etsi_gs/nfv/001_099/002/01.01.01_60/gs_nfv002v010101p.pdf p. rost et al . , benefits and challenges of virtualization in 5 g radio access networks .ieee comms . mag .- communications standard supplement , december 2015 .esti white paper , network functions virtualisation ( nfv ) .october 2013 , available at http://portal.etsi.org/nfv/nfv_white_paper2.pdf esti white paper , network functions virtualisation ( nfv ) .october 2014 , available at http://portal.etsi.org/nfv/nfv_white_paper3.pdf d. b. payne and r. p. davey , the future of fibre access systems ? , bt tech journal , 20(4 ) , 104 - 114 , 2002 .a. nag , et al . , n:1 protection design for minimizing olts in resilient dual - homed long - reach passive optical network .ieee / osa jocn , vol . 8 , no .2 , february 2016 .discus project deliverable d4.10 , core network optimisation and resiliency strategies , april 2015 . c. raack , et al ., hierarchical versus flat optical metro / core networks : a systematic cost and migration study .ieee ondm 2016 .d.b.payne , fttp deployment options and economic challenges , ecoc 2009 , paper 1.6.1 .discus project deliverable d2.8 , discus end - to - end techno - economic model , february 2016. n. amaya , et al ., introducing node architecture flexibility for elastic optical networks . ieee / osa jocn vol.5 , no . , 6 , june 2013 . t. pfeiffer , a physical layer perspective on access network sharing .elsevier oft , vol .26 , part a , dec . 2015 . itu - t 10-gigabit - capable passive optical networks ( xg - pon ) , g.987.1 , march 2016 . t. pfeiffer , next generation mobile fronthaul and midhaul architectures , ieee / osa jocn , vol . 7 , no .11 , nov . 2015 .itu - t fast access to subscriber terminals ( g.fast ) , g.9701 , dec .w. coomans , et al . , xg - fast : towards 10 gb / s copper access , globecom 2014 .ansi / scte 174 , radio frequency over glass fiber - to - the - home specification , rfog , 2010 .e. 
dai , reclaim rfog spectra for 100 g epon with pon docsis backhaul ( pdb ) , nov .2015 , available at http://ieee802.org/3/ngeponsg/public/2015_11/ngepon_1511_dai_2.pdf nokia white paper , technology vision : networks that deliver gigabytes per user per day profitably and securely , 2015 , available at : http://resources.alcatel-lucent.com/asset/200281 http://amarisoft.com/ etsi gs ori 001 , requirements for open radio equipment interface ( ori ) , oct .2014 , available at : http://www.etsi.org/deliver/etsi_gs/ori/001_099/001/04.01.01_60/gs_ori001v040101p.pdf u. dotsch , et al ., quantitative analysis of split base station processing and determination of advantageous architectures for lte .bell labs tech journal vol .18 no . 1 , 2013 .k. miyamoto , et al ., split - phy processing architecture to realize base station coordination and transmission bandwidth reduction in mobile fronthaul .ofc 2015 , paper m2j.4 .j. kani , solutions for future mobile fronthaul and access - network convergence , ofc 2016 , paper w1h.1 .l. cominardi , et al ., 5g - crosshaul : towards a unified data - plane for 5 g transport networks .ieee eucnc , june 2016 .j. gitierrez , et al ., 5g - xhaul : a converged optical and wireless solution for 5 g transport networks .wiley transactions on emerging telecommunications technologies , july 2016 .cord : the central office re - architected as a datacenter .onos white paper , 2015 .available at : http://onosproject.org/wp-content/uploads/2015/06/poc_cord.pdf m.p .anastasopoulos , et al . , energy - aware offloading in mobile cloud systems with delay considerations .globecom workshop on cloud computing systems , networks , and applications , 2014 .rofoee , et al . , hardware virtualized flexible network for wireless data - center optical interconnects ( invited ) .ieee / osa jocn , vol . 7 , no .3 , march 2015. n. farrington , et al ., helios : a hybrid electrical / optical switch architecture for modular data centers , acm sigcomm computer communication review , vol .4 , oct . 2010 . k. christodoulopoulos , et al ., performance evaluation of a hybrid optical / electrical interconnect .ieee / osa jocn , vol . 7 , no .3 , march 2015 .wu , et al . , large - port - count mems silicon photonics switches , ofc 2015 , paper m2b.3 . ieee / osa jocn , vol.7 , no .3 , march 2015 .p. samadi , et al ., software - defined optical network for metro - scale geographically distributed data centers .osa optics express , vol .24 , no . 11 , may 2016 .g. porter , et al . , integrating microsecond circuit switching into the data center , acm sigcomm computer communication review , vol .4 , oct . 2013 .t. tashiro et al . , a novel dba scheme for tdm - pon based mobile fronthaul , ofc 2014 , paper tu3f.3 .m. van der wee , et al ., techno - economic evaluation of open access on ftth networks , ieee / osa jocn , vol . 7 , no .5 , may 2015 .m. forzati , et al . ,next - generation optical access seamless evolution : concluding results of the european fp7 project oase , ieee / osa jocn , vol .2 , feb . 2015 .m. ruffini and d.b .payne , business and ownership model case studies for next generation ftth deployment .discus fp7 project white paper , jan .available at : http://img.lightreading.com/downloads/business-and-ownership-model-case-studies-for-next-generation-ftth-deployment.pdf d.b .payne and m. 
ruffini , local loop unbundling regulation : is it a barrier to ftth deployment ?discus fp7 project white paper , jan .2016 , available at : http://img.lightreading.com/downloads/local-loop-unbundling-regulation-is-it-a-barrier-to-ftth-deployment.pdf ftth council , ftth business guide , edition 5 , financing committee , feb .available at : http://www.ftthcouncil.eu/documents/publications/ftth_business_guide_v5.pdf p. baake , et al ., local loop unbundling and bitstream access : regulatory practice in europe and the u.s .diw berlin : politikberatung kompakt 20 , sept .new paradigms for nga regulation : next - generation bitstream , virtual unbundling , sub- loop unbundling .alcatel - lucent white paper , oct .2012 , available at : http://berec.europa.eu/eng/document_register/subject_matter/berec/download/0/1061-response-by-alcatel-lucent-to-berec-2012_0.pdf b. cornaglia , fixed access network sharing - architecture and nodal requirements , wt-370 , revision2 , apr .b. cornaglia , fixed access network sharing .workshop on fibre access and core network evolution : what are the next steps towards an integrated end - to - end network ? , ecoc 2015 .sdxcentral , sdn and nfv market size report , 2015 edition .available at : https://www.sdxcentral.com/wp-content/uploads/2015/05/sdxcentral-sdn-nfv-market-size-report-2015-a.pdf federal communications commission , fcc implementation schedule for the telecommunications act of 1996,tech .available at : http://transition.fcc.gov/reports/implsched.html oshare science foundation ireland project 14/ia/2527 , available at : www.oshare.ie j. m. marquez - barja , et al ., decoupling resource ownership from service provisioning to enable ephemeral converged networks ( ecns ) .ieee eucnc , june 2016 .f. slyne and m. ruffini .flatland : a novel sdn - based telecoms network architecture enabling nfv and metro - access convergence .ieee ondm , may 2016 .b. lantz , et al . ,a network in a laptop : rapid prototyping for software - defined networks .acm workshop on hot topics in networks , oct . , 2010 .m. ruffini , et al . , software defined networking for next generation converged metro - access networks .elsevier oft , vol .26 , part a , dec .s. mcgettrick , et al ., experimental end - to - end demonstration of shared n:1 dual homed protection in long reach pon and sdn - controlled core .ofc 2015 , paper tu2e.5 .r. vilalta , et al . , the need for a control orchestration protocol in research projects on optical networking .ieee eucnc , july 2015 . j. m. gran josa , et al . ,end - to - end service orchestration from access to backbone .ieee ondm , may 2016 .marco ruffini received his m.eng . in telecommunications in 2002 from polytechnic university of marche , italy . after working as a research scientist for philips in germany, he joined trinity college dublin ( tcd ) in 2005 , where he received his ph.d . in 2007 . since 2010 , he has been assistant professor ( tenured 2014 ) at tcd .he is principal investigator at the ctvr / connect telecommunications research centre at tcd , currently involved in several science foundation ireland ( sfi ) and h2020 projects , and leads the optical network architecture group at trinity college dublin .he is author of more than 80 journal and conference publications and more than 10 patents .his research focuses on flexible and shared high - capacity fibre broadband architectures and protocols , network convergence and software defined networks control planes .
|
future 5 g services are characterised by an unprecedented need for high rate , ubiquitous availability , ultra - low latency and high reliability . the fragmented network view that is widespread in current networks will not stand the challenge posed by next generations of users . a new vision is required , and this paper provides an insight on how network convergence and application - centric approaches will play a leading role towards enabling the 5 g vision . the paper , after expressing the view on the need for an end - to - end approach to network design , takes the reader on a journey through the expected 5 g network requirements and outlines some of the work currently carried out by the main standardisation bodies . it then proposes the use of the concept of network convergence for providing the overall architectural framework to bring together all the different technologies within a unifying and coherent network ecosystem . the novel interpretation of multi - dimensional convergence we introduce leads us to the exploration of aspects of node consolidation and converged network architectures , delving into details of optical - wireless integration and future convergence of optical data centre and access - metro networks . we then discuss how ownership models enabling network sharing will be instrumental in realising the 5 g vision . the paper concludes with final remarks on the role sdn will play in 5 g and on the need for new business models that reflect the application - centric view of the network . finally , we provide some insight on growing research areas in 5 g networking . keywords : convergence , access - metro , next - generation 5 g , multi - service , multi - tenancy , consolidation , sharing , end - to - end , datacentre .
|
kinetic schemes are widely used for studying the thermodynamic , dynamic , and stochastic properties of macromolecules . these schemes are usually selected to be as simple as possible , such as the 2-state schemes for the bound and unbound states of enzymes or receptors and the open and closed states of ion channels . nevertheless , they can also be rather sophisticated ( e.g. , 8-state inositol trisphosphate receptors , the 10-state hemoglobin , and the 56-state chloride channels ) . the selection of kinetic schemes is mainly determined by the desired accuracy and the measurable quantities . since a low - dimensional scheme can usually be contracted from higher - dimensional ones , there exists a cascade of hierarchical markovian network models suitable for describing the time evolution of the populations of a macromolecule s functional states . these networks are anticipated to have indistinguishable kinetics , exhibiting identical mean trajectories after being projected to the low - dimensional network space . however , models with indistinguishable means do not necessarily have indistinguishable fluctuations . a question that arises is which schemes will give more relevant fluctuations to a real system and under which conditions unique fluctuation features can be obtained from different levels of contracted schemes ? these issues are essential for the reliability of various biological properties derived in terms of the fluctuations of a selected kinetic scheme , such as chemoreception , membrane conductance , and ion channel density . the inter - network fluctuation relations arise from a comparison between different coarse - grained dynamical systems . it resembles the comparison between different rate equations in the lumping analysis , widely used in systems biology and general chemical engineering . a central issue in that analysis is finding the lumping conditions for eliminating unimportant events or time scales in a large network , of typically over species in systems biology , to reduce its complexity . interestingly , this contraction is mathematically analogous to merging experimentally indistinguishable states to obtain simple transition networks for the conformational change of a macromolecule . for instance , the hodgkin - huxley potassium ion channel has 16 configurations depending on whether its four individual gates are open or closed . however , this channel is often regarded as a 2-state system , described by whether or not ions can pass through it in a patch - clamp recording . the contraction from a 16-state to a 2-state model is because the gating current recording is incapable of resolving the detailed structure of the channel configuration . in terms of lumping analysis , this contraction is an approximate lumping . despite that correspondence , the original lumping analysis focuses on the relations between mean dynamics and is not concerned with fluctuations . to extract this stochastic component , we generalize the lumping theory from original rate equations ( re ) to chemical master equations ( cme ) and stochastic differential equations ( sde ) and study kinetically equivalent ( ke ) and thermodynamically equivalent ( te ) hierarchical kinetic schemes , under intrinsic and extrinsic noises . the results go beyond the conventional assumption of `` fast relaxations '' and contribute to our understanding of why a kinetic system can be contracted .
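the hodgkin - huxley example mentioned above can be made concrete with a few lines of python : the 2^4 = 16 gate configurations are enumerated and then merged either by the number of open gates ( a 5 - state scheme grouping configurations with the same number of open gates ) or by whether the channel conducts , which is all a patch - clamp recording can resolve . this is only an illustrative sketch of the state - merging idea , not code from the paper .

```python
# Small sketch of the state merging described above: the 2^4 = 16 gate
# configurations of a four-gate potassium channel are lumped into classes that
# a patch-clamp recording can actually distinguish.

from itertools import product
from collections import Counter

configs = list(product((0, 1), repeat=4))          # 16 microscopic configurations (1 = gate open)

# lump by the number of open gates -> a 5-state scheme
by_open_gates = Counter(sum(c) for c in configs)
# lump by conductance: the channel passes current only when all four gates are open
by_conductance = Counter("open" if all(c) else "closed" for c in configs)

print(len(configs), "configurations")              # 16
print(dict(by_open_gates))                         # {0: 1, 1: 4, 2: 6, 3: 4, 4: 1}
print(dict(by_conductance))                        # {'closed': 15, 'open': 1}
```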
in the case of extrinsic noise , different kinetic schemes can give different fluctuations even when their average trajectories are the same .this opens a possibility of identifying a correct kinetic model by observing fluctuations .notably , lumping conditions here are used for generating complex ke or te networks from simple networks , in opposite to their original goal of reducing complex networks to simple networks . furthermore , for the conformational change of macromolecules discussed below , it is sufficient to focus on linear res and linear lumping transformations .let system be an -dimensional kinetic scheme described by the linear re , where is the population of the -th state and may represent the mean dynamics of some stochastic processes discussed later , denotes the matrix of rate constants from states to , with , and ^t ] via which is the state vector of some reduced system .if each column of is a standard unit vector , denotes a proper lumping ( see the example in s1 ) . since all lumpings in the following discussions are proper , " this term will be neglected below .the re which satisfies is generally an integral - differential equation with a memory kernel .if that kernel vanishes , the re has a simple autonomous form as ( 1 ) , with , and network is called `` exactly lumpable . ''exact lumping makes the contracted system of an autonomous system again autonomous , self - contained , and not having a memory kernel . if the memory kernel does not vanish but is small , is called `` approximately lumpable , '' which has a broad practical application .exact lumping is the limiting case of all approximate lumpings when the memory effect tends to zero . equations ( 2 ) and ( 3 ) together constitute the ke condition between and , or the condition for which can be exactly lumped into .notice that ( 2 ) alone is insufficient for this condition , because any can lump into some , which is not necessarily self - contained .quantitatively , the ke condition between and can be expressed by their rate constant matrices which implies .when is used to lump into , the states in are first partitioned into sets , with , by the row vectors of ( see s1 ) .then , all states in are merged as the state in and termed the internal states " of . using the same procedure to merge all states in on both sides of ( 1 ) , one obtains the ke condition in terms of rate constants for any , with and any , in analogy to that known for finite markov chains .notice that the ke condition is fulfilled only when ( 5 ) is satisfied for all . in brief, the ke condition can be expressed as ( 4 ) or ( 5 ) , or equivalently as ( 2 ) together with ( 3 ) . since ( 5 )does not demand fast relaxations between the internal states in , the existence of fast variables or large is not the prerequisite for exact lumpability .however , lumping analysis can also eliminate fast variables , as the quasi - equilibrium or quasi - steady - state approximations do .given a , whether described by ( 1 ) can be exactly lumped into by is decided by whether has an autonomous re ( 3 ) , as discussed above . 
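this test is easy to carry out numerically . the following python sketch builds a proper lumping matrix m , forms the candidate contracted matrix from m k , and checks the exact - lumpability condition of the form k_hat m = m k ; the 4-state rate values , the chosen partition , and the convention dx / dt = k x with k[i , j] the rate from state j to state i are illustrative assumptions , not values taken from the text .

```python
# sketch of the exact-lumpability test: does a matrix K_hat with
# K_hat @ M = M @ K exist for the proper lumping M?
# assumed convention: dx/dt = K x, with K[i, j] the rate from state j to i;
# the rates and the partition below are illustrative only.
import numpy as np

def proper_lumping_matrix(partition, n):
    """rows of M are indicator vectors of the merged sets."""
    M = np.zeros((len(partition), n))
    for a, states in enumerate(partition):
        M[a, states] = 1.0
    return M

def exact_lumpability(K, M, tol=1e-10):
    """return (is_lumpable, K_hat); for a proper lumping the candidate
    K_hat = (M K) M^+ is the only possible contracted rate matrix."""
    K_hat = (M @ K) @ np.linalg.pinv(M)
    residual = np.max(np.abs(K_hat @ M - M @ K))
    return residual < tol, K_hat

# off-diagonal rates chosen so that the summed rate from every internal state
# of one merged set into the other set does not depend on the internal state
K = np.array([[0.0, 0.2, 0.3, 0.7],
              [0.1, 0.0, 0.4, 0.0],
              [0.5, 1.0, 0.0, 0.6],
              [0.5, 0.0, 0.9, 0.0]])
np.fill_diagonal(K, -K.sum(axis=0))          # columns sum to zero

M = proper_lumping_matrix([[0, 1], [2, 3]], 4)
lumpable, K_hat = exact_lumpability(K, M)
print("exactly lumpable:", lumpable)
print("contracted rate matrix:\n", K_hat)
```

changing any single off - diagonal entry of k generally breaks the condition and the routine returns a nonzero residual , i.e. , the scheme is then at best approximately lumpable .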
if two autonomous and are given first instead , whether can be lumped into is decided by whether some can be found to connect them by ( 5 ) .if such exists , of and of are indistinguishable , in that the trajectories and are identical .to extract the fluctuation relations of intrinsic noises between hierarchical networks , we extend the lumping analysis from the re ( 1 ) to its cme .suppose a macromolecule has conformational states whose transition network obeys the kinetic equation ( 1 ) .if a system consists of macromolecules , its cme , , \nonumber\end{aligned}\ ] ] describes the evolution of the joint probability of finding the state vector ^t ] and or a two - state system with =[10,12] ] .network can be contracted into a two - dimensional te network ( ) by merging states and ( and ) of into state of ( ) and renaming state ( ) of as state of ( ) .the probability , , of finding macromolecules in the -th state at time is estimated by counting the frequency of that event when the system evolves times .two initially distinct distributions of ( blue ) and of ( green ) , as well as of ( red ) and of ( yellow ) , approach each other as .this example demonstrates the increasing lumpability between the probabilities of two te networks , as indicated by ( 7 ) and ( 13 ) , in terms of the marginal probability ., scaledwidth=49.0% ]
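a minimal simulation in the spirit of that comparison is sketched below : a single macromolecule hops on a 4-state network that is exactly lumpable into a 2-state network , and the occupancies of the merged states are compared with a direct simulation of the contracted scheme . the rate values are the same illustrative ones used in the previous sketch and are not taken from the text .

```python
# gillespie (ssa) sketch: occupancy of the lumped states obtained from the full
# 4-state chain versus a direct simulation of its 2-state contraction.
# rates are illustrative and satisfy the lumping condition used above.
import numpy as np

rng = np.random.default_rng(0)

def gillespie_occupancy(K, lump_of, t_end=2000.0, start=0):
    """simulate one continuous-time markov chain and return the fraction of
    time spent in each lumped state; K[i, j] is the rate from state j to i."""
    n = K.shape[0]
    occ = np.zeros(max(lump_of) + 1)
    state, t = start, 0.0
    while t < t_end:
        rates = K[:, state].clip(min=0.0)      # outgoing rates (off-diagonal)
        rates[state] = 0.0
        total = rates.sum()
        dt = rng.exponential(1.0 / total)
        occ[lump_of[state]] += min(dt, t_end - t)
        t += dt
        state = rng.choice(n, p=rates / total)
    return occ / occ.sum()

K4 = np.array([[-1.1, 0.2, 0.3, 0.7],
               [ 0.1,-1.2, 0.4, 0.0],
               [ 0.5, 1.0,-1.6, 0.6],
               [ 0.5, 0.0, 0.9,-1.3]])
K2 = np.array([[-1.0, 0.7],
               [ 1.0,-0.7]])               # contraction of K4 under {0,1},{2,3}

print("full chain, lumped occupancy :", gillespie_occupancy(K4, [0, 0, 1, 1]))
print("contracted 2-state occupancy :", gillespie_occupancy(K2, [0, 1]))
```

for an exactly lumpable pair the two occupancy vectors agree up to statistical error , which is the trajectory - level counterpart of the indistinguishable lumped kinetics discussed above .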
|
conventional studies of biomolecular behaviors rely largely on the construction of kinetic schemes . since the selection of these networks is not unique , a concern arises as to whether and under which conditions hierarchical schemes can reveal the same experimentally measured fluctuating behaviors and unique fluctuation - related physical properties . to clarify these questions , we introduce stochasticity into the traditional lumping analysis , generalize it from rate equations to chemical master equations and stochastic differential equations , and extract the fluctuation relations between kinetically and thermodynamically equivalent networks under intrinsic and extrinsic noises . the results provide a theoretical basis for the legitimate use of low - dimensional models in the studies of macromolecular fluctuations and , more generally , for exploring stochastic features in different levels of contracted networks in chemical and biological kinetic systems .
|
wireless systems are becoming increasingly interference limited rather than noise limited , attributed to the fact that the cells are decreasing in size and the number of users within a cell is increasing . mitigating the impact of interference between transmit - receive pairs is of great importance in order to achieve higher data rates . describing the complete capacity region of the interference channel remains an open problem in information theory .for very strong interference , successive cancellation schemes have to be applied , while in the weak interference regime , treating the interference as additive noise is optimal to within one bit . treating the interference as noise ,the achievable rates region has been found in to be the convex hull of hyper - surfaces .the rates region bounded by these hyper - surfaces is not necessarily convex , and hence a convex hull operation is imposed through the strategy of time - sharing .this paper adopts a novel approach into simplifying this rates region in the space by having only on / off binary power control . limiting each of the transmitters to a transmit power of either or , this consequently leads to corner points within the rates region . andby forming a convex hull through time - sharing between those corner points , it thereby leads to what we denote a crystallized rates region .utility maximization using game - theoretic techniques has recently received significant attention .most of the existing game theoretic works are based on the concept of nash equilibrium . however , the nash equilibrium investigates the individual payoff and might not be system efficient , i.e. the performance of the game outcome could still be improved . in 2005 ,nobel prize was awarded to robert j. aumann for his contribution of proposing the concept of correlated equilibrium . unlike nash equilibriumin which each user only considers its own strategy , correlated equilibrium achieves better performance by allowing each user to consider the joint distribution of the users actions . in other words ,each user needs to consider the others behaviors to see if there are mutual benefits to explore . likewise , mechanism design ( including auction theory ) is a subfield of game theory that studies how to design the game rule in order to achieve good overall system performance .mechanism design has drawn recently a great attention in the research community , especially after another nobel prize in 2007 .the paper presents three contributions with the following structure : 1 .section [ sec : crystallized ] introduces the concept of crystallized rates region with on / off power control .2 . section [ sec : ce ] applies the game theoretic concept of correlated equilibrium ( ce ) to the rates region problem .the ce exhibits the property of forming a convex set around the corner points , hence fitting suitably in the crystallized rates region formulation .3 . using mechanism design , section [ sec : mechanismdesign ] presents an example in applying these two concepts for the channel and formulates the vickrey - clarke - groves auction utility . 
to find the solution point distributively ,the regret matching learning algorithm is employed by virtue of its property of converging to the correlated equilibrium set .section [ sec : simulation ] demonstrates the ideas through simulation , and section [ sec : conclusion ] draws the conclusions .a interference channel is illustrated in fig .[ fig:2userssys ] .user transmits its signal to the receiver .the receiver front end has additive thermal noise of variance .there is no cooperation at the transmit , nor at the receive side .the channel is flat fading . for brevity , , , and represent the channel power gain _normalized _ by the noise variance .explicitly , , , , and , where is the channel gain from the transmitter to the receiver .user transmits with power , and it has a maximum power constraint of . user interference channel ] in an effort to keep the complexity of the receivers fairly simple , the interference is treated as noise .such case is encountered in sensor networks and in cellular communication where it is desired to have low power - consuming and correspondingly low complexity receivers .therefore , with the power vector ^t ] , , denote the _ system _ time - sharing coefficients vector of the respective corner points ( user 1 transmitting only with a time - sharing coefficient ) , ( user 2 transmitting only with a time - sharing coefficient ) , and ( both users transmitting with a time - sharing coefficient ) .the reason is labeled a _ system _ time - sharing coefficients vector is to emphasize the combinatorial element in constructing the corner points , where the cardinality of . then for case , in contrast with eq .( [ ratesequations ] ) , the new crystallized rates equations for and are : any solution point on the crystallized frontier would lie somewhere on the time - sharing line connecting two points for the case ; and similarly for the case , the solution point lies somewhere on a time - sharing plane connecting three points , then by deduction we obtain the following corollary : [ cor : max_n_nonzero ] the system time - sharing vector , for any solution point on the crystallized rates region , has at maximum nonzero coefficients out of its elements . examining the crystallized rates region in more details for the interference channel , we evaluate the area of the rates region bounded by the potential lines and achieved through power control , and the area of the rates region formed by time - sharing points a , b , and c. in effect , we are evaluating how much gain or loss results from completely replacing the traditional power control scheme ( see eq .( [ ratesequations ] ) ) with the time - sharing scheme between the corner points ( see eq .( [ crystallizedrates ] ) ) .for this purpose we consider the symmetric channel , where , and we increase the interference to vary the signal to interference ratio from to .the value of the area bounded by the power control potential lines is plotted in fig.[fig : rrareas ] together with the value of the area bounded by the time - sharing scheme through the point b ( formed by the time - sharing lines a - b and b - c ) .in addition , for reference , the area confined by the time - sharing line a - c is plotted , which does not depend on the sir . 
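the corner points and the two candidate frontiers compared above can be computed with a few lines of code . in the sketch below the normalized gains , the power constraint , and the convention that g[i][j] is the gain from transmitter j to receiver i are illustrative assumptions , not the values used in the figures .

```python
# corner points of the crystallized rates region for the 2-user interference
# channel, treating interference as noise. gains are normalized by the noise
# variance; g[i][j] is assumed to be the gain from transmitter j to receiver i.
import numpy as np

def rates(p, g):
    r1 = np.log2(1.0 + g[0][0] * p[0] / (1.0 + g[0][1] * p[1]))
    r2 = np.log2(1.0 + g[1][1] * p[1] / (1.0 + g[1][0] * p[0]))
    return np.array([r1, r2])

def cross2(u, v):
    return u[0] * v[1] - u[1] * v[0]

g = [[10.0, 4.0], [4.0, 10.0]]      # illustrative symmetric channel
p_max = 1.0

a = rates((p_max, 0.0), g)          # user 1 alone
b = rates((p_max, p_max), g)        # both users at full power
c = rates((0.0, p_max), g)          # user 2 alone
print("corner points  a:", a, " b:", b, " c:", c)

# any crystallized point is r = theta_a * a + theta_b * b + theta_c * c,
# with nonnegative coefficients summing to at most one (corollary 1)
theta = (0.4, 0.2, 0.4)
print("time-shared point:", theta[0] * a + theta[1] * b + theta[2] * c)

# areas of the two candidate regions: frontier o-a-b-c-o versus the a-c line
area_via_b = 0.5 * abs(cross2(a, b) + cross2(b, c))
area_a_c = 0.5 * abs(cross2(a, c))
print("area via b: %.3f   area a-c time-sharing: %.3f" % (area_via_b, area_a_c))
```

with the gains chosen here the a - c time - sharing area already exceeds the area of the frontier through b , consistent with the high - interference behaviour discussed above .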
for weak interference , or equivalently noise - limited regime , point bis used in constructing the crystallized region .as the interference increases beyond a certain threshold level , time - sharing through point b becomes suboptimal , and time - sharing a - c becomes optimal .the exact switching point from power control to time - sharing has been found in . in fig .[ fig : rrareas ] , this happens at the intersection of the blue line ( with circle markers ) and the a - c dotted line .as indicated in fig .[ fig : rrareas ] , there is no significant loss in the rates region area if time - sharing is used universally instead of traditional power control , in fact in some cases time - sharing offers considerable gain .specifically , whenever the potential lines exhibit concavity , time - sharing loses to power control ; whenever the potential lines exhibit convexity , time - sharing gains over power control .different values of also lead to the same conclusion . in fig .[ fig : rrareasperct ] the percentage of the rates region gain ( or loss ) in using the time - sharing scheme ( through point b ) over the power control scheme is plotted for the same symmetric channel examined in fig .[ fig : rrareas ]. the loss does not exceed , and the time - sharing strategy is therefore quite attractive . for illustration purposes , note that the x - axis in fig .[ fig : rrareasperct ] was chosen to span the interference range of to ; whereas in fig . [ fig : rrareas ] the x - axis interference range was from to .if we were to plot the x - axis in fig . [ fig : rrareasperct ] up to instead of the , the percentage gain would have reached on the y - axis up to .note that for high interference time - sharing through point b is suboptimal and time - sharing a - c is optimal , so the gain over power control is even larger .the crystallized rates region offers a good alternative to form the rates region of the interference channel with marginal loss , and sometimes significant gain ( especially for interference - limited regimes ) .therefore the problem revolves around finding the convex hull over the set of polygons connecting the corner points .one technique explored to achieve the convex hull is through the concept of correlated equilibrium in game theory .every user has a transmit strategy of either or . is the utility of user .nash equilibrium is a well - known concept to analyze the outcome of a game which states that in the equilibrium every user will select a utility - maximizing strategy given the strategies of every other user .nash equilibrium achieving strategy of user is defined as : where is any possible strategy of user , is the strategy vector of all other users except user , and is the strategy space . in other words , given the other users actions , no user can increase its utility alone by changing its own action .next the concept of the correlated equilibrium is studied .it is more general than the nash equilibrium and it was first proposed in .the idea is that a strategy profile is chosen according to the joint distribution instead of the marginal distribution of users strategies .when converging to the recommended strategy , it is to the users best interests to conform to this strategy .the distribution is called the correlated equilibrium , which is defined as : a probability distribution is a correlated equilibrium of a game , if and only if , for all , , and , \geq0 , \forall \alpha_i\in \omega_i.\ ] ] denotes the strategy space of all the users other than user . 
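the inequality defining the correlated equilibrium can be checked numerically for small games . the sketch below tests a candidate joint distribution over the four on / off profiles of a two - user game ; the payoff table is a simple placeholder ( rate - like payoffs : 3 when transmitting alone , 1 when both transmit , 0 when silent ) and is not the utility derived later in the paper .

```python
# check the correlated-equilibrium inequality for a 2-player, 2-action game.
# actions are 0 ("off") and 1 ("on"); u[i][a1][a2] is the payoff of player i
# for the profile (a1, a2). the payoff values are placeholders.
u = [
    [[0.0, 0.0], [3.0, 1.0]],     # player 0
    [[0.0, 3.0], [0.0, 1.0]],     # player 1
]
# candidate joint distribution p[a1][a2]: time-sharing between the profiles
# where only one user transmits (the a and c corner points)
p = [[0.0, 0.5], [0.5, 0.0]]

def is_correlated_equilibrium(u, p, tol=1e-9):
    for i in (0, 1):                       # player
        for a in (0, 1):                   # recommended action
            for b in (0, 1):               # candidate deviation
                gain = 0.0
                for a_other in (0, 1):     # opponent's recommended action
                    prof = (a, a_other) if i == 0 else (a_other, a)
                    dev = (b, a_other) if i == 0 else (a_other, b)
                    prob = p[prof[0]][prof[1]]
                    gain += prob * (u[i][dev[0]][dev[1]] - u[i][prof[0]][prof[1]])
                if gain > tol:             # deviating from the recommendation pays
                    return False
    return True

print("correlated equilibrium:", is_correlated_equilibrium(u, p))
```

with such plain rate - like payoffs the a - c time - sharing distribution fails the test , because a silent user always gains by switching on ; this is one way to see why a modified ( auction ) utility is introduced below .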
as every user , , has a possible or strategy choice, then the cardinality of is .therefore the summation in eq .( [ eqn : ce1 ] ) have summation terms .the summation over generates the marginal expectation .the inequality ( [ eqn : ce1 ] ) means that when the recommendation to user is to choose action , then choosing action instead of can not result in a higher expected payoff to user .it is worth to point out that the probability distribution is a joint point mass function ( pmf ) of the different combinations of the users strategies .therefore , is the joint pmf of the resulting _ system _ strategy points .discounting the trivial system strategy of all the users transmitting at , there exist system strategy points that we wish to find their pmfs .revisiting subsection [ subsec : thetas ] , the point mass functions that we want to find are the system time - sharing coefficients , .we can index those pmfs to the corresponding in any bijective one - to - one mapping .index can denote the base- representation of the binary users strategies ( starting with user s binary action as the least significant bit ) .for example , let denotes that user transmits with , and denotes that user is silent with . in subsection[ subsec : thetas ] , was mapped to user transmitting , equivalently ; where we defined explicitly as the point mass function of the point . and similarly was mapped to both users transmitting , .morever , by definition , , and as discussed in corollary i , the solution point possesses at most nonzero pmfs in the joint distribution .the correlated equilibriums set is nonempty , closed and convex in every finite game .in fact , every nash equilibrium and mixed ( i.e. time - sharing ) strategy of nash equilibriums are within the correlated equilibrium set , and the nash equilibrium correspond to the special case where is a product of each individual user s probability for different actions , i.e. , the play of the different users is independent .there are two major challenges to implement correlated equilibrium for rate optimization over the interference channel . first , to ensure the system converges to the desired point ( such as time - sharing between a - c instead of going through point b in fig .[ fig:2drateregion ] ( d ) ) . as an example, we considered an auction utility function from mechanism design .second , to achieve the equilibrium , a distributive solution is desirable , where we propose the self - learning regret matching algorithm .one important mechanism design is the vickrey - clarke - groves ( vcg ) auction which imposes cost to resolve the conflicts between users .using the basic idea of vcg , where we want to maximize , the user utility is designed to be the rate minus a payment cost function as the payment cost of user is expressed as the performance loss of all other users due to the inclusion of user , explicitly : if is for user , it is equivalent to user being absent , consequently the cost whenever . for the case , focusing on when , hence : a ) if , then and ; b ) if , then : follows by symmetry . 
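the vcg utilities described above are easy to tabulate for the on / off game . the sketch below computes u_i = r_i − c_i for the four profiles , with the cost of a user equal to the rate loss it causes to the other user ; the normalized gains and the power constraint are illustrative values chosen in the interference - limited regime .

```python
# vcg auction utilities for the 2-user on/off game: u_i = r_i - c_i, where c_i
# is the rate loss caused to the other user by user i transmitting.
# gains are illustrative; g[i][j] is the normalized gain from tx j to rx i.
import numpy as np

g = [[10.0, 8.0], [8.0, 10.0]]      # illustrative interference-limited gains
p_max = 1.0

def rates(p1, p2):
    r1 = np.log2(1.0 + g[0][0] * p1 / (1.0 + g[0][1] * p2))
    r2 = np.log2(1.0 + g[1][1] * p2 / (1.0 + g[1][0] * p1))
    return r1, r2

def vcg_utilities(a1, a2):
    """a_i in {0, 1}: silent / transmit at p_max."""
    p1, p2 = a1 * p_max, a2 * p_max
    r1, r2 = rates(p1, p2)
    c1 = (rates(0.0, p2)[1] - r2) if a1 else 0.0   # rate loss caused to user 2
    c2 = (rates(p1, 0.0)[0] - r1) if a2 else 0.0   # rate loss caused to user 1
    return r1 - c1, r2 - c2

for a1 in (0, 1):
    for a2 in (0, 1):
        u1, u2 = vcg_utilities(a1, a2)
        print("actions (%d, %d) -> vcg utilities (%.3f, %.3f)" % (a1, a2, u1, u2))
```

with these gains the profile where both users transmit gets a negative utility , so the auction makes simultaneous transmission unattractive and pushes the equilibrium towards the time - sharing corner points .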
as a result , the vcg utilities for the channel are summarized in table [ table : modified chick ] , where notice that each user pays the cost because of its involvementthis cost function can be calculated and exchanged before transmission with little signalling overhead .finally , we exhibit the regret - matching algorithm to learn in a distributive fashion how to achieve the correlated equilibrium set in solving the vcg auction .the algorithm is named regret - matching ( no - regret ) algorithm , because the stationary solution of the learning algorithm exhibits no regret and the probability to take an action is proportional to the `` regrets '' for not having played the other actions .specifically , for user there are two distinct binary actions and at every time ( where , and ) .the regret of user at time for playing action instead of the other action is where .\vspace{-1mm}\ ] ] here is the utility at time and is other users actions . has the interpretation of average payoff that user would have obtained if it had played action every time in the past instead of choosing .the expression can be viewed as a measure of the average regret .similarly , represents the average regret if the alternative action has been taken . recalling the discussion in subsection [ subsec : ceincrys ] about the mapping notation we adopted between the point mass functions and the system time - sharing coefficients ( ), then we want to find the point mass function , , and .as discussed in subsection [ subsec : thetas ] there exist pmfs for the case . for the trivial case of the origin point , .we are left to obtain , , and . specifically to the case , this simplifies further to finding only _ two _ variables .denoting , and , then can be deduced as .the details of the regret - matching algorithm is shown in table [ table : regretmatching ] .the probability is a linear function of the regret , see eq .( [ eq : regret_finding_p ] ) .the algorithm has a complexity of . by using the theorem in ,if every user plays according to the learning algorithm in table [ table : regretmatching ] , the adaptive learning algorithm has the property that the probability distribution found converges on the set of correlated equilibrium .it has been shown that the set of correlated equilibrium is nonempty , closed and convex .therefore , by using the algorithm in table [ table : regretmatching ] , we can guarantee that the algorithm converges to the set of ce as .to demonstrate the proposed scheme , we setup a interference channel where . in fig .[ fig:2_noise ] , we show the crystallized rates region for the noise - limited regime with , and .the learning algorithm converges close to the nash equilibrium , which means that both users transmit with maximum power all the time .this corresponds to the case in fig .[ fig:2drateregion ] ( a ) . in fig .[ fig:2_user2 ] , we show the type ii time - sharing case with , and .the algorithm converges to and , which means the probability that user 2 transmits alone is , and the probability that both users transmit with full power is .this corresponds to the case in fig .[ fig:2drateregion ] ( c ) . finally , in fig .[ fig:2_inter ] , we show the interference - limited regime with as well as seven different instances of and . 
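before looking at those simulation results , the regret - matching update of table [ table : regretmatching ] can be written compactly for the two - user on / off game . in the sketch below the utilities are the vcg utilities of the previous sketch , while the normalization constant mu , the horizon t and the channel gains are assumed values ; the empirical frequencies of the played profiles give estimates of the time - sharing coefficients theta .

```python
# regret-matching sketch (hart & mas-colell style) for the 2-user on/off game
# with vcg utilities. mu, T and the gains are assumed illustrative values.
import numpy as np
from collections import Counter

rng = np.random.default_rng(1)
g = [[10.0, 8.0], [8.0, 10.0]]          # illustrative interference-limited gains
p_max, mu, T = 1.0, 20.0, 20000

def rate(i, p):
    j = 1 - i
    return np.log2(1.0 + g[i][i] * p[i] / (1.0 + g[i][j] * p[j]))

def util(i, a):
    """vcg utility of user i for the on/off profile a = (a1, a2)."""
    if a[i] == 0:
        return 0.0
    p = [a[0] * p_max, a[1] * p_max]
    p_wo = list(p); p_wo[i] = 0.0
    j = 1 - i
    return rate(i, p) - (rate(j, p_wo) - rate(j, p))

actions = [1, 1]
cum_regret = np.zeros((2, 2, 2))        # [user, played action, alternative action]
counts = Counter()

for t in range(1, T + 1):
    a = tuple(actions)
    counts[a] += 1
    for i in (0, 1):
        for alt in (0, 1):
            swapped = list(a); swapped[i] = alt
            cum_regret[i, a[i], alt] += util(i, tuple(swapped)) - util(i, a)
    for i in (0, 1):                    # regret-matching update of the next action
        last, alt = a[i], 1 - a[i]
        avg_regret = max(cum_regret[i, last, alt], 0.0) / t
        if rng.random() < min(avg_regret / mu, 1.0):
            actions[i] = alt

theta = {profile: count / T for profile, count in counts.items()}
print("empirical time-sharing coefficients theta:", theta)
```

the empirical distribution converges towards the correlated equilibrium set ; whether a single corner or a genuine time - sharing mixture is reached depends on the realisation and on the utility design .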
first , the nash equilibriums exhibit much poorer performance than the tdma time - sharing lines .the proposed learning algorithm converges to a point on the tdma time - sharing lines , this corresponds to the case in fig .[ fig:2drateregion ] ( d ) .moreover , the learning algorithm favors the weaker user . in fig .[ fig:2_sharing ] , we show the interference - limited case with , and . due to the symmetry ,the learning algorithm achieves probabilities of , which means the two users conduct equal time - sharing over the channel , where each transmits solely at full power while the other is silent ; and such two transmission states happen equally 50% of the time each .this corresponds to the a - c time - sharing case in fig .[ fig:2drateregion ] ( d ) .user case ] ( c ) ) ]treating the interference as noise , the paper proposes a novel approach to the rates region in the interference channel , composed by the time - sharing convex hull of corner points achieved through on / off binary power control .the resulting rates region is denoted crystallized rates region .it then applies the concept of correlated equilibrium from game theory to form the convex hull of the crystallized region .an example in applying these concepts for the case , the paper considered a mechanism design approach to design the vickrey - clarke - groves auction utility function .the regret - matching algorithm is used to converge to the solution point on the correlated equilibrium set , to which subsequently simulation was presented .r. etkin , d. tse and h. wang , gaussian interference channel capacity to within one bit " , _ arxiv preprint cs.it/0702045 , 2007 - arxiv.org_. [ online ] http://www.citebase.org/abstract?id=oai:arxiv.org:cs/0702045 m. charafeddine , a. sezgin , and a. paulraj , rates region frontiers for interference channel with interference as noise " , _ in proc . of annual allerton conference on communication , control , and computing , allerton , il , sep .2007_. [ online ] http://www.csl.uiuc.edu/allerton/archives/allerton07/pdfs/papers/0048.pdf e. jorswieck and e. larsson , the miso interference channel from a game - theoretic perspective : a combination of selfishness and altruism achieves pareto optimality " , in proc . of _ icassp 2008_ , invited .d. p. palomar , j. m. cioffi , and m. a. lagunas , uniform power allocation in mimo channels : a game - theoretic approach " , _ ieee transactions on information theory _ , vol .49 , no . 7 , p.p . 1707 - 1727 , jul .z. han , z. ji , and k. j. r. liu , non - cooperative resource competition game by virtual referee in multi - cell ofdma networks " , _ ieee journal on selected areas in communications , special issue on non - cooperative behavior in networking _ , vol.53 ,no.10 , p.p.1079 - 1090 , aug .2007 .z. han , c. pandana , and k. j. r. liu , distributive opportunistic spectrum access for cognitive radio using correlated equilibrium and no - regret learning " , in proceedings of _ ieee wireless communications and networking conference _, hong kong , china , march 2007 .
|
treating the interference as noise in the interference channel , the paper describes a novel approach to the rates region , composed of the time - sharing convex hull of corner points achieved through on / off binary power control . the resulting rates region is denoted _ crystallized _ rates region . by treating the interference as noise , the rates region frontiers have been found in the literature to be the convex hull of hyper - surfaces . the rates region bounded by these hyper - surfaces is not necessarily convex , and therefore a convex hull operation is imposed through the strategy of time - sharing . this paper simplifies this rates region in the space by having only an on / off binary power control . this consequently leads to corner points situated within the rates region . a time - sharing convex hull is imposed onto those corner points , forming the crystallized rates region . the paper focuses on game theoretic concepts to achieve that crystallized convex hull via correlated equilibrium . in game theory , the correlated equilibrium set is convex , and it contains the nash equilibriums and their time - sharing mixed strategies . in addition , the paper considers a mechanism design approach to carefully design a utility function , particularly the vickrey - clarke - groves auction utility , where the solution point is situated on the correlated equilibrium set . finally , the paper proposes a self - learning algorithm , namely the regret - matching algorithm , that converges to the solution point on the correlated equilibrium set in a distributed fashion .
|
social norms are the basis of a community . they are often adopted and respected even if in contrast with an agent s immediate advantage , or , alternatively , even if they are costly with respect to a naive behavior .indeed , the social pressure towards a widespread social norm is sometimes more powerful than a norm imposed by punishments .it is well known that the establishment of social norms is a difficult task and their imposition is not always fulfilled .this problem has been affronted by axelrod in a game - theoretic formulation , as the foundation of the cooperation and of the society itself .axelrod s idea is that of a repeated game .although in a one - shot game it is always profitable to win not following any norm , in a repeated game there might be several reasons for cooperation , the most common ones are direct reciprocity and reputation . in all these games ,the crucial parameters are the cost of cooperation with respect to defeat , and the expected number of re - encounters with one s opponent or the probability that one s behavior will become public .one can assume that these aspects are related to the size of the local community with which one interacts and the fraction of people in this community that share the acceptance of the social norm .indeed , the behavior of a spatial social game is strongly influenced by the network structure . in the presence of a social norm, people can manifest a conformist or a non conformist or contrarian attitude , characterized by the propensity to agree or disagree with the average opinion in their neighborhood .contrarian agents were first discussed in the field of finance and later in opinion formation models .contrarian behavior may have an advantage in financial investment .financial contrarians look for mispriced investments , buying those that appear to be undervalued by the market and selling those that are overpriced . in opinion formation models , contrarians gather the average opinion of their neighbors and choose the opposite one .reasonable contrarians do not violate social norms , _i.e. _ , they agree with conformists if the majority of neighbors is above a certain threshold . models of social dynamics have been studied extensively . . in this paperwe model the dynamics of a homogeneous community with different degrees of reasonable contrarianism .one of the main motivations for this study is that of exploring the possible behavior of autonomous agents employed in algorithmic trading in an electronic market .virtually all markets are now electronic and the speed of transaction require the use of automatic agents ( algorithmic trading ) . our study can be consider as an exploration of possible collective effects in a homogeneous automatic market .we consider a simplified cellular automaton model .each agent can have one of two opinions at time , and we study the parallel evolution of such agents , which can be seen also as a spin system .a society of conformists can be modeled as a ferromagnet and one of contrarians as an antiferromagnet .each agent changes his opinion according to the local social pressure or what is the same , the average opinion of his neighbors . in spinlanguage , social norms can be represented as plaquette terms since they are non additive and important when the social pressure is above or below given thresholds .the model , presented in sec .[ sec : model ] , simulates a society of reasonable contrarians that can express one of two opinions , 0 and 1 . 
at each time step, each agent changes its opinion according to a transition probability that takes into account the average opinion of his neighbors , that is , the local social pressure and the adherence to social norms .the neighborhoods are fixed in time .the transition probability depends on a parameter , analogous to the spin coupling in the ising model , which is positive for a society of conformists ( ferromagnet ) and negative for one of contrarians ( antiferromagnet ) .for a one dimensional society where the neighborhood of each agent includes its first nearest neighbors , is the connectivity , the average opinion fluctuates around the value 1/2 , regardless of the values of the parameters of the transition probability .simulations of the one - dimensional version of the model show irregular fluctuations at the microscopic level , with short range correlations .the mean field approximation of the model for the average opinion is a discrete map which exhibits bifurcation diagrams as the parameters and change , as discussed in sec . [sec : meanfield ] .the diagrams show a period doubling route towards chaos . in sec .[ sec : smallworld ] we discuss the model on watts - strogatz networks that exhibit the small - world effect .we find a bifurcation diagram as the fraction of rewired links changes .since the opinion of agents change probabilistically , we speak of probabilistic bifurcation diagrams . in sec .[ sec : networks ] , the reasonable nonconformist opinion model is extended to scale - free networks .again , we observe a probabilistic bifurcation diagram , similar to the previous ones , by varying the coupling .we are able to obtain a good mapping of the scale - free parameters onto the mean field approximation with fixed connectivity . in order to compare the deterministic and probabilistic bifurcation diagrams, we exploit the entropy of the average opinion . in the deterministic case ,large values of correspond to positive values of the lyapunov exponent . in secs .[ sec : meanfield ] , [ sec : smallworld ] , and [ sec : networks ] we show that can be used to characterize numerically order and disorder in deterministic and probabilistic bifurcation diagrams .finally we present some conclusions .each of the agents has opinion at the discrete time with and .the state of the society is . in the context of cellular automata and discrete magnetic systems ,the state at site is and the spin at site is respectively . the average opinion is given by ( color online ) the transition probability given by eq . with , , , and .] the opinion of agent evolves in time according to the opinions of his neighbors , identified by an adjacency matrix with components . if agent is a neighbor of agent , , otherwise .the adjacency matrix defines the network of interactions and is considered fixed in time .the connectivity of agent is the size of his neighborhood , the average local opinion or social pressure , is defined by the opinion of agent changes in time according to the transition probability that agent will hold the opinion at time given the local opinion at time .this transition probability , shown in fig .[ fig : tau - mf ] , is given by with .the quantity denotes the threshold for the social norm , and the probability of being reasonable . with or , and are absorbing states . 
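the algebraic form of the transition probability did not survive the extraction above , so the sketch below uses an assumed piecewise form that keeps only its qualitative ingredients : a social - norm threshold q beyond which agents follow the majority with probability 1 − ε ( the reasonable behaviour ) , and a linear conformist ( j > 0 ) or contrarian ( j < 0 ) response in between . it is meant only to illustrate one synchronous update of the cellular automaton on a ring .

```python
# one synchronous update of the opinion cellular automaton on a ring; each
# agent has 2k+1 neighbours (itself included). the piecewise form of tau is an
# assumption; only its qualitative features (norm threshold q, reasonableness
# eps, conformist J > 0 / contrarian J < 0 coupling) come from the text.
import numpy as np

rng = np.random.default_rng(2)

def tau(m, J, q, eps):
    """probability of adopting opinion 1 given the local mean opinion m."""
    if m <= q:                  # overwhelming majority for 0: be reasonable
        return eps
    if m >= 1.0 - q:            # overwhelming majority for 1
        return 1.0 - eps
    p = 0.5 + J * (m - 0.5)     # conformist (J > 0) or contrarian (J < 0)
    return float(np.clip(p, eps, 1.0 - eps))

def step(x, k, J, q, eps):
    n = len(x)
    offsets = np.arange(-k, k + 1)
    m = np.array([x[(i + offsets) % n].mean() for i in range(n)])
    probs = np.array([tau(mi, J, q, eps) for mi in m])
    return (rng.random(n) < probs).astype(int)

n, k = 1000, 5
x = rng.integers(0, 2, n)
for _ in range(200):
    x = step(x, k, J=-1.5, q=0.15, eps=0.05)
print("average opinion after 200 steps:", x.mean())
```

for contrarian couplings the average opinion of this one - dimensional system should stay close to 1/2 , with incoherent local fluctuations , as described in the text .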
in the following we set and if not otherwise stated .the results are qualitatively independent of and as long as they are small and positive .the transition probability has the symmetry with and , agent will likely agree with his neighbors , a society of conformists . with and , agent will likely disagree with his neighbors , a contrarian society . for or agent will likely agree ( if is small ) with the majority of his neighbors , regardless of the value of .we might also add an external field , modeling news and broadcasting media , but in this study we always keep .we are thus modeling a completely uniform society , _i.e. _ , we assume that the agent variations in the response to stimuli are quite small . moreover , we do not include any memory effect , so that the dynamics is completely markovian . in the language of spin systems, is the transition probability of the heat bath dynamics of a parallel ising model with ferromagnetic , , or antiferromagnetic , , interactions .the behavior of the transition probability in the regions and may be seen as due to a non - linear plaquette term that modifies the ferro / antiferro interaction .if we set and , the system becomes deterministic ( in magnetic terms , this is the limit of zero temperature ) . in one dimension , with , and this model exhibits a nontrivial phase diagram , with two directed - percolation transition lines that meet a first - order transition line in a critical point , belonging to the parity conservation universality class . in this case, we have the stability of the two absorbing states for ( conformist society or ordered phase ) , while for ( anti - ferro or contrarian ) the absorbing states are unstable and a new , disordered active phase is observed .the model has been studied in the one - dimensional case with larger neighborhood . in this caseone observes again the transition from an ordered to an active , microscopically disordered phase , but with no coherent oscillations . indeed ,if the system enters a truly disordered configuration , then the local field is everywhere equal to and the transition probabilities become insensitive to and equal to , see eq . .[ cols="^,^ " , ] in the appendix we show that the model dynamics on scale - free networks is comparable to the mean field approximation of sec .[ sec : meanfield ] on a network with constant connectivity with in figs .[ fig : sfn - bif ] ( a ) and ( b ) we show the probabilistic bifurcation diagrams of the model on scale - free networks as a function of for two values of and in figs . [fig : sfn - bif ] ( c ) and ( d ) we show the bifurcation diagram of the mean field approximation , eq .( [ eq : mf ] ) , for the corresponding values of according to eq .( [ eq : km ] ) .we find a qualitative agreement between these bifurcation diagrams . in figs .[ fig : sfn - bif ] ( e ) and ( f ) we show the entropy of of the mean field approximation and of the simulations on scale - free networks .we find a reasonable agreement when with with the value of for which the entropy of the mean field approximation crosses the line for the first time .thus , the entropy is a good way of comparing both dynamics when and are related according to eq .( [ eq : km ] ) . above ,both entropies are numerically similar , except where there are periodic windows in the mean field approximation , and this agreement is better for and .we studied a reasonable contrarian opinion model .the reasonableness condition forbids the presence of absorbing states . 
in the model ,this condition depends on two parameters that are held fixed .the model also depends on the connectivity which may vary among agents , and the coupling parameter .the neighborhood of each agent is defined by an adjacency matrix that can have fixed or variable connectivity ( fixed or power - law ) and a regular or stochastic character .the interesting observable is the average opinion at time .we computed the entropy of the stationary distribution of , after a transient . in the simplest case ,the neighborhood of each agent includes random sites . in this case , the mean field approximation for the time evolution of the average opinion exhibits , by changing , a period doubling bifurcation cascade towards chaos with an interspaced pitchfork bifurcation . a positive ( negative )lyapunov exponent corresponds to an entropy larger ( smaller ) than .thus , entropy is a good measure of chaos for this map , and can be also used in the simulations of the stochastic microscopic model .the bifurcation diagram of the mean field approximation as a function of shows periodic and chaotic regions , also with a pitchfork bifurcation .again , entropies larger than correspond to chaotic orbits .actual simulations on a one - dimensional lattice show incoherent local oscillations around . by rewiring at random a fraction of local connections ,the model presents a series of bifurcations induced by the small - world effect : the density exhibits a probabilistic bifurcation diagram that resembles that obtained by varying in the mean field approximation .these small - world induced bifurcations are consistent with the general trend , long - range connections induce mean field behavior .this is the first observation of this for a system exhibiting a _ chaotic _mean field behavior .indeed , the small - world effect makes the system more coherent ( with varying degree ) .we think that this observation may be useful since many theoretical studies of population behavior have been based on mean field assumptions ( differential equations ) , while actually one should rather consider agents , and therefore spatially - extended , microscopic simulations .the well - stirred assumption is often not sustainable from the experimental point of view .however , it may well be that there is a small fraction of long - range interactions ( or jumps ) , that might justify the small - world effect .the model on scale - free networks with a minimum connectivity shows a similar behavior to that of the mean field approximation of the model on a network with constant connectivity , eq .( [ eq : mf ] ) if with . in summary, we have found that , as usual , long - range rewiring leads to mean - field behavior , which can become chaotic by varying the coupling or the connectivity .similar scenarios are found in actual microscopic simulations , also by varying the long - range connectivity , and in scale free networks .this study can have applications to the investigation of collective phenomena in algorithmic trading .interesting discussions with jorge carneiro and ricardo lima are acknowledged .this work was partially supported by recognition project ue grant n 257756 and project papiit - dgapa - unam in109213 .the similarity between the bifurcations diagrams in figs .[ fig : mfb ] ( a ) and [ fig : mfb - k ] ( a ) , which comes out from the similarities of the mean - field maps when changing and ( fig .[ fig : mfeq ] ) , can be explained by using a continuous approximation for the connectivity . 
by using stirling s approximation for the binomial coefficients in eq . , for intermediate values of , we obtain .\ ] ] in this approximation , eq . can be written as \!\tau(x)\ ] ] with the continuous approximation of .this expression is just a gaussian convolution of , _i.e. _ , a smoothing of the transition probability , as can be seen by comparing fig .[ fig : tau - mf ] with fig .[ fig : mfeq ] .this smoothing has the effect of reducing the slope of the curve in a way similar to changing ( but is depends also on ) , and this explains the similarities between the bifurcation diagrams in fig .[ fig : mfb ] ( a ) and [ fig : mfb - k ] ( a ) .for instance , fig .[ fig : mfb ] ( a ) is obtained for , a value that in fig .[ fig : mfb - k ] ( a ) corresponds to a chaotic strip just after a window with six branches .a similar window can be observed also in fig .[ fig : mfb ] ( a ) by increasing from the value of fig .[ fig : mfb - k ] ( a ) . this approximation can be used also to find the `` effective '' connectivity of the model on a scale - free network .the mean field approximation for a non - homogeneous network can be written as with the probability that the opinion of an agent with connectivity at time is one , and the probability that the opinion of an agent with connectivity at time is one .the sum on the _ r.h.s _ is taken over the opinions of the agents in the neighborhood , and over their connectivities .the variables take the values zero or one , while ranges from to .the quantity is the probability that the agent with connectivity is connected to another one of connectivity and . since this network is symmetric , ( detailed balance ) .it is also non - assortative , so does not depend on and we can write . by summing the detailed balance condition over get .therefore , eq . becomes in the previous equation , is either zero or one , so that only one between and is different from zero .assuming that depends only slightly on in eq . , we approximate with and we get with . in order to close the equation we average over the probability distribution . by using the approximation of eq ., we get , \end{split}\ ] ] where . for scale - free networks the connectivity distribution is given by . then a where and is the incomplete upper gamma function extended to negative values of ( the function is single - valued and analytic for all values of and ) .the function is well approximated by , as shown in fig .[ fig : incomplete ] . therefore we can write .\ ] ] this last expression has the form of eq . , with an effective connectivity .since the argument of is , the substituted results to be a gaussian , centered around .the important values of lie between 0 and 4 , depending on the value of . in this interval ,the best approximation of ( the minimum of ) is around .therefore is definitively different from the average connectivity .( color online ) first 100 steps of the return map for the density of the model on a scale free network with , , .the first iterate is marked by the arrow .the continuous curve is the graph of eq .( [ mfk ] ) with ] as usual , the mean field predictions are only approximately followed by actual simulations . 
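the mean - field recursion referred to at the beginning of this appendix is a binomial average of the transition probability over the k neighbours ; assuming that standard form ( and the same assumed piecewise tau used in the earlier sketch ) , the following code iterates the map and computes the entropy of the binned stationary distribution of the average opinion , the indicator used in the paper to separate ordered from chaotic regimes .

```python
# assumed mean-field map: x' = sum_j C(K, j) x^j (1-x)^(K-j) tau(j / K),
# i.e. the binomial average of tau referred to in this appendix. the entropy
# of the binned orbit of the average opinion is the order/chaos indicator.
import numpy as np
from math import comb

def tau(m, J=-4.0, q=0.15, eps=0.05):
    if m <= q:
        return eps
    if m >= 1.0 - q:
        return 1.0 - eps
    return float(np.clip(0.5 + J * (m - 0.5), eps, 1.0 - eps))

def mf_map(x, K, **kw):
    return sum(comb(K, j) * x**j * (1.0 - x)**(K - j) * tau(j / K, **kw)
               for j in range(K + 1))

def orbit_entropy(K, n_transient=2000, n_keep=20000, bins=100, **kw):
    x = 0.37
    for _ in range(n_transient):
        x = mf_map(x, K, **kw)
    hist = np.zeros(bins)
    for _ in range(n_keep):
        x = mf_map(x, K, **kw)
        hist[min(int(x * bins), bins - 1)] += 1
    p = hist[hist > 0] / n_keep
    return float(-np.sum(p * np.log(p)))

for J in (-1.0, -2.5, -4.0):
    print("J = %.1f   entropy of the orbit = %.3f" % (J, orbit_entropy(K=10, J=J)))
```

scanning the coupling ( or the connectivity k ) in this way produces bifurcation and entropy diagrams of the kind discussed in the main text .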
in fig .[ fig : rtn - sfn ] we show the first 100 steps of the return map of the density for .the scale - free network is fixed , with , and the initial opinions of the agents are chosen at random with .the arrow marks the first point , that follows the mean field prediction with ( ) , as in fig .[ fig : rtn - sfn ] , but then , due to correlations , the return maps follows a different curve .this implies that nontrivial correlations establish also in scale - free networks .b. derrida , in fundamental problem in statistical mechanics , edited by h. van beijeren ( elsevier , new york , 1990 ) , vol ., pp . 276 .f. bagnoli , n. boccara , and r. rechtman , phys .e * 63 * , 046116 ( 2001 ) .l. boltzmann , _ vorlesungen ber gastheorie _ , leipzig , j. a. barth , ; partt i , 1896 , part ii , 1898 .english translation by s. g. brush , _ lectures on gas theory _, university of california press , 1964 , chapter i , sec . 6 .
|
people are often divided into conformists and contrarians , the former tending to align to the majority opinion in their neighborhood and the latter tending to disagree with that majority . in practice , however , the contrarian tendency is rarely followed when there is an overwhelming majority with a given opinion , which denotes a social norm . such reasonable contrarian behavior is often considered a mark of independent thought , and can be a useful strategy in financial markets . we present the opinion dynamics of a society of reasonable contrarian agents . the model is a cellular automaton of ising type , with antiferromagnetic pair interactions modeling contrarianism and plaquette terms modeling social norms . we introduce the entropy of the collective variable as a way of comparing deterministic ( mean - field ) and probabilistic ( simulations ) bifurcation diagrams . in the mean field approximation the model exhibits bifurcations and a chaotic phase , interpreted as coherent oscillations of the whole society . however , in a one - dimensional spatial arrangement one observes incoherent oscillations and a constant average . in simulations on watts - strogatz networks with a small - world effect the mean field behavior is recovered , with a bifurcation diagram that resembles the mean - field one , but using the rewiring probability as the control parameter . similar bifurcation diagrams are found for scale free networks , and we are able to compute an effective connectivity for such networks .
|
international linear collider ( ilc ) is a next generation linear collider whose construction plan is progressing to search for new physics .international large detector ( ild ) is one of the detector concept for ilc .particle flow algorithm is the key analysis method used in ilc . in pfa ,particles in the jets are separated and the optimal detector is used to measure individual particles .the momenta of charged particles are measured by the tracker , energies of photons are measured by the electromagnetic calorimeter , and energies of neutral hadrons are measured by the hadron calorimeter . to improve the performance of pfa , it is necessary to distinguish the particles in the jet .figure [ fig : bbbb ] shows ecal of ild .the electromagnetic calorimeter ( ecal ) of ild , which is a sampling calorimeter composed of tungsten absorber layers and segmented sensor layers . in the ecal ,it is necessary to separate the particles in an electromagnetic shower one by one in order to satisfy the pfa requirement , so we should improve the position resolution of the detection layer of ecal as much as possible . for this purpose ,highly segmented silicon pad sensor are employed as the reference design of ild ecal .we are investigating position sensitive silicon detector ( psd ) for an alternative design to further improve the position resolution of photons .figure [ fig : cross ] shows the schematic of the cross section of silicon sensors . electrons and holes are generated along the path of a charged particle . in conventional silicon pads, the charge goes through p pad to electrodes . in psds ,the charge reaches a p surface at first , then running through the resistive p surface to the electrodes .this devides the charges by resistive division , so we can calculate the incident position from the function of the charge recorded at each electrode .this mechanism gives higher position resolution without further dividing electrodes .psd sensors are expected to enhance the function of ild ecal by improving position resolution of photons .psds can be used at the innermost layers of ecal where hit density is much smaller than the shower maximum region .we can employ larger cell sizes for those layers to avoid increasing number of readout channels , considering psd needs four electrodes on one cell .we expect less than 1 mm position resolution with psd in 1 cells , which is significantly less than the resolution with conventional pads of mm cells .there are advantages of having better position resolution in ecal .first , the reconstruction of from two photons can be improved .the improved position resolution can be used for the kinematic fit of reconstruction , which leads to improve the jet energy resolution and reconstruction of heavy quarks ( / tagging ) .psds can also be used for strip tracking detectors in order to reduce ghost hits by obtaining hit positions roughly along the strips .effects on physics performance should be confirmed with monte - carlo simulation study .the size of psd sensors is 7.0 times 7.0 mm .thickness is 320 m .no guard rings are implemented on the edges of the sensors .figure [ fig : psdsensor ] shows a psd sensor made by hamamatsu photonics .this sensor has electrodes at the four corners .we have two types of psd sensors .the difference between the two types can be seen by a microscope .figure [ fig : mesh ] and [ fig : nonmesh ] are the magnified views of the black areas of fig.3 for the two types of sensors .this mesh increases the resistivity of the p layer . 
with the larger resistivity, it is expected to reduce the noise and the position distortion .flat surface of the p layer is seen on figure [ fig : nonmesh ] , in contrast to figure [ fig : mesh ] , which shows meshed p surface .0.4 0.4 figure [ fig : abcdddd ] and [ fig : abcdd ] are the result of capacitance measurement characteristics of psd with mesh and psd without mesh , respectively .they are fully depleted at about 60 v and 50 v , respectively .0.4 0.4 figure [ fig : sensorbox ] is a printed circuit board ( pcb ) and a holder for four sensors .this is called sensor box " below . in this picture twoof the four sensor places are filled with a meshed and a non - meshed sensor .the box was fixed in a two axis automatic stage in a dark chamber as shown in figure [ fig : xystage ] and reverse bias of 100 v was applied to the sensors .[ fig : abcd ]laser photons were injected to the psd sensors to test the position reconstruction .the specifications of the laser are shown in table [ tab : laser ] ..the specifications of the laser and optics [ cols="^,^",options="header " , ] the pcb on the sensors has cut on the main part of psd sensors to pass the laser photons through it .since the photon energy is slightly higher than the band gap energy of silicon , one optical photon creates one electron - hole pair in the silicon , and imitate signal at the well defined position .the movable stage below the sensor box was used to control the injection position .figure [ fig : ffff ] shows a schematic diagram of data acquisition system ( daq ) .signal from each electrode was amplified by a preamplifier and a shaper , and delivered to a peak - hold adc module on a camac system .the gate signal of the adc was applied from the laser injection trigger .we accumulate 8000 pulses at one point , and average the adc counts over the pulses . to calculate the position from signals , we use the following formulae , where and are reconstructed position along and axis , and ch5 - 8 stand for adc counts after the pedestal subtraction , as shown in fig . [fig : daxis ] .figure [ fig : realmesh ] shows real injection positions , obtained from the positions of the two - axis stage .the red points are the point with sufficiently strong signals of 1000 or more adc counts on all channels .a black cross mark indicates that the adc output value of one or more channels is smaller than 1000 so that the signal is too weak to be reconstructed .this plot should reflect the shape of the open cut of the sensor box if the sensor is fully active .figure [ fig : recmesh ] shows reconstructed positions by eq.1 . in this figure , only points with sufficient signal strength are plotted .distortion from the original grid , which is expected behavior of psds , is seen .0.4 0.4 figure [ fig : meshcolx ] and [ fig : meshcoly ] are the correlation between the actual incident position and the reconstructed position of the x and y coordinates , respectively. a good correlation between the two positions with some non - uniformity is obtained .0.4 0.4 the same measurement was performed on the non - meshed psd , shown in figure [ fig : realnonmesh ] .shapes like open cut of sensor box are seen .however , strong signals can not be obtained at two places surrounded by an ellipse , and the reason is under investigation . 0.4 0.4 figure [ fig : recmesh ] shows reconstructed positions . 
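for reference , the corner - charge reconstruction of eq . ( 1 ) is a standard resistive charge - division formula ; one common four - corner variant is sketched below . the assignment of ch5 - ch8 to the four corners , the pedestal values and the half - length l are assumptions for illustration and should be replaced by the actual calibration of the setup .

```python
# four-corner resistive charge division: reconstruct the hit position from the
# pedestal-subtracted adc counts of the four corner electrodes. the mapping of
# ch5..ch8 to corners and the half-length L are assumed, not from the text.
def psd_position(ch5, ch6, ch7, ch8, pedestals=(0.0, 0.0, 0.0, 0.0), L=3.5):
    """return (x, y) in mm for a 7 x 7 mm sensor (half-length L = 3.5 mm).
    assumed corner layout:  ch5 = (-L, +L), ch6 = (+L, +L),
                            ch7 = (+L, -L), ch8 = (-L, -L)."""
    qa, qb, qc, qd = (max(c - p, 0.0)
                      for c, p in zip((ch5, ch6, ch7, ch8), pedestals))
    total = qa + qb + qc + qd
    if total <= 0.0:
        raise ValueError("no signal above pedestal")
    x = L * ((qb + qc) - (qa + qd)) / total   # right corners minus left corners
    y = L * ((qa + qb) - (qc + qd)) / total   # top corners minus bottom corners
    return x, y

# example with illustrative adc counts
print(psd_position(1200.0, 1800.0, 1500.0, 1000.0, pedestals=(100.0,) * 4))
```

the linear formula is only an approximation for a resistive sheet , which is one way to understand the distortion of the reconstructed grid seen in fig . [ fig : recmesh ] .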
compared with the meshed psd ,the reconstructed positions are concentrated in the almost same value .this shows that the non - meshed psds are not functional to obtain the incident positions .the high resistivity of p layer should be essential for this type of psds .as shown in figure [ fig : rstage ] , a rubber sheet of 1 mm thickness was put on the sensor box , and was put on the rubber sheet .that was done with a conventional pixel type silicon sensor and psd with mesh , respectively , and we tried to capture the signal with the radiation source .figure [ fig : gggg ] shows a schematic diagram of daq to capture the signal with the radiation source .adc was self - triggered .signals are inverted in the shaper amplifier to make the trigger . for the psd sensor , measurementwas carried out for 2 hours in a state with and without a radiation source . during this time , the sum of the signals from four electrodes after passing shaper amplifier from the four electrodes exceeded 400 mv was treated as an event . in the pixel type silicon sensor , 4 pixels out of 9 pixels were used in a state with a radiation source .when the signal of 1 pixel exceeded 100 mv , it was treated as an event . at this time , the output values of the other three pixels are recorded as a pedestal .figure [ fig : baby44 ] is the distribution of the adc output in one pixel of a conventional pixel type silicon sensor .it seems that the signal from the beta source on the right side of the plot and the pedestal on the left seem to be separated well , but it seems to be seen separately by the trigger .figure [ fig : aaaa ] is the distribution of the sum of the adc outputs from the four electrodes of psd with and without radiation source .this shows that the signal comes more frequently with radiation source than without , however the difference on the distribution is not clear , mainly due to a noise coherent to all channels .we plan to reduce this system noise in the future and try again to measure the radiation source .psd is a silicon device which can derive the incident position of a particle by resistive devision of the charge to electrodes at the p surface .it is expected psds at the innermost layers of ecal to improve the position resolution of photons .our first sample shows reasonable reconstruction of incident position of laser photons with some distortion .meshed p surface gives better result .studies on the noise is ongoing .the silicon pads measured in this study have no gain .silicon sensors with avalanche gain are recently developed and is expected to obtain position resolution less than 100 .we plan to reduce electronic noise for psd measurement , create psd sensors with avalanche gain , and confirm effects on physics performance with monte - carlo simulation study .we appreciate that j - parc muon g-2/edm collaboration supported in production of the psd sensors , and hamamatsu photonics suggested meshed psd sensor .a. banu , y .li , m. mccleskey , m. bullough , s. walsh , c.a .gagliardi , l. trache , r.e .tribble , c. wilburn , performance evaluation of position - sensitive silicon detectors with four - corner readout .instrum . and meth a593(3 ) , ( 2008 ) 399 - 406
|
we are developing position sensitive silicon detectors ( psd ) which have an electrode at each of four corners so that the incident position of a charged particle can be obtained using signals from the electrodes . it is expected that the position resolution of the electromagnetic calorimeter ( ecal ) of the ild detector will be improved by introducing psd into the detection layers . in this study , we irradiated collimated laser beams onto the surface of the psd , varying the incident position . we found that the incident position can be well reconstructed from the signals if high resistance is implemented in the p+ layer . we also tried to observe the signal of particles by placing a radiation source on the psd sensor .
|
compressive sensing ( cs ) is proposed as a novel technique in the field of signal processing . based on the sparsity of signals in some typical domains ,this method takes global measurements instead of samples in signal acquisition .the theory of cs confirms that the measurements required for recovery are far fewer than conventional signal acquisition technique . with the advantages of sampling below nyquist rate and little loss in reconstruction quality, cs can be widely applied in the regions such as source coding , medical imaging , pattern recognition , and wireless communication .suppose that an -dimensional vector is a sparse signal with sparsity , which means that only entries of are nonzero among all elements .an measurement matrix with is applied to take global measurements of .consequently an vector is obtained and the information of -dimensional unknown signal is reduced to the -dimensional measurement vector . exploiting the sparse property of , the original signal can be reconstructed through and .the procedure of cs mainly includes two stages : signal measurement and signal reconstruction .the key issues are the design of measurement matrix and the algorithm of sparse signal reconstruction , respectively . on the signal reconstruction of cs ,a key problem is to derive the sparse solution , i.e. , the solution to the under - determined linear equation which has the minimal norm , however , ( [ l0 ] ) is a non - deterministic polynomial ( np ) hard problem .it is demonstrated that under certain conditions , ( [ l0 ] ) has the same solution as the relaxed problem ( [ l1 ] ) is a convex problem and can be solved through convex optimization . in non - ideal scenarios ,the measurement vector is inaccurate with noise perturbation and ( [ yax ] ) never satisfies exactly .consequently , ( [ l1 ] ) is modified to where is a positive number representing the energy of noise .many algorithms have been proposed to recover the sparse signal from and .these algorithms can be classified into several main categories , including greedy pursuit , optimization algorithms , iterative thresholding algorithms and other algorithms .the greedy pursuit algorithms always choose the locally optimal approximation to the sparse solution iteratively in each step .the computation complexity is low but more measurements are needed for reconstruction .typical algorithms include matching pursuit ( mp ) , orthogonal matching pursuit ( omp ) , stage - wise omp ( stomp ) , regularized omp ( romp ) , compressive sampling mp ( cosamp ) , subspace pursuit ( sp ) , and iterative hard thresholding ( iht ) .optimization algorithms solve convex or non - convex problems and can be further divided into convex optimization and non - convex optimization .convex optimization methods have the properties of fewer measurements demanded , higher computation complexity , and more theoretical support in mathematics .convex optimization algorithms include primal - dual interior method for convex objectives ( pdco ) , least square qr ( lsqr ) , large - scale -regularized least squares ( - ) , least angle regression ( lars ) , gradient projection for sparse reconstruction ( gpsr ) , sparse reconstruction by separable approximation ( sparsa ) , spectral projected - gradient ( spgl1 ) , nesterov algorithm ( nesta ) and constrained split augmented lagrangian shrinkage algorithm ( c - salsa ) .non - convex optimization methods solve the problem of optimization by minimizing norm with , which is not convex .this category of algorithms 
demands fewer measurements than convex optimization methods .however , the non - convex property may lead to converging towards the local extremum which is not the desired solution . moreover , these methods have higher computation complexity .typical non - convex optimization methods are focal underdetermined system solver ( focuss ) , iteratively reweighted least square ( irls ) and analysis - based sparsity ( l0abs ) .a new kind of method , zero - point attracting projection ( zap ) , has been recently proposed to solve ( [ l0 ] ) or ( [ l1 ] ) .the projection of the zero - point attracting term is utilized to update the iterative solution in the solution space .compared with the other algorithms , zap has advantages of faster convergence rate , fewer measurements demanded , and a better performance against noise .however , zap is proposed with heuristic and experimental methodology and lacks a strict proof of convergence .though abundant computer simulations verify its performance , it is still essential to prove its convergence , provide the specific working condition , and analyze performances theoretically including the reconstruction precision , the convergence rate and the noise resistance .this paper aims to provide a comprehensive analysis for zap . specifically , it studies -zap , which uses the gradient of norm as the zero - point attracting term .-zap is non - convex and its convergence will be addressed in future work .the main contribution of this work is to prove the convergence of -zap in non - noisy scenario .our idea is summarized as follows .firstly , the distance between the iterative solution of -zap and the original sparse signal is defined to evaluate the convergence .then we prove that such distance will decrease in each iteration , as long as it is larger than a constant proportional to the step - size .therefore , it is proved that -zap is convergent to the original sparse signal under non - noisy case , which provides a theoretical foundation for the algorithm .lemma 1 is the crucial contribution of this work , which reveals the relationship between norm and norm in the solution space .another contribution is about the signal reconstruction with measurement noise .it is demonstrated that -zap can approach the original sparse signal to some extent under inaccurate measurements . in the noisy case , the recovery precision is linear with not only the step - size but also the energy of noise .other contributions include the discussions on some related topics .the convergence rate is estimated as an upper bound of iteration number .the constraint of initial value and its influence on convergence are provided .the convergence of -zap for -compressible signal is also discussed .experiment results are provided to verify the analysis . at the time of revising this paper , we are noticed of a similar algorithm called projected subgradient method , which leads to some related researches . 
though obtained from different frameworks, -zap shares the same recursion with the other .however , the two algorithms are not exactly the same .the attracting term of zap is not restricted to the subgradient of a objective function , and can be used to solve either a convex problem or a non - convex one , while only the subgradient of a convex function is allowed in the mentioned method .furthermore , the available analysis of the projected subgradient method studies the convergence of the objective function , while this work focuses on the properties of the iterative sequence , as derived from the significant lemma 1 .the theoretical analysis in this work may contribute to promoting the projected subgradient method .the remainder of this paper is organized as follows . in sectionii , some preliminary knowledge is introduced to prepare for the main theorems . the main contribution in non - noisy scenariois presented as theorem 4 in section iii , which proves the convergence of -zap .some related topics about theorem 4 are also discussed in section iii .section iv shows another main theorem in noisy scenario , and some discussions are also brought out .experiment results are shown in section v. the whole paper is concluded in section vi .in this subsection , restricted isometry property ( rip ) and coherence are introduced and then some theorems on ( [ l1 ] ) and ( [ l1_eps ] ) are presented , which will be helpful to the following content . is the submatrix by extracting the columns of matrix corresponding to the indices in set .the rip constant is defined as the smallest nonnegative quantity such that holds for all subsets with and vectors . if the rip constant of matrix satisfies the condition where is the sparsity of , then the solution of ( [ l1 ] ) is unique and identical to the original signal . if the rip constant of matrix satisfies the condition then the solution of ( [ l1_eps ] ) obeys where is the original signal of sparsity and is a positive constant related to .rip determines the property of the measurement matrix .recent results on rip can be found in . the coherence of an matrix is defined as where is the column of and . if the sparsity of and the coherence of matrix satisfy the condition then the solution of ( [ l1_eps ] ) is unique .theorem 1 provides the sufficient condition on exact recovery of the original signal without any perturbation .it is also a loose sufficient condition of the unique solution of ( [ l1 ] ) .theorem 2 indicates that under the condition ( [ eq47 ] ) , the solution of ( [ l1_eps ] ) is not too far from the original signal , with a deviation proportional to the energy of measurement noise .theorem 3 provides a sufficient condition of the uniqueness of the solution of ( [ l1_eps ] ) . in zap algorithm ,the zero - point attracting term is used to update the iterative solution and then the updated iterative solution is projected to the solution space .the procedures of zap can be summarized as follows .input : .initialization : and .iteration : while stop condition is not satisfied \1 .zero - point attraction : \2 .projection : \3 .update the index : end while in the initialization and ( [ eq22 ] ) , denotes the pseudo - inverse of . 
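The ZAP procedure listed above can be sketched as a short loop. The following is a minimal Python/NumPy sketch, not the authors' implementation: the zero-point attracting term is supplied as a function `attract` (for example a sub-gradient of a sparsity penalty, as specialised in the next paragraphs), and the fixed iteration budget used as the stop condition is an illustrative assumption.

```python
import numpy as np

def zap(A, y, attract, gamma=1e-3, n_iter=5000):
    """Generic ZAP loop: zero-point attraction followed by projection back
    onto the solution space {x : A x = y}.

    attract(x) returns the zero-point attracting term (e.g. a sub-gradient
    of a sparse penalty); gamma is the step-size.
    """
    A_pinv = np.linalg.pinv(A)        # pseudo-inverse of A
    x = A_pinv @ y                    # initialisation: least-squares solution
    for _ in range(n_iter):           # stop condition: fixed budget (assumption)
        x = x - gamma * attract(x)            # 1. zero-point attraction
        x = x - A_pinv @ (A @ x - y)          # 2. projection onto the solution space
    return x
```

For instance, `zap(A, y, np.sign)` reproduces the l1 variant specialised below.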
in ( [ eq11 ] ) , is the zero - point attracting term , where is a function representing the sparse penalty of vector .positive parameter denotes the step - size in the step of zero - point attraction .zap was firstly proposed in with a specification of -norm constraint , termed -zap , in which the approximate norm is utilized as the function f(x ) .-zap belongs to the non - convex optimization methods and has an outstanding performance beyond conventional algorithms . in ,the penalty function is and its gradient is approximated as ^{\rm t}\ ] ] and the piecewise and non - convex zero - point attracting term further increases the difficulty to theoretically analyze the convergence of -zap . as another variation of zap ,-zap is analyzed in this work .the function is the norm of in the zero - point attracting term .since it is non - differentiable , the gradient of can be replaced by its sub - gradient .considering that the gradient of is when none of the components of are zero , ( [ eq11 ] ) can be specified as where the gradient is replaced by one of the sub - gradients .the sign function has the same size with and each entry of is the scalar sign function of the corresponding entry of .experiments show that though its performance is better than conventional algorithms , -zap behaves not as good as norm constraint variation .however , as a convex optimization method , -zap has advantages beyond non - convex methods , as mentioned in introduction .-zap is considered in this paper as the first attempt to analyze zap in theory . the steps ( [ eq62 ] ) and ( [ eq22 ] ) of -zap can be combined into the following recursion with the projection matrix notice that following ( [ eq5 ] ) , ( [ eq4 ] ) and the initialization , the sequence has the property which means all iterative solutions fall in the solution space .numerical simulations demonstrate that the sparse solution of under - determined linear equation can be calculated by -zap .in fact , the sequence calculated through ( [ eq5 ] ) is not strictly convergent . will fall into the neighborhood of after finite iterations , with radius proportional to step - size . with the increasing of iterations , approaches step by step at first . however , it vibrates in the neighborhood of when is close enough to . if the step - size decreases , the radius of neighborhood also decreases .consequently , one can get the approximation to the sparse solution at any precision by choosing appropriate step - size . in this work the convergence of -zapis proved .the main results are the following theorems in section iii and iv , corresponding to non - noisy scenario and noisy scenario , respectively .the main contribution is included in this section .a lemma is proposed in subsection a for preparing the main theorem in subsection b. then the condition of exact signal recovery by -zap is given in subsection c. several constants and variables in the proof of convergence are discussed in subsections d and e. in subsection f , an estimation on the convergence rate is given .the initial value of -zap is discussed in subsection g. [ lemma2 ] suppose that satisfies , with given and . is the unique solution of ( [ l1 ] ) .if is bounded by a positive constant , then there exists a uniform positive constant depending on and , such that holds for arbitrary satisfying .the outline of the proof is presented here while the details are included in appendix a. 
by defining equation ( [ eqinlemma2 ] ) is equivalent to the following inequality define the index set , then there exists a positive constant such that , when satisfies the above proposition means that and share the same sign for the entries indexed by .define sets and as consequently , for the separate cases of and , it is proved that has a positive lower bound , respectively . combining the two cases , lemma [ lemma2 ]is proved .[ theorem4 ] suppose that is the unique solution of ( [ l1 ] ) . and satisfy the recursion ( [ eq5 ] ) and is energy constrained by , where is a positive constant .then the iteration obeys when where are two constants with a parameter , and denotes the lower bound specified in lemma 1 .for a given under - determined constraint ( [ yax ] ) and the unique sparsest solution of ( [ l1 ] ) , theorem [ theorem4 ] demonstrates the convergence property and provides the convergence conditions of -zap . as long as the iterative result is far away from the sparse solution , the new result in next iteration affirmatively becomes closer than its predecessor .furthermore , the decrease in distance is a constant , which means will definitely get into the -neighborhood of in finite iterations . according to the definition of , can approach the sparse solution to any extent if the step - size is chosen small enough .therefore , -zap is convergent , i.e. , the iterative result can get close to the sparse solution at any precision . here is a tradeoff parameter which balances the estimated precision and convergence rate .the proof of theorem [ theorem4 ] goes in appendix b. using theorem 4 and conditions added , the convergence of -zap can be deduced , as the following corollary . under the condition ( [ eq23 ] ), -zap can recover the original signal at any precision if the step - size can be chosen small enough .firstly , it will be demonstrated that the condition of energy constraint in theorem 4 can always be satisfied .in fact , can be chosen greater than .if the energy constraint holds for index , the conditions of theorem 4 are satisfied and then holds naturally according to ( [ eq9 ] ) .consequently , it is readily accepted that the condition of energy constraint is satisfied for each index , with the utilization of theorem 4 in each step . combining the explanation after theorem 4 ,it is clear that the -zap is convergent to the solution of ( [ l1 ] ) at any precision as long as the step - size is chosen small enough . according to theorem 1 ,it is known that under the condition of ( [ eq23 ] ) , the solution of ( [ l1 ] ) is unique and identical to the original sparse signal. then corollary 1 is proved . according to theorem 4 and corollary 1, the sequence will surely get into the -neighborhood of .in fact , because of several inequalities used in the proof , is merely a theoretical radius with conservative estimation .the actual convergence may get into a even smaller neighborhood .the details will be discussed in subsection f. involved in ( [ eq25 ] ) of theorem 4 , constant is essential to the convergence of -zap .in fact , the key contribution of this work is to indicate the existence of this constant .however , one can merely obtain the existence of from the proof of lemma 1 , other than its exact value . 
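Before returning to the constant in more detail, a quick numerical illustration of the behaviour stated in Theorem 4: the sketch below runs the combined l1-ZAP recursion x_{n+1} = x_n - gamma * P * sign(x_n) and tracks the distance to the sparse solution, which should decrease until the iterate enters a neighbourhood whose radius scales with the step-size, and then oscillate. The problem sizes, step-sizes and iteration budget are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, k = 50, 128, 5                       # assumed problem sizes
A = rng.standard_normal((M, N)) / np.sqrt(M)
x_star = np.zeros(N)
x_star[rng.choice(N, k, replace=False)] = rng.standard_normal(k)
y = A @ x_star

A_pinv = np.linalg.pinv(A)
P = np.eye(N) - A_pinv @ A                 # projector onto the null space of A

for gamma in (1e-2, 1e-3):
    x = A_pinv @ y                         # least-squares start, inside the solution space
    distances = []
    for _ in range(5000):
        x = x - gamma * (P @ np.sign(x))   # combined l1-ZAP recursion
        distances.append(np.linalg.norm(x - x_star))
    # the steady-state distance is expected to shrink roughly in proportion to gamma
    print(f"gamma={gamma:g}: final ||x_n - x*|| = {distances[-1]:.4f}, "
          f"minimum = {min(distances):.4f}")
```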
because in the definition of ( [ eq6 ] ) is unknown ,it is difficult to give the exact value or formula of , even though it is actually determined by , , and .whereas , an upper bound is given with some information about , which leads to theorem 5 .according to ( [ eq25 ] ) , constant is inversely proportional to . with a small , the radius of convergent neighborhood is large and the convergence precision is worse .the maximum of is also involved in the definition of .according to the range of sign function , i.e. , there are choices of vector altogether . similar to , the extremum of determined by .the relationship between and extremum of is presented in theorem 5 .if is defined by ( [ eq6 ] ) , one has the following inequality the proof of theorem 5 is postponed to appendix c. according to the theorem , the minimum of restricts the value of , as leads to worse precision of -zap .hence , the measurement matrix should be chosen with relatively large to improve the performance of the mentioned algorithm .the mathematical meaning of is the projection of to the solution space of . for a particular instance , if there exists a sign vector , to whom the solution space is almost orthogonal , then the minimum of is rather small and the precision of convergence is bad .an additional explanation is that the solution space can not be strictly orthogonal to any sign vector , or else it will lead to a contradiction with the condition of ( [ eq23 ] ) , i.e. , the uniqueness of .a parameter is involved in theorem 4 .we will discuss the choice of and some related problems .first of all , it needs to be stressed that is just a parameter for the bound sequence in theoretical analysis , other than a parameter for actual iterations . according to the proof in appendix b , as long as is chosen satisfying the conditions of and theorem 4 holds and the distance between and decreases in the next iteration .however , considering the expression of ( [ eq26 ] ) , the decrease of by each iteration is different for various .there are two strategies to choose the parameter , a constant or a variable one . when is chosen as a constant , theorem 4 indicates that as long as the distance between and is larger than , the next iteration leads to a decrease at least a constant step of . when the parameter is variable , the decrease step of is also variable .the expressions show that and increase as the increase of .notice that must obey where the right inequality is necessary to satisfy ( [ eq99 ] ) , which ensures the convergence of the sequence . during the very beginning of recursions, is far from .consequently , satisfying ( [ eq54 ] ) can be larger , and lead to a faster convergence . 
however , as gets closer to by iterations , satisfying ( [ eq54 ] ) is definitely just a little larger than one .to be emphasized , the actual convergence of iterations can not speed up by choosing the parameter .the value of only impacts the sequence of which is a sequence bounding the actual sequence in the proof of convergence .theorem 4 tells little about the convergence rate .considering several inequalities utilized in the proof , the actual convergence is faster than that of the sequence in ( [ eq55 ] ) .it means that a lower bound of the convergence rate can be derived in theory .corresponding to the variable selection of , a sequence is put forward with properties where combining ( [ eq46 ] ) and ( [ eq45 ] ) , the iteration of obeys the distance between and with variable decreases the most for each step .therefore , has a faster convergence rate compared with sequences satisfying ( [ eq55 ] ) with other choices of .however , as a theoretical result , it still converges more slowly than the actual sequence .derived from lemma 2 , which gives a rough estimation , theorem 6 provides a much better lower bound of the convergence rate .supposing is the iterative sequence by -zap , it will take at most steps for to get into the -neighborhood from the -neighborhood of , where and must obey supposing is the iterative sequence by -zap , it will get into the -neighborhood of within at most steps . here , and have the same definitions with those in theorem 4 , and must obey the proofs of lemma 2 and theorem 6 are postponed to appendix d and e , respectively . in -zap ,the initial value is the least square solution of the under - determined equation , from theorem 4 and corollary 1 , one knows that if the initial value obeys , the iterative sequence is convergent .therefore , the restriction to the initial value is to be in the solution space , other than to be the least square solution .however , it is still a convenient way to initialize using the least square solution .the convergence of -zap in noisy scenario is analyzed in this section . the main theorem in noisy scenariois given in subsection a. in subsection b , the problem of signal recovery from inaccurate measurements is discussed .subsection c shows different choices of initial value and the impact on the quality of reconstruction .the reconstruction of -compressible signal by -zap is discussed in subsection d. considering the perturbation on measurement vector , theorem 7 is presented to analyze the convergence of -zap . similar to lemma 1 , lemma 3 is proposed at first corresponding to the noisy case .suppose that satisfies , with given and . is the unique solution of ( [ l1_eps ] ) . is bounded by a positive number .then there exists a positive number depending on , , , and , such that with the definition of ( [ eq1 ] ) , ( [ eqinlemma4 ] ) is equivalent to the following inequality following the proof of lemma 1 , it can be readily proved that lemma 3 is correct .notice that here is not in the null - space of , but a unit vector satisfying .the remaining procedures are similar .the details of the proof are omitted for short .supposing that is the unique solution of ( [ l1_eps ] ) , sequence satisfies the iterative formula ( [ eq5 ] ) with conditions and where is a positive constant .then the iteration obeys when where , and are defined by ( [ eq25 ] ) and ( [ eq26 ] ) , respectively . here is a parameter , is the positive lower bound in lemma 3 , and is the largest eigenvalue of matrix .the proof of theorem 7 goes in appendix f. 
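As a small numerical companion to the noisy-case result just stated, the sketch below runs the same recursion on perturbed measurements y = A x + n and compares the final distance to the sparse signal across noise levels and step-sizes. The noise model (i.i.d. Gaussian rescaled to a target energy) and all parameter values are illustrative assumptions, not the settings used in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
M, N, k = 60, 160, 6
A = rng.standard_normal((M, N)) / np.sqrt(M)
x_star = np.zeros(N)
x_star[rng.choice(N, k, replace=False)] = rng.standard_normal(k)

A_pinv = np.linalg.pinv(A)
P = np.eye(N) - A_pinv @ A

for eps in (0.0, 0.01, 0.1):               # assumed noise energies ||n||_2
    noise = rng.standard_normal(M)
    noise *= eps / np.linalg.norm(noise) if eps > 0 else 0.0
    y = A @ x_star + noise
    for gamma in (1e-2, 1e-3):
        x = A_pinv @ y                     # least-squares start in the perturbed solution space
        for _ in range(5000):
            x = x - gamma * (P @ np.sign(x))
        print(f"eps={eps:<5} gamma={gamma:g}: ||x_n - x*|| = "
              f"{np.linalg.norm(x - x_star):.4f}")
```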
theorem 7 indicates that under measurement perturbation with energy less than , the iterative sequence will get into the -neighborhood of . for the fixed original signal and measurement matrix , the precision of approaching depends on both the step - size and the noise energy bound .it means that can not get close to the solution at any precision by choosing small step - size , because the noise energy also controls a deviation component , .corollary 2 indicates the property of signal reconstruction with inaccurate measurements .suppose the original signal is , and the conditions of ( [ eq47 ] ) and ( [ eq49 ] ) are satisfied .there exist real numbers , such that -zap can be convergent to a -neighborhood of , i.e. , -zap can approach the original signal to some extent under inaccurate measurements .referring to the proof of corollary 1 , it can be readily accepted that the condition ( [ eq53 ] ) is always satisfied for any index .it is known from theorem 3 that ( [ l1_eps ] ) has a unique solution under the condition ( [ eq49 ] ) .consequently , according to theorem 7 , the sequence finally gets into the neighborhood of with the radius .theorem 2 shows that under the condition of ( [ eq47 ] ) , the solution of ( [ l1_eps ] ) is not far from the original signal , with the inequality combining theorem 7 , ( [ eq61 ] ) , and the triangle inequality , one sees that the sequence gets into the neighborhood of with the radius .denote and the conclusion of corollary 2 is drawn . among the assumptions of theorem 7 ,a condition of ( [ eq52 ] ) is assumed to be satisfied .considering the recursion ( [ eq5 ] ) , one readily sees that under the simple condition of where is not necessarily the least square solution of , it will suffice to get ( [ eq52 ] ) , which satisfies the condition of theorem 7 .if the initial value satisfies ( [ eq19 ] ) , by defining , one has inequality ( [ eq34 ] ) provides the upper bound of and it is used to prove theorem 7 .if the iterations begin with the least square solution of the perturbed measurement , it obeys and according to ( [ eq321 ] ) one has which means that ( [ eq34 ] ) can be modified to hence , the parameter can be reduced to a half throughout the proof of theorem 7 .therefore , if the initial value is chosen as the least square solution , the neighborhood of convergence will be smaller , i.e. , a better estimation can be reached .the original signal is not always absolutely sparse .the reconstruction of compressible signal is discussed here .signal is -compressible with magnitude if the components of decay as where is the largest absolute value among the components of , and is a number between and .supposing that is a best -sparse approximation to , the following inequalities hold , where and . for a -compressible signal , one has by proposition 3.5 in , the norm of can be estimated as combining ( [ eq101 ] ) , ( [ eq102 ] ) and ( [ eq103 ] ) , one has according to theorem 7 and corollary 2 , the reconstruction property of -compressible signal by -zap can be deduced as follows .supposing is -compressible signal and the conditions of ( [ eq47 ] ) and ( [ eq49 ] ) are satisfied , then the -zap sequence can approach with a deviation where and are the same with those in corollary 2 , and is the energy bound of observation noise . 
the non - noisy scenario for compressible signalcan be naturally obtained by setting to zero in corollary 3 .several experiments are conducted in this section .the performance of -zap and -zap are shown in subsection a , compared with several other algorithms for sparse recovery .the deviations of actual -zap sequence and bound sequences in the proof are illustrated in subsection b. in subsection c , experiment results demonstrate the impacts of the step - size and the noise level on the signal reconstruction via -zap .the performances of -zap and -zap are simulated , compared with other sparse recovery algorithms . in the experiments ,the matrix is generated with the entries independent and following a normal distribution with mean zero and variance .the support set of original signal is chosen randomly following uniform distribution .the nonzero entries follow a normal distribution with mean zero .finally the energy of the original signal is normalized . for parameters , ,the probability of exact reconstruction for various number of measurements is shown as fig .if the reconstruction snr is higher than a threshold of , the trial is regarded as exact reconstruction .the number of varies from to and each point in the experiment is repeated 200 times .the step - size of -zap is .the parameters of other algorithms are selected as recommended by respective authors .it can be seen that for any fixed from to , -zap and -zap have higher probability of reconstruction than other algorithms , which means zap algorithms demand fewer measurements in signal reconstructions .the experiment also indicates that the performance of -zap is better than -zap , as discussed in section ii ..,width=384 ] for parameters , , fig .[ fig2 ] illustrates the probability of exact reconstruction for various sparsity from to .all the algorithms are repeated 200 times for each value .the parameters of algorithms are the same as those in the previous experiment .-zap has the highest probability for fixed sparsity and -zap is the second beyond other conventional algorithms .the experiment indicates that zap algorithms can recover less sparse signals compared with other algorithms ..,width=384 ] the snr performance is illustrated in fig .[ fig3 ] with the measurement snr varying from to and 200 times repeated for each value .the noise is zero - mean white gaussian and added to the observed vector .the parameters are selected as , and .the parameters of algorithms have the same choice with previous experiments .the reconstruction snr and measurement snr are the signal - to - noise ratios of reconstructed signal and measurement signal , respectively .-zap outperforms other algorithms , while -zap is almost the same as others .the experiment indicates that -zap has a better performance against noise and -zap does not have visible defects compared with other algorithms ..,width=384 ] the experiments above demonstrate that -zap has a better performance compared with conventional algorithms .-zap demands fewer measurements and can recover signals with higher sparsity , with similar property against noise .the performance of -zap is better than -zap . according to theorem 4 ,the deviation from the actual iterative sequence to the sparse solution is bounded by the sequence satisfying ( [ eq55 ] ) . in theorem 4 ,a sequence with parameter is utilized to bound the actual sequence and proved to be convergent . 
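For reference, a sketch of the exact-reconstruction experiment described in subsection A above: entries of the measurement matrix drawn i.i.d. from N(0, 1/M), a uniformly random support with Gaussian nonzero entries and normalised signal energy, recovery by the l1-ZAP recursion, and a trial counted as exact when the reconstruction SNR exceeds a threshold. The threshold value, trial count, step-size and iteration budget below are placeholders (the paper's own settings are not all reproduced here) and may need tuning.

```python
import numpy as np

def run_trial(M, N, k, gamma=5e-4, n_iter=8000, snr_threshold_db=40.0, seed=0):
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((M, N)) * np.sqrt(1.0 / M)   # entries ~ N(0, 1/M)
    x = np.zeros(N)
    x[rng.choice(N, k, replace=False)] = rng.standard_normal(k)
    x /= np.linalg.norm(x)                               # normalise the signal energy
    y = A @ x
    A_pinv = np.linalg.pinv(A)
    P = np.eye(N) - A_pinv @ A
    x_hat = A_pinv @ y
    for _ in range(n_iter):
        x_hat = x_hat - gamma * (P @ np.sign(x_hat))     # l1-ZAP recursion
    snr_db = 10 * np.log10(np.sum(x**2) / np.sum((x - x_hat)**2))
    return snr_db > snr_threshold_db

trials = 100
successes = sum(run_trial(M=60, N=256, k=8, seed=s) for s in range(trials))
print("empirical probability of exact reconstruction:", successes / trials)
```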
as discussed in iii - e and f , the sequence defined in ( [ eq46 ] ) and ( [ eq45 ] ) withadaptive approaches the sparse solution faster than any sequence with constant .the reconstruction snr curves of the actual sequence and three bound sequences with different choices of are demonstrated in fig .[ fig4 ] . as can be seen in the figure , the bound sequence with adaptive is the best estimation among different choices . for a constant ,the larger one leads to faster convergence and less precision . , where , , , .,width=384 ] throughout the iteration for adaptive .,width=384 ] for adaptive , as illustrated in fig .[ fig4 ] , the reconstruction snr reaches steady - state after about iterations .however , referring to fig .[ fig5 ] , the value of keeps decreasing until over iterations , though it impacts little to the convergence behavior .in fact , adaptive will decrease towards throughout the iteration and never stop .nevertheless , the precision of simulation platform limits its variation after it is below .the deviations of the actual iterative sequence and a bound sequence are both proportional to the step - size , with the difference in the scale factor . though the bound is not very strict , it does well in the proof of the convergence of -zap .as proved in theorem 4 , in non - noisy scenario , -zap can reconstruct the original signal at arbitrary precision by choosing the step - size small enough .theorem 7 demonstrates that in noisy scenario the reconstruction snr is determined by both the step - size and noise level .experiment results shown in fig .[ fig6 ] verify the analysis .each combination of step - size and measurement snr is simulated 100 times .experiment results indicate that in non - noisy scenario , the reconstruction snr increases as the decreasing of step - size . in noisy scenario ,the reconstruction snr can not increase arbitrarily due to the impact of noise . for small step - size, the reconstruction snr is mainly determined by noise level .the reconstruction snr is higher when the measurement snr is higher . for large step - size, the step - size mainly controls the reconstruction snr and the reconstruction snr increase as the decreasing of step - size . , , .,width=384 ] the figure also offers a way to choose the step - size under noise .it is not necessary to choose the step - size too small because it benefits little under the impact of noise . for an estimated reconstruction snr ,the best choice of step - size is the value just entered the flat region .this paper provides -zap a comprehensive theoretical analysis .firstly , the mentioned algorithm is proved to be convergent to a neighborhood of the sparse solution with the radius proportional to the step - size of iteration .therefore , it is non - biased and can approach the sparse solution to any extent and reconstruct the original signal exactly . secondly , when the measurements are inaccurate with noise perturbation, -zap can also approach the sparse solution and the precision is linearly reduced by the disturbance power . in addition , some related topics about the initial value and the convergence rate are also discussed .the convergence property of -compressible signal by -zap is also discussed .finally , experiments are conducted to verify the theoretical analysis on the convergence process and illustrate the impacts of parameters on the reconstruction results .it is to be proved that defined in ( [ eq1 ] ) has a positive lower bound respectively for and . 
for ,the function is continuous for and the domain is a bounded closed set . as a basic theorem in calculus , the value of a continuous function can reach the infinum if the domain is a bounded closed set . as a consequence , there exists an , such that . by the uniqueness of and the definition of , is positive in .then is positive and this leads to the conclusion that the infimum of is positive in . on the other hand, it will be proved that has a positive lower bound for .any vector in the solution space of can be represented by where denote the distance and direction , respectively .considering the definition of , one has combining ( [ eq3 ] ) with ( [ eq7 ] ) , one gets as a consequence , for , the objective function can be simplified as index set is the support set of . denotes the complement of . for ,considering the definition of , for , considering and the definition of in ( [ defineu ] ) , consequently , can be rewritten as a function of , where and it can be seen that is continuous for and the domain of is . since the domain of is the intersection of two closed sets and the first setis bounded , it is a bounded closed set and can reach the infimum .then has the minimum . by the uniqueness of , is positive , consequently . to sum up, the lower bound of is positive for which completes the proof of lemma 1 .by denoting as the iterative deviation and subtracting the unique solution from both sides of ( [ eq5 ] ) , one has according to ( [ eq57 ] ) , considering and using lemma 1 , one can shrink the second item of ( [ eq10 ] ) to using ( [ eq12 ] ) and ( [ eq10 ] ) , one has consequently , for any , if one has theorem 4 is proved .noticing that is in the kernel of and is a symmetric projection matrix to the solution space , with ( [ eq4 ] ) and ( [ defineu ] ) , one has because is a unit vector , it can be further derived that consider the definition of in ( [ eqinlemma2 ] ) and ( [ eq35 ] ) , where and are defined in ( [ eq15 ] ) . combining ( [ usignzoom ] ) and ( [ eq13 ] ) , consequently , the left inequality of ( [ eq24 ] ) is proved. now let s turn to the right inequality of ( [ eq24 ] ) . because of the property of projection matrix , , the eigenvalue of is either or . for all ,one has where denotes the eigenvalue set of .the arbitrariness of leads to therefore , theorem 5 is proved .for satisfying ( [ eq2 ] ) , there exists such that considering the recursion of sequence in ( [ eq21 ] ) , it is expected to prove that when using ( [ eq14 ] ) , the difference between the left side and the right side of ( [ eq8 ] ) is -\frac{2\gamma t}{\mu'}\|{\bf x}_n'-{\bf x}^*\|_2 \le -\gamma^2 t^2\left(1-\frac{1}{\mu'}\right)^2<0.\end{aligned}\ ] ] as a consequence , ( [ eq8 ] ) holds and it leads to according to ( [ eq41 ] ) , the quantity of decrease by each step is at least . considering that has a faster convergence rate than that of , and the trip of is from -ball to -ball , consequently the iteration number is at most to lemma 2 , the iteration number needed from -neighborhood to -neighborhood is at most where and is larger than .assume that obeys where is a positive integer . 
utilizing ( [ eqnton1 ] ) , the total iteration number from -neighborhood to -neighborhood is at most which is less than thus theorem 6 is proved .the relation between ( [ in1 ] ) and ( [ in2 ] ) comes from the following plain algebra , \\ < & \frac{k_0}{t}\left[m+\frac{1}{\mu_0 - 1}+\frac{1}{\mu_0}(\ln{(m-1)}+1)\right]\\ = & \frac{k_0}{t}\left[m+\frac{1}{\mu_0}\ln{(m-1)}+\left(\frac{1}{\mu_0 - 1}+\frac{1}{\mu_0}\right)\right]\\ < & \frac{m_0}{t\gamma}+\frac{k_0}{t}\ln{\left(\frac{m_0}{k_0\gamma}\right)+\frac{k_0}{t}\frac{\mu_0}{\mu_0 - 1 } } = \text{(\ref{in2})}.\end{aligned}\ ] ]similar to ( [ eq10 ] ) , by defining and , the deviation iterates by from lemma 3 and referring to ( [ eq12 ] ) , one has next the third item of ( [ eq36 ] ) will be studied . by the property of symmetric matrices , where and denote its eigenvalues .notice that , therefore is at most one , and at least of the eigenvalues are zeros .consequently , one has where the last step can be derived by it can be easily seen that is positive , if is an invertible matrix . because is a scalar , combining ( [ eq39 ] ) and ( [ eq40 ] ), one has for , if using ( [ eq37 ] ) , ( [ eq42 ] ) and ( [ eq43 ] ) , we have combining ( [ eq36 ] ) and ( [ eq44 ] ) , it can be concluded that under the condition of ( [ eq43 ] ) , then theorem 7 is proved .the authors appreciate jian jin and three anonymous reviewers for their helpful comments to improve the quality of this paper .yuantao gu wishes to thank professor dirk lorenz for his notification of the projected subgradient method .g. valenzise , g. prandi , m. tagliasacchi , and a. sarti , `` identification of sparse audio tampering using distributed source coding and compressive sensing techniques , '' _ journal on image and video processing _ ,vol . 2009 ,jan . 2009 .j. wright , y. ma , j. mairal , g. sapiro , t. s. huang , and s. yan , `` sparse representation for computer vision and pattern recognition , '' _ proceedings of the ieee _ ,98 , no . 6 , pp .1031 - 1044 , june 2010 .y. c. pati , r. rezaiifar , and p. s. krishnaprasad , `` orthogonal matching pursuit : recursive function approximation with applications to wavelet decomposition , '' _ proc .27th annu .asilomar conf .signals , syst ., comput . _ , pacific grove , ca , nov .1993 , vol . 1 ,40 - 44 .d. needell and r. vershynin , `` uniform uncertainty principle and signal recovery via regularized orthogonal matching pursuit , '' _ foundations of computational mathematics _ , vol . 9 , no . 3 , pp . 317 - 334 , 2009 . d. needell and r. vershynin , `` signal recovery from incomplete and inaccurate measurements via regularized orthogonal matching pursuit , '' _ ieee j. sel .topics signal process . _ ,vol . 4 , no . 2 , pp . 310 - 316 , apr . 2010 .s. kim , k. koh , m. lustig , s. boyd , and d. gorinvesky , `` an interior - point method for large - scale -regularized least squares , '' _ ieee j. sel .topics signal process ._ , vol . 1 , no . 4 , pp . 606 - 617 , dec . 2007 .m. a. t. figueiredo , r. d. nowak , and s. j. wright , `` gradient projection for sparse reconstruction : application to compressed sensing and other inverse problems , '' _ ieee j. sel .topics signal process . _ ,vol . 1 , no . 4 , pp .586 - 597 , dec .2007 .m. v. afonso , j. m. bioucas - dias , m. a. t. figueiredo , `` a fast algorithm for the constrained formulation of compressive image reconstruction and other linear inverse problems , '' _ icassp 2010 _ , pp .4034 - 4037 , mar . 2010 .i. f. gorodnitsky , j. george , and b. d. 
rao , `` neuromagnetic source imaging with focuss : a recursive weighted minimum norm algorithm , '' _ electrocephalography and clinical neurophysiology _ , pp .231 - 251 , 1995 .j. jin , y. gu , and s. mei , `` a stochastic gradient approach on compressive sensing signal reconstruction based on adaptive filtering framework , '' _ ieee j. sel .topics signal process . _ , vol . 4 , no . 2 , apr . 2010 .d. a. lorenz , m. e. pfetsch , a. m. tillmann , `` infeasible - point subgradient algorithm and computational solver comparison for -minimization , '' submitted , july 2011 .optimization online e - print i d 2011 - 07 - 3100 , http://www.optimization-online.org/db_html/2011/07/3100.html e. j. candes , j. romberg , and t. tao , `` stable signal recovery from incomplete and inaccurate measurements , '' _ communications on pure and applied mathematics _59 , no . 8 , pp . 1207 - 1223 , aug .z. ben - haim , y. c. eldar , and m. elad , `` coherence - based performance guarantees for estimating a sparse vector under random noise , '' _ ieee trans . signal process ._ , vol . 58 , no . 10 , pp . 5030 - 5043 , oct . 2010
|
a recursive algorithm named zero - point attracting projection (zap) was recently proposed for sparse signal reconstruction. compared with reference algorithms, zap demonstrates rather good performance in recovery precision and robustness. however, no theoretical analysis of the algorithm, not even a proof of its convergence, has been available. in this work, a rigorous proof of the convergence of zap is provided and the conditions for convergence are put forward. based on the theoretical analysis, it is further proved that zap is unbiased and can approach the sparse solution to any extent, with a proper choice of step - size. furthermore, the case of inaccurate measurements in the noisy scenario is also discussed. it is proved that the disturbance power linearly reduces the recovery precision, which is predictable but not preventable. the reconstruction deviation of a -compressible signal is also provided. finally, numerical simulations are performed to verify the theoretical analysis. * keywords : * compressive sensing (cs), zero - point attracting projection (zap), sparse signal reconstruction, norm, convex optimization, convergence analysis, perturbation analysis, -compressible signal.
|
quantum key distribution ( qkd ) has the potential to create completely secret communications , and has therefore predictably been received with interest by industrial and security sectors .it is now a mature technology , with commercial qkd systems already available , meaning that evaluating the practical security of real qkd systems has become essential .while it has been proved that a quantum transmission can be unconditionally secure , in figure [ fig:1 ] we can see that in a real system the quantum transmission only makes up a small part of the whole qkd protocol . the quantum part is invariably followed by classical communications steps , usually at least one of key reconciliation , privacy amplification or error correction . * the different stages of a qkd protocol .the dashed line separates the quantum transmission from the subsequent classical stages , or blocks ( ) such as reconciliation , privacy amplification or error correction .the dots indicate the possibility of subsequent classical blocks between the ones depicted and the final key , dependant on the particular protocol used .* b * this is an example showing the classical bocks typically used during a cvqkd protocol , after the continuous variable quantum transmission . ] in protocols such as bb84 , and in all implementations of cvqkd , the classical component is an essential part of the protocol . in others ,it at least has to exist as a form of practical error correction to eliminate experimental errors .it is currently not , and may never be , possible to prevent these experimental errors , implying that currently , if not permanently , a classical communication step is unavoidable .proofs of quantum security do not take into account side channel attacks on either the quantum or the classical channel .there has been a lot of work looking at side channel attacks on the quantum transition [ 4 - 8 ] .this work has naturally led to the development of device independent protocols [ 9 - 11 ] which eliminate the risk of side channel attacks to the quantum channel .unfortunately while these device independent protocols protect the quantum transition , the classical channel still remains vulnerable to side channel attacks .incautious use of the classical component can either reduce the overall security or inadvertently leak information to an eavesdropper .proofs of quantum security , including device independent protocols , do not consider implementation weaknesses in the classical parts of qkd protocols , leaving even the best quantum protocol open to classical side channel attacks .it is essential to be aware of these potential weaknesses . in cvqkd, the quantum exchange must be followed by at least two different classical protocols , key reconciliation and privacy amplification .figure [ fig:2 ] presents a typical sequence for arriving at a key during cvqkd .while these separate protocol ` blocks ' can each be proven to be individually secure , very little thought is given to the security of a combination of multiple blocks run in sequence .the problem with chaining together multiple blocks is that information obtained by an eavesdropper ( eve ) from each block could be cascaded to reveal more information about the key . ,bob s and eve s . at this stage , , and . during the reconciliation stage , , over a set of messages , , the information known by each party is condensed .if , then to form a new set of key elements , ,, . 
the aim is for , and for this .this means there is some information , where , which is left over .alice and bob bin this information , but eve will keep , as it can be used later to discover more about the system . after privacy amplification , , defined by a set of messages , , alice and bobhave managed to agree a key , . where .eve is left with where .this is the condition for secrecy .unfortunately , eve has much more information available to her than just . during every stage of the protocol, she gains some information about the system .if instead of throwing this information away , as alice and bob do , she keeps it , then she can construct a function to use this and any information she gained during the quantum transmission to cascade back through the different blocks and gain more information about the key .the protocol is in fact only secure if . ] for the sequence in figure [ fig:2 ] , which follows the quantum exchange , the two legitimate users ( alice and bob ) each have some information about the quantum transmission .the information known by alice is denoted x and , that by bob , y. it is also possible that eve will have gained some knowledge of the transmission , and this is denoted z. in information theory , information overlap and secrecy are measured using the mutual information , , and conditional information , , respectively .these are defined to be : where and are the marginal probability distribution functions of and respectively , and is the joint probability distribution function of and . for a secure channel , after the quantum exchange ,the following are in general true : during the reconciliation step , , defined by a set of messages ` ' , the information known by alice is reconciled with that known by bob to produce new key elements , and , by and similarly for . however , as and , there is some information , and , which is thrown away . at this point , .the eavesdropper can follow exactly the same process , arriving at . however , unlike alice and bob , it would be foolish for her to throw away her excess information , , as this can be used later to give her more information about the system .after the subsequent step , privacy amplification , , defined by a set of messages ` ' , alice and bob are each left with a key , , where .eve should emerge from the privacy amplification with where .this ensures secrecy .unfortunately , there is more information available to eve than just .she can compile a function which allows her to extract any excess information revealed by the classical exchange above any information she received quantumly .although there may be isolated circumstances where eve receives precisely zero excess information , in general the system is secure if and only if the following holds : where is the function compiled by eve to maximise her knowledge of .however note that may be either unknown or may change from exchange to exchange , nevertheless its existence must be taken into account . 
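Because the security conditions above are phrased in terms of mutual information and conditional entropy, it is useful to be able to evaluate them for discrete (for example, already digitised) data. The helper functions below are a generic sketch; the base-2 logarithm, giving results in bits, is an assumption about the intended units.

```python
import numpy as np

def mutual_information(p_xy):
    """I(X;Y) in bits from a joint probability table p_xy[i, j] = P(X=i, Y=j)."""
    p_xy = np.asarray(p_xy, dtype=float)
    p_x = p_xy.sum(axis=1, keepdims=True)         # marginal of X
    p_y = p_xy.sum(axis=0, keepdims=True)         # marginal of Y
    mask = p_xy > 0
    return float(np.sum(p_xy[mask] * np.log2(p_xy[mask] / (p_x @ p_y)[mask])))

def conditional_entropy(p_xy):
    """H(X|Y) = H(X,Y) - H(Y) in bits."""
    p_xy = np.asarray(p_xy, dtype=float)
    p_y = p_xy.sum(axis=0)
    h_xy = -np.sum(p_xy[p_xy > 0] * np.log2(p_xy[p_xy > 0]))
    h_y = -np.sum(p_y[p_y > 0] * np.log2(p_y[p_y > 0]))
    return float(h_xy - h_y)

# toy example: a binary symmetric channel with 10% crossover probability
p = np.array([[0.45, 0.05],
              [0.05, 0.45]])
print("I(X;Y) =", mutual_information(p), "bits;  H(X|Y) =", conditional_entropy(p), "bits")
```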
in this paper, we demonstrate that this landscape is more complex than is at first apparent , by showing that simply the transition from a message received quantumly to information processed classically in general lowers the secrecy of cvqkd .this happens even before any classical communication occurs and is the result of local transformations of the data .this paper concerns itself solely with this data digitisation step and the importance of this new type of side channel attack , not the subsequent reconciliation .no attempt is made to propose one reconciliation protocol over another . while this weakness in particular only applies to cvqkd, it highlights the need to consider the classical protocol elements as carefully as the quantum .this case demonstrates a counter intuitive violation of the basic assumption that unbroadcast local operations do not affect the security of the protocol .this violation suggests that other non - quantum protocol components , and the transitions between them , need to be reassessed .in cvqkd a continuous distribution of numbers is transferred through a quantum channel .it is then transformed into a binary string to form the basis of a secret key .cvqkd was proposed with the idea of increasing the key rate from that of qkd , whilst also increasing the ease of implementation , and reducing the need for single photon sources and detectors .in general , in cvqkd [ 1,13 - 19 ] the sender ( alice ) applies separate , random gaussian distributed modulations to the phase and amplitude quadratures of a laser .the receiver ( bob ) then measures either ( or both ) quadratures , obtaining a gaussian distribution of random numbers .noise introduced into the system through any of a number of sources such as shot noise , channel noise , detector noise or an eavesdropper , will mean that each of the points bob measures will have a probability of some error , with respect to that originally sent by alice . in the case of gaussian additive noise , from bob s perspective, the value sent by alice has a gaussian probability distribution centred on the value received by bob . at the end of the process, alice and bob are left with non - identical distributions of continuous random numbers . in order for bob and alice to reconcile a key ,each of their continuous distributions of numbers have to be converted into a binary string .there are a number of different ways in which this can be done . here , in a bid to aid transparency only one method , called slicing , is examined .slicing has been succeeded by protocols which allow communication across longer distances , and are more optimal to use in practice , such as the one described by leverrier et al . 
in .this conversion is the first and simplest thing that happens to the data when it exits the quantum channel , and even this has a security risk associated with it .there are a number of different methods of slicing , with different levels of security , ease of implementation and key production rates .the simplest method of slicing is to take values which fall in the positive side of the gaussian mean as binary ` 1 ' , and negative values as binary ` 0 ' .it can be easily argued that very few of the errors in transmission will be converted to errors in the bit string .only those points with an error margin that crosses between the positive and negative sides of the quadrature can create errors in the final bit string .this enables production of one bit per transmitted point , and the errors that do get transferred are later removed using standard classical error correction protocols . slicing is purely a local output , however even this has a security risk associated with it .there are two methods of slicing which show the extremes of these security implications . in the first , every bit of datais encoded as a ` 0 ' .this has no security , and no ability to communicate . in the second , each bit of datais encoded randomly as either ` 0 ' or ` 1 ' .this is completely secure , but again , communication is impossible .a realistic slicing method has to find some middle ground between these two cases , where communication is possible , and the security is maximised .it is possible to obtain a higher information transfer , with the same transmission rate , by dividing ( or ` slicing ' ) the gaussian distribution of numbers into a larger number of sections , referred to as ` bins ' .a change of slicing with two bins ( producing one string bit ) , to slicing with four bins ( allowing two string bits for each transmitted point ) , effectively doubles the data rate .this method does however also increase the number of transmission errors transferred into bit string errors , subsequently referred to as ` transferred errors ' , due to the higher number of boundaries between bins . in a worst case scenario , an eavesdropper ( eve )will have managed to gain significant information about the quantum transmission .she will have measured a gaussian distribution of random numbers , different from those of both bob and alice . as alice and bob can only discuss which slicing method they are going to use over classical channels , to which eve can listen , eve will know which method they are using , and can apply it to her own data in an attempt to keep her key as similar as possible to that of alice and bob .as slicing transfers transmission errors into bit string errors and the different slicing methods transfer different numbers of errors , the slicing method best to use should be chosen on an analysis of the number of transmission errors between alice and bob , and also between eve and the legitimate users . + * .modulation is applied to a laser beam using an electro - optic modulator ( eom ) , which is then passed to the detectors of alice , bob and eve .the signal to noise ratio of each detector is controlled by the combination of half wave plates and polarising beam splitters ( pbs ) .data from simulation at channel transmission is shown in * b * and * c*. bob s data is very close to that of alice , while eve s has considerable noise . 
] * .modulation is applied to a laser beam using an electro - optic modulator ( eom ) , which is then passed to the detectors of alice , bob and eve .the signal to noise ratio of each detector is controlled by the combination of half wave plates and polarising beam splitters ( pbs ) .data from simulation at channel transmission is shown in * b * and * c*. bob s data is very close to that of alice , while eve s has considerable noise . ] in order to analyse the slicing methods , a computer was used to simulate a quantum channel , based on that by grosshans _et al_. in . during the simulation , alicewas given a gaussian distribution of random numbers .bob and eve were also given this distribution , but with the addition of some gaussian noise .the channel transmission was varied so that at high transmissions bob received few errors , and eve many ; and the reverse at low transmissions .this simulates the transmissions through a real channel and allows us to examine how the channel transmission affects the secrecy .figure [ fig:3 ] shows the channel and data comparisons for alice , bob and eve .the random number generator was seeded so that each different slicing method always used the same values .we analysed a range of possible slicing methods which demonstrated our key points .two properties of the used slicing methods were varied : firstly the size and positioning of the bins , and secondly , the way in which the bins are numbered .a third method , not used here but mentioned for completeness , is the numerical optimisation of the bin positions to give the maximum mutual information between alice and bob .this method is frequently used by grosshans _et al _ , who alternate each degree of severity of slicing with an error correction protocol . as noted above ,increasing the mutual information between alice and bob does not always increase the secrecy , in fact we show here that sometimes the opposite applies .two methods of bin positioning were used during slicing of the gaussian distribution . in the first, the bins were placed at uniform distances along the x - axis , so that a set range of measured values fell into each bin . in the second method ,the bins were chosen so that there were an equal number of transmitted points in each bin . these are demonstrated in figure [ fig:4 ] . * equal width , and * b * equal probability for the same curve are marked with dashed lines . ] placing the bins so that they are equal in probability would lead to a higher number of transferred errors than using bins with an equal width .this is due to the bunching of bin boundaries around the centre of the histogram ; the centre of the histogram contains more points , and thus more errors , so this placing of bin boundaries will lead to a greater number of transferred errors .three methods of bin numbering were used .the first of these was a standard binary code , as this is the simplest and most common method of digital numbering .it is an instinctive choice . for the second method ,a gray code was chosen as this only has a single ( and therefore the minimum ) bit difference between adjacent bins .standard binary on the other hand , frequently has differences of several bits between bins , meaning that an error in transmission which places a point in the bin neighbouring that from its original position will cause more than one bit error in the final string . 
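The two bin-positioning rules introduced above can be sketched as follows: equal-width bins are spaced uniformly over a fixed range of measured values, while equal-probability bins sit at the empirical quantiles of the data so that each bin receives the same number of points. The clipping range for the equal-width case and the channel noise level used in the small comparison are assumptions for illustration only.

```python
import numpy as np

def equal_width_edges(data, n_bins, span=3.0):
    """Uniformly spaced edges over [-span*std, +span*std]; outliers fall in the end bins."""
    s = np.std(data)
    return np.linspace(-span * s, span * s, n_bins + 1)

def equal_probability_edges(data, n_bins):
    """Edges at empirical quantiles, so every bin holds roughly the same number of points."""
    return np.quantile(data, np.linspace(0.0, 1.0, n_bins + 1))

def slice_to_bins(data, edges):
    """Map each measured value to a bin index in 0 .. n_bins - 1."""
    return np.clip(np.searchsorted(edges, data, side="right") - 1, 0, len(edges) - 2)

# example: fraction of points that change bin between Alice and a noisy Bob
rng = np.random.default_rng(4)
alice = rng.standard_normal(100_000)
bob = alice + 0.2 * rng.standard_normal(100_000)     # assumed channel noise level
for name, edges in (("equal width", equal_width_edges(alice, 16)),
                    ("equal probability", equal_probability_edges(alice, 16))):
    moved = np.mean(slice_to_bins(alice, edges) != slice_to_bins(bob, edges))
    print(f"{name:17s}: fraction of points changing bin = {moved:.3f}")
```

Because the equal-probability edges bunch around the centre of the Gaussian, this comparison also illustrates why that placement transfers more errors than equal-width bins.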
for comparison ,a third method was arbitrarily chosen from one of a number of different methods which give significantly more numbers of differing bits between adjacent bins than either binary or gray , causing the maximum disruption to the acquired string .the method chosen for this case was a fibonacci linear feedback shift register ( f - lfsr ) , which works as follows . the first bin is labelled with an appropriate number of ` 0 s ' followed by a ` 1 ' such as ` 0001 ' .for each subsequent bin , an xor operation is performed between the final two bits , and the resultant bit becomes the first bit of the new label .the other digits are all shifted right by one place , with the one on the far right end being discarded .for example , ` 0001 ' becomes ` 1000 ' then ` 0100 ' and ` 0010 ' etc . for simplicity , in this paper , we limit the eavesdropper to using the same numbering scheme as the legitimate parties . in practice , it may be possible for eve to gain more information than in this case , increasing her own closeness to the legitimate users .after the data had been sliced , the mutual information between alice and bob ( ) , alice and eve ( ) and bob and eve ( ) was calculated , and used to determine , where .this is a commonly used measure of security , with key production thought to be impossible if is non - positive .+ + the results of this model are presented in figures 5 - 8 . each of the six different trialled slicing methods were studied , using , and bins in each case .figure [ fig:5 ] shows a comparison of and against the channel capacity for four different slicing methods . for simplicity ,only the numbering methods with the highest ( f - lfsr ) and lowest ( gray code ) numbers of transferred errors are shown , each with both of the different bin positioning methods , and each using bins ( 4 bits to describe each bin ) . and against channel transmission for four different slicing methods , each using bins .method 1 represents an f - lfsr code using bins of equal probability , method 2 is f - lfsr with bins of equal width , method 3 a gray code with bins of equal width and method 4 is a gray code with bins of equal probability . is greater than at channel transmissions below about 0.5 , meaning that in this region , under these conditions , key can not be produced . ]* the secrecy of different slicing methods as a function of channel capacity , and * b * the optimal slicing method for each channel capacity .the slicing methods in * b * are arbitrarily ordered with those having the highest , and fewer transferred errors towards the top .a _ uses a gray code and bins of equal probability , _ b _ uses standard binary with bins of equal probability , _ c _ is a f - lfsr with bins of equal probability and _ d _ is a gray code with bins of equal width .generally alice and bob favour a method with higher numbers of transferred errors at lower channel transmissions . ]it can easily be shown that above a channel transmission of 0.5 , the information shared between alice and bob is higher than the information shared between the eavesdropper and either of the legitimate users . however , below a channel transmission of 0.5 , the reverse is true , implying that secrecy is lost , and no key can be made . while it can be possible to reconcile key for channel transmission below 0.5 ,this is only done using a method known as reverse reconciliation which is examined further on in this paper . 
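The three bin-numbering schemes can be generated as below. Standard binary and the binary-reflected Gray code (adjacent labels differing in exactly one bit) follow their usual constructions; the F-LFSR labelling implements the shift-and-XOR rule described above, starting from a label of the form 0...01. Note that an n-bit LFSR cycles through at most 2^n - 1 distinct states, so with 2^n bins one label necessarily repeats; how that repeat is handled is an implementation assumption here.

```python
def binary_labels(n_bits):
    """Standard binary labels for 2**n_bits bins."""
    return [format(i, f"0{n_bits}b") for i in range(2 ** n_bits)]

def gray_labels(n_bits):
    """Binary-reflected Gray code: adjacent bins differ in exactly one bit."""
    return [format(i ^ (i >> 1), f"0{n_bits}b") for i in range(2 ** n_bits)]

def flfsr_labels(n_bits):
    """Fibonacci LFSR labelling (n_bits >= 2): the new leading bit is the XOR of the
    last two bits of the previous label, followed by a right shift discarding the last bit."""
    label = "0" * (n_bits - 1) + "1"                 # e.g. '0001' for 4 bits
    labels = []
    for _ in range(2 ** n_bits):                     # sequence repeats after 2**n_bits - 1 labels
        labels.append(label)
        new_bit = str(int(label[-1]) ^ int(label[-2]))
        label = new_bit + label[:-1]
    return labels

def adjacent_bit_differences(labels):
    """Number of differing bits between each pair of neighbouring bin labels."""
    return [sum(a != b for a, b in zip(labels[i], labels[i + 1]))
            for i in range(len(labels) - 1)]

for name, fn in (("binary", binary_labels), ("gray", gray_labels), ("f-lfsr", flfsr_labels)):
    diffs = adjacent_bit_differences(fn(4))
    print(f"{name:7s}: mean bit flips between adjacent bins = {sum(diffs)/len(diffs):.2f}")
```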
for this section, only standard direct reconciliation is used .figure [ fig:6 ] * a * shows against channel transmission for the four most secure slicing methods trialled ( those with the highest ) . it can be seen that as expected , these four methods have all been sliced with the lowest number of bins , however the numbering system and the position of the bins are not constant throughout these four , with the optimal choice changing with the channel transmission .the optimal slicing method at each channel transmission is shown in figure [ fig:6 ] * b * , with the slicing methods arbitrarily ordered with those towards the top having the highest , and thus the lowest number of errors transferred .it can also be seen that at higher transmissions , the codes with fewer transferred errors are preferable in general as alice and bob do not want to introduce errors between themselves .however at lower channel transmissions , the codes which transfer more errors are favoured , due to alice and bob purposefully introducing errors to try and distance themselves from eve .there is a slight rise in figure [ fig:6]*b * between about and channel transmission . in this region , where is very close to but less than , it appears to be advantageous for alice and bob to switch to a slicing method with fewer transferred errors to try and increase their information advantage over eve . in reverse reconciliation , during the classical advantage distillation and error correction protocols , alice changes her data to match that which bob has received .usually , direct reconciliation takes place ( bob changing his data to match what alice sent ) , but it has been shown that using reverse reconciliation can enable a secret key to be produced even at less than channel transmission . and against channel transmission for four different slicing methods , each using bins . for each slicing method, is consistently below . as in fig[fig:5 ] , method 1 is an f - lfsr with bins of equal probability , method 2 shows an f - lfsr with bins of equal width , method 3 a gray code with bin of equal probability , method 4 a grey code with bins of equal width . ] * the secrecy of different slicing methods as a function of channel capacity when using reverse reconciliation , and * b * the optimal slicing methods at each value of channel transmission . as with the direct reconciliation case , _ a _ represents a gray code with bins of equal probability , _b _ is standard binary with bins of equal probability , and _c _ is a f - lfsr also with bins of equal probability .again , the best slicing method to use changes with the channel transmission . ] the condition for key distillation in direct reconciliation is , whereas in reverse reconciliation , it is .as can be seen in figure [ fig:8 ] , for all channel transmissions , so key can always be produced . 
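for completeness, a sketch of how the information quantities above can be estimated from the sliced data. the secrecy definitions are only assumed here: the standard conventions, consistent with the conditions quoted above, are that key distillation in direct reconciliation requires i_ab > i_ae while reverse reconciliation requires i_ab > i_be, and the plug-in histogram estimator below is a choice of ours rather than necessarily the one used to produce figures 5 - 8.

import numpy as np

def mutual_information(x, y, n_bins):
    # plug-in estimate (in bits) of the mutual information between two
    # already-sliced integer sequences
    joint = np.zeros((n_bins, n_bins))
    for a, b in zip(x, y):
        joint[a, b] += 1.0
    joint /= joint.sum()
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log2(joint[nz] / (px @ py)[nz])))

def secrecy_measures(alice, bob, eve, n_bins):
    i_ab = mutual_information(alice, bob, n_bins)
    i_ae = mutual_information(alice, eve, n_bins)
    i_be = mutual_information(bob, eve, n_bins)
    # assumed conventions: i_ab - i_ae for direct reconciliation,
    # i_ab - i_be for reverse reconciliation
    return {'direct': i_ab - i_ae, 'reverse': i_ab - i_be}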
if is taken to be , then figure [ fig:8 ] is produced .figure [ fig:8 ] * a * shows the same trends as when using direct reconciliation and with the same three best slicing methods , but shows to be positive for all channel transmissions as expected .figure [ fig:8 ] * b * also shows the same trend as the direct reconciliation graphs above transmission , but the opposite trend below that .this is because at low channel transmissions , using reverse reconciliation means that while alice and bob share very little information , eve and bob also share very little information , as is shown in figure [ fig:7 ] .a low error transfer rate will help alice and bob , but not necessarily eve .practical qkd systems consist of several different individual protocols , a quantum transmission , followed by at least one classical protocol .the incautious stacking of several of these protocols together can lead to an unexpected lowering of security .in particular , local transformations of data from quantumly received states to classically computable ones during slicing , can unintuitively lose significant amounts of secrecy .+ + we have shown in figures [ fig:6 ] and [ fig:8 ] that for slicing , the method for which the least secrecy is lost when crossing the quantum classical boundary changes unpredictably with the channel transmission . while general trends are followed , it is impossible to forecast which slicing method is optimal at which values of channel transmission without running prior simulations. additionally , here only a very few different slicing methods have been trialled , whereas in reality there will be a multitude of different methods which would all need to be examined if the best method for each channel transmission is to be found . in general however , for both direct and reverse reconciliation , the smaller the number of bins , the greater the security , and for higher channel transmissions , slicing methods with fewer transferred errors are more optimal .+ both the fact that the best slicing method to choose changes with the channel transmission , and the unpredictability of this have inconvenient consequences for the design of real world cvqkd applications . for example , during free space transmissions through the atmosphere ( for instance during satellite communications ) the channel transmission of the link can vary significantly due to changes in the ionosphere , which can be affected by everything from time of day to solar activity .these large and frequently unpredictable changes in channel transmission make the choice of slicing method unclear .another example would be in a qkd network , an example of which is shown in figure [ fig:9 ] . 
herethe base station sends out data to separate terminals , all of which are at different distances , and thus are likely to have different channel transmissions .decisions would then have to be made as to whether all the lines used the same slicing method , or if they should be chosen separately .if they were all the same , the choice of method becomes critical , and if they were different , difficulties in implementation would arise .+ + an important further study would be to not limit eve to using the same slicing method as alice and bob , but letting her choose in each circumstance what is best for her .other factors such as the key rate also need to be considered , as it is possible to increase the key rate at the cost of a higher error rate .an upper bound on an acceptable error rate is likely to be provided by the particular classical error correction codes used subsequently .many of these however will reduce the key rate at high levels of noise , so a balance needs to be found .the question of whether there exists a slicing method for a particular protocol which does not alter the security of the system is also raised .+ + in addition to this , slicing is only one of a number of proposed reconciliation methods .others , not examined here , may also be vulnerable to side channel attacks , exploiting either a reduction in security across the quantum classical boundary , or unexpected information leakage in the joining of two or more classical protocols .e.n . , m.e . and f.w .would like to thank the engineering and physical sciences research council for their support , f.w . would like to additionally thank airbus defence and space .
|
experimental quantum key distribution ( qkd ) protocols have to consist of not only the unconditionally secure quantum transmission , but also a subsequent classical exchange that enables key reconciliation and error correction . there is a large body of work examining quantum attacks on the quantum channel , but here we begin to examine classical attacks to both the classical communication and the exchange as a whole . linking together separate secure protocols can unexpectedly leak information to an eavesdropper , even if the components are unconditionally secure in isolation . here we focus specifically on the join between quantum and classical protocols , finding that in just this crossing of the quantum - classical boundary , some security is always and unintuitively lost . this occurs with no communication between the separate parties . while this particular example applies to only continuous variable quantum key distribution ( cvqkd ) , it highlights the need to re - examine the way all individual protocols are actually used .
|
it is only in the past two decades that physicists have intensively studied the structural and/or topological properties of complex networks ( and refs . therein ) .they have discovered that in most real graphs , small and finite loops are rare and insignificant .hence , it was possible to assume their architectures to be locally dominated by trees .these properties have been extensively exploited .for instance , it is surprising how well this assumption works in the case of numerous loopy and clustered networks .therefore , we decided on the minimal spanning tree ( mst ) technique as a particularly useful , canonical tool in graph theory , being a correlation based connected network without any loop ( and refs . therein ) . in the graph ,the vertices ( nodes ) are the companies and the distances between them are obtained from the corresponding correlation coefficients .the required transformation of the correlation coefficients into distances was made according to the simple recipe .we consider the dynamics of an empirical complex network of companies , which were listed on the warsaw stock exchange ( wse ) for the entire duration of each period of time in question . in general , both the number of companies ( vertices ) and distances between them can vary in time .that is , in a given period of time these quantities are fixed but in other periods can be varied . obviously ,during the network evolution some of its edges may disappear , while others may emerge .hence , neither the number of companies nor edges are conserved quantities . as a result ,their characteristics , such as for instance , their mean length and mean occupation layer , are continuously varying over time as discussed below .we applied the mst technique to find the transition of a complex network during its evolution from a hierarchical ( power law ) tree representing the stock market structure before the recent worldwide financial crash to a superstar - like tree ( superhub ) decorated by the hierarchy of trees ( hubs ) , representing the market structure during the period of the crash .subsequently , we found the transition from this complex tree to the power law tree decorated by the hierarchy of local star - like trees or hubs ( where the richest from these hubs could be a candidate for another superhub ) representing the market structure and topology after the worldwide financial crash . we foresee that our results , being complementary to others obtained earlier , can serve as a phenomenological foundation for the modeling of dynamic structural and topological phase transitions and critical phenomena on financial markets initial state ( graph or complex network ) of the wse is shown in figure [ figure:20060309_asien ] in the form of a hierarchical mst .both algorithms are often used . 
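a minimal sketch of the mst construction used throughout this section. the text cites, rather than writes out, the recipe for turning correlation coefficients into distances; the sketch assumes the standard choice d = sqrt(2(1 - rho)), in which strongly correlated companies lie close together, and uses prim's algorithm (kruskal's algorithm, cited in the references, works equally well). the last three helpers compute the vertex degrees, the normalized tree length and the mean occupation layer analysed later in the text; all names are ours.

import numpy as np

def correlation_to_distance(corr):
    # assumed recipe: d_ij = sqrt(2 * (1 - rho_ij)); perfectly correlated
    # companies sit at distance 0, perfectly anticorrelated ones at 2
    return np.sqrt(2.0 * (1.0 - corr))

def minimal_spanning_tree(dist):
    # prim's algorithm on a full distance matrix; returns the n - 1 tree
    # edges as (parent, vertex, distance) triples
    n = dist.shape[0]
    in_tree = np.zeros(n, dtype=bool)
    in_tree[0] = True
    best = dist[0].copy()
    parent = np.zeros(n, dtype=int)
    edges = []
    for _ in range(n - 1):
        j = int(np.argmin(np.where(in_tree, np.inf, best)))
        edges.append((int(parent[j]), j, float(best[j])))
        in_tree[j] = True
        closer = (dist[j] < best) & ~in_tree
        parent[closer] = j
        best = np.minimum(best, dist[j])
    return edges

def vertex_degrees(edges, n):
    deg = [0] * n
    for u, v, _ in edges:
        deg[u] += 1
        deg[v] += 1
    return deg

def normalized_tree_length(edges):
    # average length of the edges actually present in the tree
    return sum(d for _, _, d in edges) / len(edges)

def mean_occupation_layer(edges, n, central):
    # mean number of edges separating each vertex from the chosen
    # central vertex (breadth-first search over the tree)
    adj = {v: [] for v in range(n)}
    for u, v, _ in edges:
        adj[u].append(v)
        adj[v].append(u)
    depth, frontier = {central: 0}, [central]
    while frontier:
        nxt = []
        for u in frontier:
            for w in adj[u]:
                if w not in depth:
                    depth[w] = depth[u] + 1
                    nxt.append(w)
        frontier = nxt
    return sum(depth.values()) / n

in the dynamic variant discussed below, the central vertex passed to mean_occupation_layer is either kept fixed (capital partners) or re-chosen at each time window as the vertex of currently largest degree.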
] .companies ) for the period from 2005 - 01 - 03 to 2006 - 03 - 09 , before the worldwide financial crash .the companies are indicated by the coloured circles ( see the legend for an additional description ) .we focus on the financial company capital partners ( large red circle ) , as later it plays a central role in the mst , shown in figure [ figure:20080812_asien ] .when the link between two companies is in dark grey , the cross - correlation between them is greater , while the distance between them is shorter ( cf .the corresponding scale incorporated there ) .however , the geometric distances between companies , shown in the figure by the lengths of straight lines , are arbitrary , otherwise the tree would be much less readable.,width=453 ] this graph was calculated for companies present on wse for the period from 2005 - 01 - 03 to 2006 - 03 - 09 , i.e. before the worldwide financial crash occurred .we focus on the financial company capital partners .it is a suburban company for the most of the period in question .however , it becomes a central company for the mst presented in figure [ figure:20080812_asien ] , for the period from 2007 - 06 - 01 to 2008 - 08 - 12 , which covers the worldwide financial crash .companies of the wse ) observed for the period from 2007 - 06 - 01 to 2008 - 08 - 12 , which covers the worldwide financial crash .now capital partners becomes a dominant hub ( or superhub ) .it is a temporal giant component , i.e. the central company of the wse.,width=453 ] in other words , for this period of time , capital partners is represented by a vertex which has a much larger number of edges ( or it is of a much larger degree ) than any other vertex ( or company ) .this means that it becomes a dominant hub ( superhub ) or a giant component . in the way described above , the transition between two structurally ( or topologically ) different states of the stock exchangeis realized .we observed the transition from hierarchical ( power law ) mst ( consisting of a hierarchy of local stars or hubs ) to the superstar - like ( or superhub ) mst decorated by the hierarchy of trees ( hubs ) . in figure [ figure : spok_nie_spok ]we compare discrete distributions of vertex degrees fixed for a given period of time . ] .vs. ( where is the vertex degree ) for the hierarchical mst shown in figure [ figure:20060309_asien ] and the superstar - like mst decorated by the hierarchy of trees shown in figure [ figure:20080812_asien ] .one can observe that for the latter mst there is a single vertex ( rhs plot ) , which has a degree much larger ( equalling 53 ) than any other vertex . 
indeed, this vertex represents the company capital partners , which seems to be a superextreme event or a dragon king , being a giant component of the mst network .,width=453 ] although the distributions obtained are power laws , we can not say that we are here dealing with a barabsi albert ( ba ) type of complex network with their rule of preferential linking of new vertices .this is because for both our trees , the power law exponents are distinctly smaller than 3 ( indeed , the exponent equal to 3 characterizes the ba network ) , which is a typical observation for many real complex networks .remarkably , the rhs plot in figure [ figure : spok_nie_spok ] makes it , perhaps , possible to consider the tree presented in figure [ figure:20080812_asien ] as a power law mst decorated by a temporal dragon king .this is because the single vertex ( representing capital partners ) is located far from the straight line ( in the log - log plot ) and can be considered as a temporally outstanding , superextreme event or a temporal dragon king , which condenses the most of the edges ( or links ) .hence , the probability , where is the degree of the dragon king ( which is the maximal degree here ) .we suggest that the appearance of such a dragon king could be a signature of a crash . for completeness, the mst was constructed for companies of the wse for a third period of time , from 2008 - 07 - 01 to 2011 - 02 - 28 , i.e. after the worldwide financial crash ( cf .figure [ figure:20080701 - 20110228_new ] ) . ) . apparently , capital company is no longer the central hub , but has again become a marginal company ( vertex ) .when the link between two companies is in dark gray , the cross - correlation between them is greater , while the distance between them is shorter .however , the geometric distances between companies , shown in the figure by the length of the straight lines , are arbitrary , otherwise the tree would be much less readable.,width=453 ] it is interesting that several new ( even quite rich ) hubs appeared while the single superhub ( superstar ) disappeared ( as it became a marginal vertex ) .this means that the structure and topology of the network strongly varies during its evolution over the market crash .this is also well confirmed by the plot in figure [ figure:214_new ] , where several points ( representing large hubs ) are located above the power law .apparently , this power law is defined by the slope equal to and can not be considered as a ba complex network .rather , it is analogous to the internet , which is characterized by almost the same slope . it would be an interesting project to identify the actual local dynamics ( perhaps nonlinear ) of our network , which subsequently creates and then annihilates the temporal singularity ( i.e. the temporal dragon king ) .vs. ( where is the vertex degree ) for the mst shown in figure [ figure:20080701 - 20110228_new ] .six points ( associated with several different companies ) appeared above the power law .this means that several large hubs appeared instead of a single superhub .apparently , the richest vertex has here the degree and the corresponding probability .however , this vertex can not be considered as a superextreme event ( or dragon king ) because it is not separated far enough from other vertices.,width=453 ] the considerations given above are confirmed in the plots shown in figures [ figure : tree_length ] and [ figure : mean_layer ] . therewell - defined absolute minima of the normalized length and mean occupation layer vs. 
time at the beginning of 2008 are clearly shown , respectively .as usual , the normalized length of the mst network simply means the average length of the edge directly connecting two vertices .apparently , this normalized length vs. time has an absolute minimum close to 1 at the beginning of 2008 ( cf .figure [ figure : tree_length ] ) , while at other times much shallower ( local ) minimums are observed .this result indicates the existence of a more compact structure at the beginning of 2008 than at other times .furthermore , by applying the mean occupation layer defined , as usual , by the mean number of subsequent edges connecting a given vertex of a tree with the central vertex ( here capital partners ) , we obtained quite similar results ( cf . the solid curve in figure [ figure : mean_layer ] ) .for comparison , the result based on the other central temporal hubs ( having currently the largest degrees ) was also obtained ( cf . the dotted curve in figure [ figure : mean_layer ] ) .this approach is called the dynamic one .fortunately , all the approaches used above give fully consistent results which , however , require some explanation .in particular , both curves in fig .[ figure : mean_layer ] coincide in the period from 2007 - 06 - 01 to 2008 - 08 - 12 having common abolute minimum located at the beginning of 2008 . to plot the dotted curve the company , which has the largest degrees ,was chosen at each time as a temporal central hub .in general , such a company can be replaced from time to time by other company. however , for the period given above indeed the capital partners has largest degrees ( while other companies have smaller ones , of course ) .this significant observation is clearly confirmed by the behavior of the solid curve constructed at a fixed company assumed as a central hub , which herein it is the capital partners . just outside this period ,the capital partnes is no more a central hub ( becoming again the peripheral one ) as other companies play then his role , although not so spectacular .this results from the observation that the dotted curve in fig .[ figure : mean_layer ] is placed below the solid one outside the second period ( i.e. from 2007 - 06 - 01 to 2008 - 08 - 12 ) .hence , we were forced to restrict the period on august 12 , 2008 and do not consider other period such as between september 2008 and march 2009 , where the most serious drawdown during the worldwide financial crash 2007 - 2009 occurred .perhaps , some precursor of the crash is demonstrated herein by the unstable state of the wse .anyway , the subsequent work should contain a more detailed analysis of the third period .the existence of the absolute minimum ( shown in figure [ figure : mean_layer ] ) for capital partners , and simultaneously the existence of the absolute minimum shown in figures [ figure : tree_length ] in the first quarter of 2008 ( to a satisfactory approximation ) confirms the existence of the star - like structure ( or a superhub ) , as a giant component of the mst , centered around capital partners .we may suppose that the evolution from a marginal to the central company of the stock exchange and again to a marginal company , is stimulated perhaps by the most attractive financial products offered by this company to the market only in the second period of time ( i.e. 
in the period from 2007 - 06 - 01 to 2008 - 08 - 12 ) . in this work , we have studied the empirical evolving connected correlated network associated with a small size stock exchange , the wse . our result may seem somewhat embarrassing : such a marginal capitalization company as capital partners ( less than one permil of a typical wig20 company , like for instance kghm ) becomes a dominant hub ( superhub ) in the second period considered ( see fig . 2 for details ) . our work provides empirical evidence that there is a dynamic structural and topological first order phase transition in the time range dominated by a crash . namely , before and after this range the superhub ( or the unstable state of the wse ) disappears and we observe the power law mst and the power law mst decorated by several hubs ( cf . figs . [ figure:20080701 - 20110228_new ] and [ figure:214_new ] for details ; the richest of these hubs is a vertex of degree equal to 30 ) , respectively . therefore , our results consistently confirm the existence of dynamic structural and topological phase transitions , which can be roughly summarized as follows : we put forward the hypothesis that the first transition can be considered as a signature of a stock exchange crash , while the second one can be understood to be an aftershock . nevertheless , the second transition related to the third period requires a more detailed analysis . indeed , in this period the pkobp company very much resembles a superhub ( see figs . [ figure:20080701 - 20110228_new ] and [ figure:214_new ] for details ) , which could play the role of another stable state of the wse . in other words , our work indicates that we are perhaps dealing with an indirect transition ( a first order one ) between two stable components , where the unstable component is surprisingly well seen between them . one of the most significant observations contained in this work comes from the plots in figures [ figure : spok_nie_spok ] and [ figure:214_new ] . namely , the exponents of all degree distributions are smaller than 3 , which means that all variances of vertex degrees diverge . this indicates that we are here dealing with criticality , as the range of fluctuations is comparable with the size of the graph . this means that the network evolution from 2005 - 01 - 03 to 2011 - 02 - 28 takes place within the scaling region containing a critical point . apparently , we are here dealing with scale - free networks , which are ultrasmall worlds , whose characteristic distance grows more slowly than that of the small world , proportional only to the logarithm of the network size . it should be stressed that we also obtained similar results for the frankfurt stock exchange . we suppose that our results are complementary to those obtained earlier by drod and kwapie . their results focused on the slow ( stable ) component ( state ) . namely , they constructed the mst network of 1000 highly capitalized american companies . the topology of this mst shows its centralization around the most important and quite stable node , which is general electric . this was found in the frame of both binary and weighted msts . notably , it should be stressed in this context that the discontinuous phase transition ( i.e. the first order one ) evolves continuously before the continuous phase transition ( i.e. before the second order one ) . this discontinuous phase transition passes through an unstable state involving , perhaps , a superheated state such as the superhub in our case . this can not be considered as noise in the system but rather as a result of the natural evolution of the system until the critical point is reached ( cf . and refs .
therein , where the role of stable states ( or slow components ) on nyse or nasdaq was considered by using binary and weighted msts ) .we suppose that the phenomenological theory of cooperative phenomena in networks proposed by goltsev et al . ( based on the concepts of the landau theory of continuous phase transitions ) could be a promising first attempt .an alternative view of our results could consider the superhub phase as a temporal condensate ( and refs .therein ) .hence , we can reformulate the phase transitions mentioned above as representing the dynamic transition from the disordered phase into a temporal condensate , and then the transition from the condensate again to some disordered phase .we hope that our work is a good starting point to find similar topological transitions at other markets .for instance , we also studied a medium size stock exchange , the frankfurt stock exchange .because the results obtained resemble very much those found for the warsaw stock exchange , we omitted them here .furthermore , the analytical treatment of the dynamics of such a network remains a challenge .we can summarize this work with the conclusion that it could be promising to study in details the phase transitions considered above , which can define the empirical basis for understanding of stock market evolution as a whole .we are grateful rosario n. mantegna and tiziana di matteo for helpful comments and suggestions .99 s. n. dorogovtsev , a. v. goltsev , and j. f. f. mendes , _ critical phenomena in complex networks _ , rev .phys . * 80 * 12751335 ( 2008 ) .b. bollobs , _ modern graph theory _ , springer , berlin 1998 .r. n. mantegna , _ hierarchical structure in financial market _j. * b 11 * , 193197 ( 1999 ) .g. bonanno , g. calderelli , f. lillo , s. micciche , n. vandewalle , r. n. mantegna , _ networks of equities in financial markets _j. * b 38 * , 363371 ( 1999 ) .r. n. mantegna and h. e. stanley , _ an introduction to econophysics .correlations and complexity in finance _ , cambridge univ .press , cambridge 2000 g. bonanno , f. lillo , r. n. mantegna , _ high - frequency cross - correlation in a set of stocks _ , quant .fin * 1 * , 96104 ( 2001 ) .n. vandewalle , f. brisbois , x. tordoir , _ non - random topology of stock markets _ , quant . fin . * 1 * , 372 - 374 ( 2001 ) .l. kullmann , j. kertsz , k. kaski , _ time dependent cross correlations between different stock returns : a directed network of influence _ , phys .e 66 * , 026125 ( 2002 ) .m. tumminello , t. di .matteo , t. aste , and r.n .mantegna , _ correlation based networks of equity returns sampled at different time horizons _ ,epj b 55(22 ) , 209 - 217 ( 2007 ) .m. tumminello , c. coronello , f. lillo , s. micciche , r. n. mantegna , _ spanning trees and bootstrap reliability estimation in correlation - based networks _ , int. j. bifurc . and chaos * 17 * , 23192329 ( 2007 ) .j. g. brida , w. a. risso , _ hierarchical structure of the german stock market _ ,expert systems with applications * 37 * , 38463852 ( 2010 ) .b. m. tabak , t. r. serra , d. o. cajueiro , _ topological properties of commodities networks _ , eur .j. b * 74 * , 243249 ( 2010 ) . j .-onnela , a. chakraborti , k. kaski , and kertsz , _ dynamic asset trees and portfolio analysis _ , eur .j. b * 30 * , 285288 ( 2002 ) . j .-onnela , a. chakraborti , k. kaski , and kertsz , _ dynamic asset trees and black monday _ , physica a * 324 * , 247252 ( 2003 ) . j .-onnela , a. chakraborti , k. kaski , kertsz , and a. 
kanto , _ dynamics of market correlations : taxonomy and portfolio analysis _ ,e * 68 * , 056110 - 112 ( 2003 ) .d. sornette , _ why stock markets crash _ , princeton univ . press , princeton and oxford 2003 .a. fronczak , p. fronczak , j. a. holyst , _ phase transitions in social networks _j. b * 59 * , 133139 ( 2002 ) , a. fronczak , p. fronczak , j. a. holyst , _ average path length in random networks _ ,e * 70 * , 056110 - 17 ( 2004 ) . s. drod and j. kwapie , j. speth , _ coherent patterns in nuclei and in financial markets _, aip conf .1261 , 256264 ( 2010 ) .j. kwapie , s. drod , _ physical approach to complex systems _ , phys . rep . * 515 * , 115226 ( 2012 ) .w. weidlich , g. haag , _ concepts and models of a quantitative sociology .the dynamics of interacting populations _ , springer - verlag , berlin 1983 .d. b. west , _ introduction to graph theory _, prentice hall , englewood cliffs , new york 1996 . j. b. kruskal , _ on the shortest spanning subtree of a graph and the travelling salesman problem _ , proc . am .soc . * 7 * , 4850 ( 1956 ) .r. albert , a. -l .barabsi , _ statistical mechanics of complex networks _ , review of modern physics * 74 * , 4797 ( 2001 ) .d. sornette , _ dragon - kings , black swans and the prediction of crises _ , int .j. terraspace and engineering * 2*(1 ) , 117 ( 2009 ) .t. werner , t. gubiec , r. kutner , d. sornette : _ modeling of super - extreme events : an application to the hierarchical weierstrass - mandelbrot continuous - time random walk _ ,. j. special topics , * 205 * , 2752 ( 2012 ) .s. albeverio , v. jentsch and h. kantz ( eds . ) _ extreme events in nature and society _ , springer - verlag , berlin 2006 .y. malevergne and d. sornette , _ extreme financial risks . from dependenceto risk management _ , springer - verlag , berlin 2006 .m. faloutsos , p. faloutsos , ch .faloutsos , _ on power law relationships of the internet topology _ , in sigcomm99 ,proceed . of the conf . on applications , technologies , architectures , and protocols for computer communications , * 29 * , 251262 , harvard university , science center , cambridge , massachusetts 1999 .q. chen , h. chang , r. govindan , s. jamin , s. j. shenker , w. willinger , _ the origin of power laws in internet topologies revisited _ , in proceed . of the annual joint conference of the ieee computer and communications societies 2002 , ieee computer society .d. sornette , _ critical phenomena in natural sciences .chaos , fractals , selforganization and disorder : concepts and tools _ , second eddition , _springer series in synergetics _, springer - verlag , heidelberg 2004 .r. badii , a. politi , _ complexity .hierarchical structures and scaling in physics _ , cambridge univ . press , cambridge 1997 .p. hohenberg and b. halperin , _ theory of dynamic critical phenomena _ , rev .mod . phys . *59 * , 435479 ( 1977 ) .d. j. watts , s. h. strogatz , _ collective - dynamics of `` small world '' networks _ , princeton university press , princeton , new york 1999 .l. a. n. amaral , a. scala , m. barthelemy , h. e. stanley , _ classes of small - worlds networks _ , proceed . of nas usa * 97 * ( 21 ) , 11149 - 11152 ( 2000 ) .r. cohen , s. havlin , _ scale - free networks are ultrasmall _ , phys .lett . * 90 * , 058701 - 14 ( 2003 ) .song , m. tumminello , w - x .zhou , r. n. mantegna : _ evolution of worldwide stock markets , correlation structure and correlation based graphs _ , phys .e * 84 * , 026108 - 19 , ( 2011 ) .a. v. goltsev , s. n. dorogovtsev , and j. f. f. 
mendes , _critical phenomena in networks _ ,e * 67 * , 026123 - 15 ( 2003 ) .
|
we study the crash dynamics of the warsaw stock exchange ( wse ) by using the minimal spanning tree ( mst ) networks . we find the transition of the complex network during its evolution from a ( hierarchical ) power law mst network , representing the stable state of the wse before the recent worldwide financial crash , to a superstar - like ( or superhub ) mst network of the market decorated by a hierarchy of trees ( being , perhaps , an unstable , intermediate market state ) . subsequently , we observe a transition from this complex tree to the topology of the ( hierarchical ) power law mst network decorated by several star - like trees or hubs . this structure and topology represent , perhaps , the wse after the worldwide financial crash , and could be considered to be an aftershock . our results can serve as an empirical foundation for a future theory of dynamic structural and topological phase transitions on financial markets .
|
in this paper we study the strengths and limitations of lagrangian relaxation applied to the partial cover problem .let be collection of subsets of a universal set with cost and profit , and let be a target coverage parameter .a set is a _ partial cover _ if the overall profit of elements covered by is at least .the objective is to find a minimum cost partial cover .the high level idea behind lagrangian relaxation is as follows . in an ip formulation for partial cover , the constraint enforcing that at least profit is covered is _ relaxed _ : the constraint is multiplied by a parameter and lifted to the objective function .this relaxed ip corresponds , up to a constant factor , to the prize - collecting version of the underlying covering problem in which there is no requirement on how much profit to cover but a penalty of must be paid if we leave element uncovered .an approximation algorithm for the prize - collecting version having the lagrangian multiplier preserving ( lmp ) property is used to obtain values and that are close together for which the algorithm produces solutions and respectively .these solutions are such that is inexpensive but unfeasible ( covering less than profit ) , and is feasible ( covering at least profit ) but potentially very expensive . finally , these two solutions are combined to obtain a cover that is both inexpensive and feasible .broadly speaking there are two ways to combine and .one option is to treat the approximation algorithm for the prize - collecting version as a black box , only making use of the lmp property in the analysis .another option is to focus on a particular lmp algorithm and exploit additional structure that it may offer .not surprisingly , the latter approach has yielded better approximation guarantees .for example , for -median compare the 6-approximation of to the 4-approximation of ; for -mst compare the 5-factor to the 3-factor approximation due to .the results in this paper support the common belief regarding the inherent weakness of the black - box approach .first , we show a lower bound on the approximation factor achievable for partial cover in general using lagrangian relaxation and the black - box approach that matches the recent upper bound of . to overcome this obstacle, we concentrate on kolen s algorithm for prize - collecting totally balanced cover . by carefully analyzing the algorithm s inner workings we identify structural similarities between and , which we later exploit when combining the two solutions . as a resultwe derive an almost tight characterization of the integrality gap of the standard linear relaxation for partial totally balanced cover .this in turn implies improved approximation algorithms for a number of related problems .much work has been done on covering problems because of both their simple and elegant formulation , and their pervasiveness in different application areas . in its most general formthe problem , also known as set cover , can not be approximated within unless . due to this hardness ,easier , special cases have been studied . 
a general class of covering problems that can be solved efficientlyare those whose element - set incidence matrix is balanced .a matrix is _ balanced _ if it does not contain a square submatrix of odd order with row and column sums equal to 2 .these matrices were introduced by berge who showed that if is balanced then the polyhedron is integral .a matrix is _ totally balanced _if it does not contain a square submatrix with row and column sums equal to 2 and no identical columns .kolen gave a simple primal - dual algorithm that solves optimally the covering problem defined by a totally balanced matrix .a matrix is _ totally unimodular _ if every square submatrix has determinant 0 or .although totally balanced and totally unimodular matrices are subclasses of balanced matrices , the two classes are neither disjoint nor one is included in the other . beyond this point, even minor generalizations can make the covering problem hard .for example , consider the _vertex cover _ problem : given a graph we are to choose a minimum size subset of vertices such that every edge is incident on at least one of the chosen vertices . if is bipartite , the element - set incidence matrix for the problem is totally unimodular ; however , if is a general graph the problem becomes np - hard .numerous approximation algorithms have been developed for vertex cover .the best known approximation factor for general graphs is ; yet , after 25 years of study , the best constant factor approximation for vertex cover remains 2 .this lack of progress has led researchers to seek generalizations of vertex cover that can still be approximated within twice of optimum .one such generalization is the _ multicut _ problem on trees : given a tree and a collection of pairs of vertices , a cover is formed by a set of edges whose removal separates all pairs .the problem was first studied by who gave an elegant primal - dual 2-approximation .a notable shortcoming of the standard set cover formulation is that certain hard - to - cover elements , also known as _ outliers _ , can render the optimal solution very expensive .motivated by the presence of outliers , the unit - profit partial version calls for a collection of sets covering not all but a specified number of elements .partial multicut , a.k.a .-multicut , was recently studied independently by and by , who gave a approximation algorithm .this scheme was generalized by who showed how to design a approximation for any covering problem using lagrangian relaxation and an -lmp approximation as a black box .( their algorithm runs in time polynomial on and the running time of the -lmp approximation . )section [ section : lowerbound ] shows that for partial cover in general no algorithm that uses lagrangian relaxation and an -lmp approximation as a black box can yield an approximation factor better than . in section [ section : p - tbc ]we give an almost tight characterization of the integrality gap of the standard lp for partial totally balanced cover , settling a question posed by .our approach is based on lagrangian relaxation and kolen s algorithm .we prove that for any , where and are the costs of the optimal integral and fractional solutions respectively and is the cost of the most expensive set in the instance . the trade - off between additive and multiplicative error is not an artifact of our analysis or a shortcoming of our approach . on the contrary, this is precisely how the integrality gap behaves .more specifically , we show a family of instances where . 
in other words, there is an unbounded additive gap in terms of but as it grows the multiplicative gap narrows exponentially fast .finally , we show how the above result can be applied , borrowing ideas , to get a approximation or a quasi - polynomial time -approximation for covering problems that can be expressed with a suitable combination of totally - balanced matrices .this translates into improved approximations for a number of problems : a approximation for the partial multicut on trees , a approximation for partial path hitting on trees , a 2-approximation for partial rectangle stabbing , and a approximation for partial set - cover with -blocks .in addition , the can be removed from the first two approximation guarantees if we allow quasi - polynomial time .it is worth noting that prior to this work , the best approximation ratio for all these problems could be achieved with the framework of . in each caseour results improve the approximation ratio by a multiplicative factor .let be a collection of subsets of a universal set .each set has a cost specified by , and each element has a profit specified by .given a target coverage , the objective of the partial cover problem is to find a minimum cost solution such that , where the notation denotes the overall profit of elements covered by .the problem is captured by the ip below .matrix is an element - set incidence matrix , that is , if and only if element belongs to set ; variable indicates whether set is chosen in the solution ; variable indicates whether element is left uncovered .lagrangian relaxation is used to get rid of the constraint bounding the profit of uncovered elements to be at most .the constraint is multiplied by the parameter , called lagrange multiplier , and is lifted to the objective function .the resulting ip corresponds , up to the constant factor in the objective function , to the prize - collecting version of the covering problem , where the penalty for leaving element uncovered is . { { a } { x } } + { { i } { r } } \geq & { 1 } \hspace{0.5 cm } \\[0.2 cm ] { { p } \cdot { r } } \leq & p(u)-p \\[0.2 cm ] r_i , x_j \in & \ { 0 , 1\ } \\[0.4 cm ] \end{array}\end{gathered}\ ] ] ( 0,1.75)(2.5,1.75 ) ( 2.5,0 ) ( 1.25,-.4 ) \hspace{4em}{{a } { x } } + { { i } { r } } \geq & { 1 } \\[0.2 cm ] r_i , x_j \in & \{0,1\ } \end{array}\end{gathered}\ ] ] let be the cost of an optimal partial cover and be the cost of an optimal prize - collecting cover for a given .let be an -approximation for the prize - collecting variant of the problem .algorithm is said to have the lagrangian multiplier preserving ( lmp ) property if it produces a solution such that note that .thus , therefore , if we could find a value of such that covers exactly profit then is -approximate .however , if , the solution is not feasible , and if , equation does not offer any guarantee on the cost of . 
unfortunately , there are cases where no value of produces a solution covering exactly profit .thus , the idea is to use binary search to find two values and that are close together and are such that covers less , and covers more than profit .the two solutions are then combined in some fashion to produce a feasible cover .a common way to combine the two solutions returned by the -lmp is to treat the algorithm as a black box , solely relaying on the lmp property in the analysis .more formally , an algorithm for partial cover that uses lagrangian relaxation and an -lmp approximation as a black box is as follows .first , we are allowed to run with as many different values of as desired ; then , the solutions thus found are combined to produce a feasible partial cover .no computational restriction is placed on the second step , except that only sets returned by may be used .[ theorem : lowerbound ] in general , the partial cover problem can not be approximated better than using lagrangian relaxation and an -lmp algorithm as a black box .# 1 ( -1,0)(14,2 ) ( 2,1 ) ( 5,1 ) ( 11,1 ) ( 13,1 ) ( 0,1 ) ( -2.5,1) # 1 ( 0,0)(2,13 ) ( 1,14.5) 5 cm ( -3,-3)(14,15 ) ( 0,9 ) ( 0,6 ) ( 0,0 ) ( 1,-1 ) ( 4,-1 ) ( 10,-1 ) ( 8,13.5) ( -2.6,4) let and be sets as depicted on the right . for each and the intersection consists of a cluster of elements .there are clusters .set is made up of clusters ; set is made up of clusters and two additional elements ( the leftmost and rightmost elements in the picture . )thus and .in addition , there are sets , which are not shown in the picture. set contains one element from each cluster and the leftmost element of .thus .the cost of is , the cost of is , and the cost of is .every element has unit profit and the target coverage is .it is not hard to see that is an optimal partial cover with a cost of 1 .the -lmp approximation algorithm we use has the unfortunate property that it never returns sets from the optimal solution .[ lemma : naughty - lmp ] there exists an -lmp approximation that for the above instance and any value of outputs either or or .hence , if we use as a black box we must build a partial cover with the sets and .note that in order to cover elements either all -sets , or all -sets must be used . in the first case -sets are needed to attain feasibility , and the solution has cost ; in the second case the solution is feasible but again has cost .theorem [ theorem : lowerbound ] follows .one assumption usually made in the literature is that , for some constant , or more generally an additive error in terms of is allowed .this does not help in our construction as can be made arbitrarily small by increasing .admittedly , our lower bound example belongs to a specific class of covering problem ( every element belongs to at most three sets ) and although the example can be embedded into a partial totally unimodular covering problem , it is not clear how to embed the example into other classes .nevertheless , the upper bound of koneman et el . makes no assumption about the underlying problem , only using the lmp property in the analysis .it was entirely conceivable that the factor could be improved using a different merging strategy theorem [ theorem : lowerbound ] precludes this possibility .in order to overcome the lower bound of theorem [ theorem : lowerbound ] , one must concentrate on a specific class of covering problems or make additional assumptions about the -lmp algorithm . 
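before moving to the totally balanced case it is worth recording the generic black-box scheme in code. the sketch below only produces the pair of bracketing solutions discussed above; how they are combined, and how much is lost in doing so, is precisely what theorem [ theorem : lowerbound ] and the next section are about. the representation of sets as python sets of element identifiers, the initial bracketing multipliers and all names are assumptions of the sketch.

def lagrangian_pair(lmp_algorithm, profit, target, lam_lo, lam_hi, eps=1e-6):
    # binary search for two close lagrange multipliers whose prize-collecting
    # solutions bracket the coverage target: s_lo covers less than the target
    # profit, s_hi covers at least the target.  lmp_algorithm(lam) is the
    # black-box alpha-lmp approximation run with penalties lam * profit[i].
    def covered(solution):
        elements = set().union(*solution) if solution else set()
        return sum(profit[i] for i in elements)

    s_lo, s_hi = lmp_algorithm(lam_lo), lmp_algorithm(lam_hi)
    assert covered(s_lo) < target <= covered(s_hi), 'multipliers do not bracket the target'
    while lam_hi - lam_lo > eps:
        mid = 0.5 * (lam_lo + lam_hi)
        s_mid = lmp_algorithm(mid)
        if covered(s_mid) >= target:
            lam_hi, s_hi = mid, s_mid
        else:
            lam_lo, s_lo = mid, s_mid
    return (lam_lo, s_lo), (lam_hi, s_hi)

the search stops once the two multipliers are within eps of each other, which is all the subsequent merging analysis requires; if some intermediate solution happens to cover exactly the target profit it is kept in the feasible slot and one is already done.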
in this sectionwe focus on covering problems whose ip matrix is totally balanced .more specifically , we study the integrality gap of the standard linear relaxation for partial totally balanced cover ( p - tbc ) shown below . [theorem : lp - gap ] let and be the cost of the optimal integral and fractional solutions of an instance of p - tbc .then for any .furthermore , for any large enough the exists an instance where . \hspace{2ex } { { a } { x } } + { { i } { r } } \geq & { 1 } \\[0.2 cm ] { { p } \cdot { r } } \leq & p(u ) - p \\[0.2 cm ] r_i , x_e \geq 0 \end{array}\end{gathered}\ ] ] ( 0,1.75)(2.5,1.75 ) ( 2.5,0 ) ( 1.25,-.4)lp duality \hspace{5ex}{{a^t } { y } } & \leq { c } \hspace{0.5 cm } \\[0.2 cm ] { y } & \leq \lambda { p } \\[0.2 cm ] y_i , \lambda & \geq 0 \\[0.4 cm ] \end{array}\end{gathered}\ ] ] our approach is based on lagrangian relaxation and kolen s algorithm for prize - collecting totally balanced cover ( pc - tbc ) .the latter exploits the fact that a totally balanced matrix can be put into greedy standard form by permuting the order of its rows and columns ; in fact , the converse is also true .a matrix is in standard greedy form if it does not contain as an induced submatrix \label{eq : forbidden - matrix}\ ] ] there are polynomial time algorithms that can transform a totally balanced matrix into greedy standard form by shuffling the rows and columns of .since this transformation does not affect the underlying covering problem , we assume that is given in standard greedy form . for the sake of completenesswe describe kolen s primal - dual algorithm for pc - tbc .the algorithm finds a dual solution and a primal solution , which is then pruned in a reverse - delete step to obtain the final solution .the linear and dual relaxations for pc - tbc appear below . \hspace{2ex } { { a } { x } } + { { i } { r } } \geq & { 1 } \\[0.2 cm ] r_i , x_e \geq 0 \end{array}\end{gathered}\ ] ] ( 0,1.75)(2.5,1.75 ) ( 2.5,0 ) ( 1.25,-.4)lp duality \hspace{3ex}{{a^t } { y } } & \leq { c } \hspace{0.5 cm } \\[0.2 cm ] { y } & \leq \lambda { p } \\[0.2 cm ] y_i & \geq 0 \\[0.4 cm ] \end{array}\end{gathered}\ ] ] the residual cost of the set w.r.t . is defined as .the algorithm starts from the trivial dual solution , and processes the elements in increasing column order of .let the index of the current element .its corresponding dual variable , , is increased until either the residual cost of some set containing equals 0 ( we say set becomes tight ) , or equals ( lines 3 - 5 ) .let be the set of tight sets after the dual update is completed .as it stands the cover may be too expensive to be accounted for using the lower bound provided by because a single element may belong to multiple sets in .the key insight is that some of the sets in are redundant and can be pruned .given sets we say that _ dominates _ in if and there exists an item such that and belongs to and , that is , .the reverse - delete step iteratively identifies the largest index in , adds to , and removes and all the sets it dominates .this is repeated until no set is left in ( lines 811 ) .notice that all sets are tight , thus we can pay for set by charging the dual variables of items that belong to . because of the reverse - delete step if then belongs to at most one set in ; thus in paying for we charge covered items at most once . using the fact is in standard greedy form ,it can be shown that if was left uncovered then we can afford its penalty , i.e. 
, .the solution is optimal for pc - tbc since if we could find a value of such that kolen returns a solution covering _ exactly _ profit , we are done since from it follows that notice that is a feasible for the dual relaxation of p - tbc and its cost is precisely the right hand side of .therefore for this instance ip = dl = lp and theorem [ theorem : lp - gap ] follows .unfortunately , there are cases where no such value of exists .nonetheless , we can always find a _ threshold value _ such that for any infinitesimally small , and produce solutions covering less and more than profit respectively .a threshold value can be found using megiddo s parametric search by making calls to the procedure kolen .let ( ) be the dual solution and ( ) the set of tight sets when kolen is run on ( ) .without loss of generality assume covers more than profit .( the case where covers less than profit is symmetrical : we work with and instead of and . )our plan to prove theorem [ theorem : lp - gap ] is to devise an algorithm to merge and in order to obtain a cheap solution covering at least profit . before describing the algorithm we need to establish some important properties regarding these two solutions and their corresponding dual solutions . for any ,the value of is a linear function of for all .this follows from the fact that is infinitesimally small .furthermore , the constant term in this linear function is .[ lemma : tiny - difference ] for each there exists , independent of , such that . by induction on the number of iteration of the dual update step of kolen , using the fact that the same property holds for the residual cost of the sets .a useful corollary of lemma [ lemma : tiny - difference ] is that , since if the residual cost of a set is non - zero in it must necessarily be non - zero in .the other way around may not hold . at the heart of our approachis the notion of a merger graph .the vertex set of is made up of sets from the two solutions , i.e. , .the edges of are directed and given by this graph has a lot of structure that can be exploited when merging the solutions . the merger graph of and is a forest of out - branchings .first note that is acyclic , since if then necessarily .thus , it is enough to show that the in - degree of every is at most one .suppose otherwise , that is , there exist such that .assume that and ( the remaining cases are symmetrical ) .4.5 cm by definition , we know that and that there exists ( ) that belongs to and ( ) such that ( ) . since is in standard greedy form we can infer that belongs to if , or belongs to if : the diagram on the right shows how , using the fact that does not contain as an induced submatrix , we can infer that the boxed entries must be 1 .in either case we get that dominates in , which contradicts the fact that both belong to .the procedure merge starts from the unfeasible solution and guided by the merger graph , it modifies step by step until feasibility is attained .the operation used to update is to take the symmetric difference of and a subtree of rooted at a vertex , which we denote by . for each root of an out - branchings of we set , until .at this point we return the solution produced by increase .notice that after setting in line 5 , the solution `` looks like '' within . indeed , if all roots are processed then .therefore , at some point we are bound to have and to make the call increase in line 6 . 
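the description of kolen's procedure above translates almost line by line into code. the sketch below is only a plausible reading: the incidence matrix is taken with rows indexed by elements and columns by sets, both assumed to be already permuted into greedy standard form, penalty[i] stands for lambda times the profit of element i, and, since part of the definition of domination is garbled in the text, the test used here (the two sets share an element carrying strictly positive dual) is an assumption chosen so that each positively charged element ends up paying for at most one chosen set, as the charging argument above requires. all names are ours.

def kolen(A, cost, penalty):
    # primal-dual sketch for prize-collecting totally balanced cover
    n_elems, n_sets = len(A), len(A[0])
    y = [0.0] * n_elems
    residual = list(cost)          # residual cost of every set
    tight = []                     # tight sets, in the order they appear

    # dual update: raise y_i until a set containing i becomes tight
    # or y_i reaches its penalty
    for i in range(n_elems):
        containing = [j for j in range(n_sets) if A[i][j]]
        delta = min([penalty[i]] + [residual[j] for j in containing])
        y[i] = delta
        for j in containing:
            residual[j] -= delta
            if residual[j] <= 1e-12 and j not in tight:
                tight.append(j)

    # reverse delete: repeatedly keep the tight set of largest index
    # and drop every set it dominates (assumed test: shared element
    # with strictly positive dual value)
    solution, remaining = [], set(tight)
    while remaining:
        j = max(remaining)
        solution.append(j)
        dominated = {k for k in remaining if k != j and
                     any(A[i][j] and A[i][k] and y[i] > 0 for i in range(n_elems))}
        remaining -= dominated | {j}
    return solution, y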
before describing increasewe need to define a few terms .absolute benefit _ of set , which we denote by , be the profit of elements uniquely covered by set , that is , let .note that if , the removal of decreases the profit covered by by at least ; on the other hand , if , its addition increases the profit covered by at least .this notion of benefit can be extended to subtrees , we call this quantity the _ relative benefit _ of with respect to .it shows how the profit of uniquely covered elements changes when we take .note that can positive or negative .everything is in place to explain increase .the algorithm assumes the input solution is unfeasible but can be made feasible by adding some sets in ; more precisely , we assume and .if adding to makes the solution feasible then return . if there exists a child of that can be used to propagate the call down the tree then do thatotherwise , _ split _ the subtree : add to and process the children of , setting until becomes feasible ( lines 6 - 9 ) . at this point and . if then call increase else call decrease and let be the cover returned by the recursive call ( lines 10 - 12 ) . finally , return the cover with minimum cost between and .the twin procedure decrease is essentially symmetrical : initially the input is feasible but can be made unfeasible by removing some sets in ; more precisely and . at a very high level ,the intuition behind the increase / decrease scheme is as follows . in each call one of three thingsmust occur : a feasible cover with a small coverage excess is found ( lines 2 - 3 ) , or the call is propagated down the tree at no cost ( lines 4 - 5 ) , or a subtree is split ( lines 6 - 9 ) . in this case , the cost can not be accounted for , but the offset in coverage is reduced at least by a factor of 3 .if the increase / decrease algorithms split many subtrees ( incurring a high extra cost ) then the offset in coverage must have been very high at the beginning , which means the cost of the dual solution is high and so the splitting cost can be charged to it . in order to flesh out these ideas into a formal proof we need to establish some crucial properties of the merger graph and the algorithms .[ lemma : white - coverage ] if then there exist and such that either or or . [ lemma : alternating ] let be the input of increase / decrease .then at the beginning of each call we have or for all . furthermore , if and then or must have been split in a previous call . [ lemma : precondition ] let be the input of increase / decrease .for increase we always have , and for decrease we have . recall that is also a feasible solution for the dual relaxation of p - tbc and its costis given by .the following lemma proves the upper bound of theorem [ theorem : lp - gap ] .[ lemma : merge ] suppose merge outputs .then for all .let us digress for a moment for the sake of exposition .suppose that in line 6 of merge , instead of calling increase , we return .notice every arc in the merger graph has exactly one endpoint in . by lemma [ lemma : white - coverage ], any element not covered by must have .furthermore , if then there exists at most one set in that covers ; if two such sets exist , one must dominate the other in and , which is not possible . 
hence , in the fortunate case that , the lemma would follow .of course , this need not happen and this is why we make the call to increase instead of returning .let be the root of the subtree split by increase / decrease .also let the solution right before splitting , and and be the unfeasible / feasible pair of solutions after the splitting , which are used as parameters in the recursive calls ( lines 11 - 12 ) .suppose lines 7 - 9 processed only one child of , this can only happen in increase , in which case but .the same argument used to derive gives us the cost of the missing sets is , thus if the lemma follows . a similar boundcan be derived if the recursive call ends in line 3 before splitting the subtree .finally , the last case to consider is when lines 7 - 9 process two or more children for all . in this case which implies .also , since all elements not covered by must be such that .hence , as before adding the cost of we get the lemma .the results in this paper suggest that lagrangian relaxation is a powerful technique for designing approximation algorithms for partial covering problems , even though the black - box approach may not be able to fully realize its potential. it would be interesting to extend this study on the strengths and limitation of lagrangian relaxation to other problems .the obvious candidate is the -median problem . a -approximation for -median using as a black box an -lmp approximation for facility location .later , gave a 2-lmp approximation for facility location .is the algorithm in optimal in the sense of theorem [ theorem : lowerbound ] ? can the algorithm in be turned into a 2-approximation for -median by exploiting structural similarities when combining the two solutions ?* acknowledgments : * i am indebted to danny segev for sharing an early draft of and for pointing out kolen s work .also thanks to mohit singh and arie tamir for helpful discussions and to elena zotenko for suggesting deriving the result of section [ section : lowerbound ] .
|
lagrangian relaxation has been used extensively in the design of approximation algorithms . this paper studies its strengths and limitations when applied to partial cover . we show that for partial cover in general no algorithm that uses lagrangian relaxation and a lagrangian multiplier preserving ( lmp ) -approximation as a black box can yield an approximation factor better than . this matches the upper bound given by könemann _ et al . _ ( _ esa 2006 _ , pages 468 - 479 ) . faced with this limitation we study a specific , yet broad class of covering problems : partial totally balanced cover . by carefully analyzing the inner workings of the lmp algorithm we are able to give an almost tight characterization of the integrality gap of the standard linear relaxation of the problem . as a consequence we obtain improved approximations for the partial version of multicut and path hitting on trees , rectangle stabbing , and set cover with -blocks . julián mestre
|
scientists across several disciplines have recently become interested in the possibility that quantum - mechanical phenomena may play a role in the energy - transfer processes of photosynthetic organisms and synthetic light - absorbing materials .this interest was generated primarily by the observation of oscillations in cross peaks present in two - dimensional electronic spectra ( 2d es ) .+ oscillations during the waiting time in a 2d es experiment represent the phase evolution of coherent superposition states generated during the experiment ; more specifically , these oscillations are directly related to the phase evolution of off - diagonal elements in the density matrix , known as _ coherences_. coherences can be vibrational , electronic , vibronic , etc . in character . in the simple case of a coupled heterodimer , coherent phase evolution of eigenstate superpositionsis indicative of what can be thought of as _ coherent energy transfer _ between the individual systems ; a basis transformation from the energy eigenbasis of the coupled system into the subsystem basis reveals oscillating populations between the subsystems .these populations oscillate as long as the peaks in a 2d spectrum do .in particular , electronic coherences have been the focus of recent attention with respect to photosynthetic systems .such systems are far more complicated than a coupled - heterodimer , however , it is tantalizing to ask whether oscillations observed in 2d es are also indicative of _ coherent energy transfer _ in photosynthetic systems .+ in contrast to the above discussion , conventional energy transfer between molecules is considered to be an _incoherent _ process .there are many senses in which the term `` incoherent '' applies in this context and , therefore , it may not be immediately obvious how to interpret the notion of _ coherent _ energy transfer . before we motivate our discussion of 2d es , we elaborate on the context in which the terminology is used .+ we begin by considering a collection of atoms that are isolated from the environment and separated by distances smaller than the wavelength of visible light . in quantum electrodynamics , atoms couple to the vacuum modes of the universe , and an initial excitation on one or more of these atoms will result in energy transfer between them .the energy transfer is mediated via virtual photons , where the term `` virtual '' arises because such photons can not be observed .such a mechanism is often considered to be radiationless , however , it was shown by andrews _ et al . _ that it is simply the short - range limit of a unified theory of resonant energy transfer which includes emission of a photon by one atom followed by absorption by another .the rate of energy transfer is highest when the atoms are on resonance and drops off very quickly off - resonance .this process is equivalent to frster s phenomenological treatment of the radiationless mechanism for energy transfer which is now known as frster resonance energy transfer ( fret ) .+ interactions with an environment can lead to fluctuations in energy levels .this not only reduces the rate of energy transfer between resonant atoms , but also facilitates energy transfer between off - resonant atoms by sometimes bringing them into resonance . 
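to make the earlier remark concrete that a coupled heterodimer shows oscillating subsystem populations when viewed in the site basis , here is a minimal python sketch that propagates a single excitation through a two - site hamiltonian . the site energies , the coupling and the time grid are illustrative numbers chosen only for this sketch , not values from any system discussed here .

import numpy as np

# two-site ("heterodimer") hamiltonian in the site basis; numbers are illustrative
hbar = 1.0
e_a, e_b, j = 1.00, 1.20, 0.05
h = np.array([[e_a, j],
              [j,  e_b]])

evals, evecs = np.linalg.eigh(h)       # eigen (exciton) basis of the coupled system
psi0 = np.array([1.0, 0.0])            # excitation initially localized on subsystem a

for t in np.linspace(0.0, 60.0, 7):
    c = evecs.conj().T @ psi0                                 # expand in the eigenbasis
    psi_t = evecs @ (np.exp(-1j * evals * t / hbar) * c)      # propagate, transform back to sites
    print(f"t = {t:5.1f}   population on site a = {abs(psi_t[0])**2:.3f}")

the printed site population oscillates at the splitting between the two eigenstates , which is exactly the beat frequency that reappears later as waiting - time oscillations in the 2d spectra .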
+ for molecular systems , a number of additional physical processes come into play .after excitation , a molecule can lose energy to its surroundings , transitioning from a higher - energy vibrational state to a lower - energy one while still remaining in the electronic excited state .after excitation , excited energy levels can also shift due to reorganization of atomic nuclei .these processes result in a shift in the emission energy with respect to the excitation energy , collectively known as the _ stokes shift_. the combined effect of these processes is that fret between molecules tends to be unidirectional : energy can transfer from high energy molecules ( known as _ donors _ ) to low energy molecules ( known as _ acceptors _ ) due to high overlap between fluorescence spectra of donors and absorption spectra of acceptors ; however , energy tends not to transfer from acceptors to donors due to the low overlap between fluorescence spectra of acceptors and absorption spectra of donors .+ despite all these nuances , the energy - transfer mechanism is always the same : energy transfer mediated via photons . whether this is thought of as a coherent or incoherent process is largely a consequence of how terminology evolved within different disciplines .we now turn our attention to discussing such possible interpretations .+ the most obvious way in which this type of energy transfer could be considered incoherent is due to the lack of coherence between the radiation field that excites the donor and the radiation field that is emitted by the acceptor .this can be seen even with a semiclassical treatment .if we consider the field quantum - mechanically , we can identify other possible definitions . to do so, it will be instructive to review the distinction between _ state coherence _ and _ process coherence _ in quantum mechanics .this was done in the introductory section of a recent paper by kassal _ , also in the context of energy transfer in photosynthesis . + a quantum state is described by a density matrix .off - diagonal elements of contain phase information and are therefore usually called `` coherences '' .state coherences are basis - dependent : a state diagonal in one orthonormal basis will not be diagonal in any other . a basis - independent concept of state coherence can be defined in terms of the state purity ] represents the heaviside step function .the transition dipoles are defined to be where , and are time delays between pulses , as shown in fig .[ fig : signal_generation ] .( [ eq : some ] ) is sufficient for calculating the total generated four - wave mixing signal according to the expression for the electric field given in eq .[ eq : esigproto ] . in the next sections we make the rotating wave approximation andshow how one can use phase matching conservation of energy and momentum to select different parts of the signal .one of the advantages of 2d es over conventional transient - absorption spectroscopy is the ability to select different parts of the total signal by controlling the properties of the incoming pulses , most often the relative time ordering .this is afforded by the noncollinear beam geometry which makes use of phase matching .+ given that the carrier frequencies of the pulses are comparable to the transition energies in the material system , we can invoke the rotating wave approximation ( rwa ) , which eliminates the quickly oscillating frequency terms under the time integral . 
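as a small numerical aside on the rotating wave approximation invoked above , the sketch below integrates a near - resonant ( co - rotating ) and an off - resonant ( counter - rotating ) phase factor against the same gaussian pulse envelope . the frequencies and the pulse duration are arbitrary illustrative values .

import numpy as np

# co-rotating term oscillates at (w - w0), counter-rotating term at (w + w0);
# all numbers below are illustrative
w0 = 2.0 * np.pi * 10.0      # material transition frequency
w  = 2.0 * np.pi * 10.05     # pulse carrier frequency, close to resonance
sigma = 1.0                  # pulse duration

t = np.linspace(-8 * sigma, 8 * sigma, 160001)
dt = t[1] - t[0]
envelope = np.exp(-t**2 / (2.0 * sigma**2))

co      = np.sum(envelope * np.exp(-1j * (w - w0) * t)) * dt
counter = np.sum(envelope * np.exp(-1j * (w + w0) * t)) * dt

print(abs(co), abs(counter))   # the counter-rotating integral is negligible

keeping only the slowly oscillating products , as the rwa does , therefore discards terms whose time integrals are many orders of magnitude smaller than the resonant ones .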
+ we begin by decomposing the transition dipole moment operator into positive and negative frequency parts . only products of the form survive the time integrals performed in eq .( [ eq : aneq ] ) .this amounts to only keeping products in eq .( [ eq : some ] ) . making the rwa , we rewrite the nonlinear polarization by expanding the nested commutator , to give where summations over , and are summations over labels and , e.g. is one term in eq .( [ eq : resp ] ) .the terms in the summation over are [ eq : respj ] \\ f_{2}^{{p}_{1}{p}_{2}{p}_{3}}(\tau_{1},\tau_{2},\tau_{3})={}&-\mathcal{e}^{{p}_{1}}_{1}\mathcal{e}^{{p}_{2}}_{2}\mathcal{e}^{{p}_{3}}_{3}\mathrm{tr}[\hat{\mu}_{4}\hat{\mu}_{3}^{{p}_{3}}\hat{\mu}_{1}^{{p}_{1}}\hat{\rho}_{0}\hat{\mu}_{2}^{{p}_{2}}]\\ f_{3}^{{p}_{1}{p}_{2}{p}_{3}}(\tau_{1},\tau_{2},\tau_{3})={}&-\mathcal{e}^{{p}_{1}}_{1}\mathcal{e}^{{p}_{2}}_{2}\mathcal{e}^{{p}_{3}}_{3}\mathrm{tr}[\hat{\mu}_{4}\hat{\mu}_{2}^{{p}_{2}}\hat{\mu}_{1}^{{p}_{1}}\hat{\rho}_{0}\hat{\mu}_{3}^{{p}_{3}}]\\ f_{4}^{{p}_{1}{p}_{2}{p}_{3}}(\tau_{1},\tau_{2},\tau_{3})={}&\mathcal{e}^{{p}_{1}}_{1}\mathcal{e}^{{p}_{2}}_{2}\mathcal{e}^{{p}_{3}}_{3}\mathrm{tr}[\hat{\mu}_{4}\hat{\mu}_{1}^{{p}_{1}}\hat{\rho}_{0}\hat{\mu}_{2}^{{p}_{2}}\hat{\mu}_{3}^{{p}_{3}}]\,.\end{aligned}\ ] ] the electric field will be out - of - phase with the non - linear polarization of the sample , the real electric field underlying an optical pulse is twice the real part of its analytic signal : $ ] . because the four - wave mixing signal leading to the 2d spectrum is almost always detected using a spectrometer , only the positive frequency components i.e . the analytic signal of the field are required .we can determine the analytic signal by taking the single - sided inverse of the fourier transform of the field if we assume a box beam geometry and are only interested in the signal emitted in the direction specified in fig .[ fig : signal_generation ] , then the phase matching condition that must be satisfied is , where the labels are given in fig .[ fig : signal_generation ] .this eliminates the terms and therefore , the terms and from eq .( [ eq : resp ] ) .furthermore , if the initial state of the system is the ground state , only the terms with ( ) acting from the left ( right ) of survive , i.e. .+ by experimentally controlling the order of the incoming pulses , we can measure different parts of the total signal in eq .( [ eq : resp ] ) . recall that we imposed strict time ordering after eq .( [ eq : imp ] ) .if for example pulse arrives first , e.g. , the only terms to survive the phase - matching condition are [ eq : respr ] \\ f_{2}^{-++}(\tau_{1},\tau_{2},\tau_{3})^*={}&-\mathcal{e}^{-}_{1}\mathcal{e}^{+}_{2}\mathcal{e}^{+}_{3}\mathrm{tr}[\hat{\mu}_{2}^+\hat{\rho}_{0}\hat{\mu}_{1}^-\hat{\mu}_{3}^+\hat{\mu}_{4}]\\ f_{3}^{-++}(\tau_{1},\tau_{2},\tau_{3})^*={}&-\mathcal{e}^{-}_{1}\mathcal{e}^{+}_{2}\mathcal{e}^{+}_{3}\mathrm{tr}[\hat{\mu}_{3}^+\hat{\rho}_{0}\hat{\mu}_{1}^-\hat{\mu}_{2}^+\hat{\mu}_{4}]\\ f_{4}^{-++}(\tau_{1},\tau_{2},\tau_{3})^*={}&\mathcal{e}^{-}_{1}\mathcal{e}^{+}_{2}\mathcal{e}^{+}_{3}\mathrm{tr}[\hat{\mu}_{3}^+\hat{\mu}_{2}^+\hat{\rho}_{0}\hat{\mu}_{1}^-\hat{\mu}_{4}]\,,\end{aligned}\ ] ] where here , the analytic signal is given by only the conjugate terms in eq ( [ eq : resp ] ) .the above pulse ordering can lead to a photon echo and is therefore known as a _ rephasing _ experiment .if , on the other hand , pulse arrives second , e.g. 
, the only terms to survive the phase - matching condition are [ eq : respnr ] \\ f_{2}^{+-+}(\tau_{1},\tau_{2},\tau_{3})={}&-\mathcal{e}^{+}_{1}\mathcal{e}^{-}_{2}\mathcal{e}^{+}_{3}\mathrm{tr}[\hat{\mu}_{4}\hat{\mu}_{3}^+\hat{\mu}_{1}^+\hat{\rho}_{0}\hat{\mu}_{2}^-]\\ f_{4}^{+-+}(\tau_{1},\tau_{2},\tau_{3})={}&\mathcal{e}^{+}_{1}\mathcal{e}^{-}_{2}\mathcal{e}^{+}_{3}\mathrm{tr}[\hat{\mu}_{4}\hat{\mu}_{1}^+\hat{\rho}_{0}\hat{\mu}_{2}^-\hat{\mu}_{3}^+]\,,\end{aligned}\ ] ] where here , the analytic signal is given by only the nonconjugate terms in eq ( [ eq : resp ] ) .note that there is no term .the explanation for why this term is zero is as follows : the first and third pulses are and , which carry the same sign on their wave vectors and . we know that the sign on pulse is positive , therefore the sign on pulse must also be positivedue to the rwa , a positive wavevector on the third pulse corresponds to a positive frequency component of the transition dipole moment since in this term the third transition dipole moment acts on the ground state from the right , the entire term goes to zero .+ the above pulse ordering does not lead to a photon echo .instead it leads to a free polarization decay ( analogous to the well - known free induction decay ( fid ) in nmr spectroscopy ) and therefore it is known as a _ nonrephasing _ experiment .the last possibility is if pulse arrives third , e.g. . in this casethe only terms to survive the phase matching condition are [ eq : resp2q ] \\ f_{3}^{++-}(\tau_{1},\tau_{2},\tau_{3})={}&-\mathcal{e}^{+}_{1}\mathcal{e}^{+}_{2}\mathcal{e}^{-}_{3}\mathrm{tr}[\hat{\mu}_{4}\hat{\mu}_{2}^+\hat{\mu}_{1}^+\hat{\rho}_{0}\hat{\mu}_{3}^-]\,.\end{aligned}\ ] ] where here , the analytic signal is also given by only the nonconjugate terms in eq ( [ eq : resp ] ) .the above pulse ordering is known as a _ two - quantum _ experiment , and it is used far less often than the rephasing and nonrephasing pulse orderings . a similar line of reasoning to above can be used to see why terms and go to zero .non - unitary processes such as dephasing and population relaxation can be included phenomenologically ; this is typically done in liouville space using the green s function formalism . here , we present an analogous treatment in hilbert space which captures the essential effects of dephasing and population relaxation . 
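before moving on to the dephasing treatment , the following bookkeeping sketch restates the pulse - ordering logic above in code : the detected direction fixes a minus sign on pulse a , and the experiment type is set by when that conjugated pulse arrives . the function name and labels are purely illustrative .

# signal detected along k_s = -k_a + k_b + k_c, so pulse "a" carries the minus sign
def experiment_type(arrival_order):
    """arrival_order: pulse labels from first to last, e.g. ('a', 'b', 'c')."""
    position = arrival_order.index('a')        # when does the conjugated (-k) pulse arrive?
    return {0: 'rephasing (photon echo)',
            1: 'nonrephasing (free polarization decay)',
            2: 'two-quantum'}[position]

for order in [('a', 'b', 'c'), ('b', 'a', 'c'), ('b', 'c', 'a')]:
    print(order, '->', experiment_type(order))

this is only bookkeeping , but it mirrors why the three time orderings select the three different sets of surviving terms listed above .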
+ we begin by rewriting eqs ( [ eq : respj ] ) using the definition for the time - dependent transition dipole moment operator given in eq .( [ eq : mut ] ) .we then simplify this expression by making use of the unitary operator properties and as well as the cyclic properties of the trace .this reduces to [ eq : respj2 ] \end{split}\\ \begin{split } f_{2}&^{{p}_{1}{p}_{2}{p}_{3}}(\tau_{1},\tau_{2},\tau_{3})={}-\mathcal{e}^{{p}_{1}}_{1}\mathcal{e}^{{p}_{2}}_{2}\mathcal{e}^{{p}_{3}}_{3}\mathrm{tr}[\hat{\mu}\hat{u}_{3}\hat{\mu}^{{p}_{3}}\hat{u}_{2}\hat{u}_{1}\hat{\mu}^{{p}_{1}}\rho_{0,\textsc{s}}\hat{u}{^{\dagger}}_{1}\hat{\mu}^{{p}_{2}}\hat{u}{^{\dagger}}_{2}\hat{u}{^{\dagger}}_{3 } ] \end{split}\\ \begin{split } f_{3}&^{{p}_{1}{p}_{2}{p}_{3}}(\tau_{1},\tau_{2},\tau_{3})={}-\mathcal{e}^{{p}_{1}}_{1}\mathcal{e}^{{p}_{2}}_{2}\mathcal{e}^{{p}_{3}}_{3}\mathrm{tr}[\hat{\mu}\hat{u}_{3}\hat{u}_{2}\hat{\mu}^{{p}_{2}}\hat{u}_{1}\hat{\mu}^{{p}_{1}}\rho_{0,\textsc{s}}\hat{u}{^{\dagger}}_{1}\hat{u}{^{\dagger}}_{2}\hat{\mu}^{{p}_{3}}\hat{u}{^{\dagger}}_{3 } ] \end{split}\\ \begin{split } f_{4}&^{{p}_{1}{p}_{2}{p}_{3}}(\tau_{1},\tau_{2},\tau_{3})={}\mathcal{e}^{{p}_{1}}_{1}\mathcal{e}^{{p}_{2}}_{2}\mathcal{e}^{{p}_{3}}_{3}\mathrm{tr}[\hat{\mu}\hat{u}_{3}\hat{u}_{2}\hat{u}_{1}\hat{\mu}^{{p}_{1}}\rho_{0,\textsc{s}}\hat{u}{^{\dagger}}_{1}\hat{\mu}^{{p}_{2}}\hat{u}{^{\dagger}}_{2}\hat{\mu}^{{p}_{3}}\hat{u}{^{\dagger}}_{3}]\ , , \end{split}\end{aligned}\ ] ] where is the initial state in the schrdinger picture , , and where . to include the particular non - unitary processes of interest, we then make an ad - hoc substitution where and is an unphysical ` hamiltonian ' that corresponds to the system hamiltonian , with energies replaced by .we note that at this stage , does not have a physical interpretation itself .( [ eq : respj2 ] ) become [ eq : respj3 ] \end{split}\\\label{eq : resp7d } \begin{split } f_{2}&^{{p}_{1}{p}_{2}{p}_{3}}(\tau_{1},\tau_{2},\tau_{3})={}-\mathcal{e}^{{p}_{1}}_{1}\mathcal{e}^{{p}_{2}}_{2}\mathcal{e}^{{p}_{3}}_{3}\mathrm{tr}[\hat{\mu}\hat\lambda_{3}\hat{\mu}^{{p}_{3}}\hat\lambda_{2}\hat\lambda_{1}\hat{\mu}^{{p}_{1}}\rho_{0,\textsc{s}}\hat\lambda{^{\dagger}}_{1}\hat{\mu}^{{p}_{2}}\hat\lambda{^{\dagger}}_{2}\hat\lambda{^{\dagger}}_{3 } ] \end{split}\\ \begin{split } f_{3}&^{{p}_{1}{p}_{2}{p}_{3}}(\tau_{1},\tau_{2},\tau_{3})={}-\mathcal{e}^{{p}_{1}}_{1}\mathcal{e}^{{p}_{2}}_{2}\mathcal{e}^{{p}_{3}}_{3}\mathrm{tr}[\hat{\mu}\hat\lambda_{3}\hat\lambda_{2}\hat{\mu}^{{p}_{2}}\hat\lambda_{1}\hat{\mu}^{{p}_{1}}\rho_{0,\textsc{s}}\hat\lambda{^{\dagger}}_{1}\hat\lambda{^{\dagger}}_{2}\hat{\mu}^{{p}_{3}}\hat\lambda{^{\dagger}}_{3 } ] \end{split}\\ \begin{split } f_{4}&^{{p}_{1}{p}_{2}{p}_{3}}(\tau_{1},\tau_{2},\tau_{3})={}\mathcal{e}^{{p}_{1}}_{1}\mathcal{e}^{{p}_{2}}_{2}\mathcal{e}^{{p}_{3}}_{3}\mathrm{tr}[\hat{\mu}\hat\lambda_{3}\hat\lambda_{2}\hat\lambda_{1}\hat{\mu}^{{p}_{1}}\rho_{0,\textsc{s}}\hat\lambda{^{\dagger}}_{1}\hat{\mu}^{{p}_{2}}\hat\lambda{^{\dagger}}_{2}\hat{\mu}^{{p}_{3}}\hat\lambda{^{\dagger}}_{3}]\ , .\end{split}\end{aligned}\ ] ] after evaluation of the expressions in eqs .( [ eq : respj3 ] ) , we make the substitution .this step can easily be automated with symbolic manipulation software such as _ mathematica _ .we can then recognize as the population relaxation rate for eigenstate , and as the dephasing rate between eigenstates and .+ we note that in order to recover a physical picture , we could only perform the substitution once the expressions were reduced to the form given 
by eqs .( [ eq : respj2 ] ) and not earlier . eqs .( [ eq : respj3 ] ) are general and can now be used to treat any arbitrary system .in this section we demonstrate how to compute a 2d spectrum for a pair of two - level systems , i.e. qubits , a canonical example in quantum optics and spectroscopy and therefore an excellent candidate to study . the system is simple enough to treat analytically but complex enough to reveal the consequences of coupling .each of the individual systems and consists of an electronic ground and excited state , where . assuming that orbital overlap between and is negligible , then the total hamiltonian for the material system is given by the hamiltonians of the individual systems as well as the coupling hamiltonian : where are the excited - state energies of the two individual systems , is the coupling energy between the two systems , and and .we have set the ground - state energies to zero and .the transition dipole moment operator is given by where and we have set the transition dipoles to be real for simplicity . andview ( a ) depicts the heterodimer in the chromophore representation , where systems and occupy different hilbert spaces .view ( b ) depicts the heterodimer as a composite system , as defined in eq .( [ eq : states ] ) . in the absence of coupling ,all of the wave functions in view ( b ) can be written as product states , for example , .,scaledwidth=40.0% ] [ fig : elevels ] in the standard nomenclature of the field , the eigenbasis of the individual systems and is known as the _ site _ basis , whereas the eigenbasis of the total material system hamiltonian is known as the _ exciton _ basis .diagonalization of eq .( [ eq : totham ] ) allows the total hamiltonian to be written in the exciton basis : where , and the single - exciton energies are expressed in terms of the following convenient parameters : the average of the site energies , the difference , and the mixing angle .the exciton basis is given by [ eq : states ] the transition dipole moment operator for the entire system can also be written in the exciton basis where ={}&\left[\begin{array}{cc}\cos ( \theta ) & \sin(\theta ) \\-\sin(\theta ) & \cos ( \theta)\end{array}\right]\left[\begin{array}{c}\mu_{a } \\\mu_{b}\end{array}\right]\\ \left[\begin{array}{c}\mu_{f\alpha } \\\mu_{f\beta}\end{array}\right]={}&\left[\begin{array}{cc}\sin ( \theta ) & \cos(\theta ) \\\cos(\theta ) & -\sin ( \theta)\end{array}\right]\left[\begin{array}{c}\mu_{a } \\\mu_{b}\end{array}\right]\,.\end{aligned}\ ] ] notice that when there is no coupling , i.e. and therefore , the transition dipole moments reduce to and .the consequences of this will be discussed in the next section .+ to predict the spectra , we require the transition dipole moment operator in the interaction picture , given by eq .( [ eq : mut ] ) , which we express in terms of the positive and negative frequency components : where and where we assumed and made use of and .recall that one of the advantages of 2d es over conventional transient - absorption spectroscopy is the ability to select different parts of the total signal by controlling the properties of the incoming pulses , most often the relative time ordering .this is afforded by the noncollinear beam geometry which makes use of phase matching . in this section, we will use the formalism developed in sec .[ sec : theory ] to predict rephasing , non - rephasing and two - quantum spectra for the heterodimer system described above . 
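as a quick numerical companion to the diagonalization above , the sketch below builds the single - exciton block of the dimer hamiltonian , extracts the exciton energies and transition dipoles from its eigenvectors , and evaluates a mixing angle using the common convention tan(2 theta) = 2 j / (e_a - e_b) ; since the exact convention for theta is not visible in the text , that formula should be read as an assumption , and all numerical values are illustrative .

import numpy as np

# illustrative site energies, coupling and site transition dipoles
e_a, e_b, j = 1.10, 1.00, 0.03
mu_a, mu_b  = 1.0, 0.6

h1 = np.array([[e_a, j],
               [j,  e_b]])                      # single-excitation block in the site basis

w, v = np.linalg.eigh(h1)                       # exciton energies and eigenvectors
theta = 0.5 * np.arctan2(2.0 * j, e_a - e_b)    # assumed mixing-angle convention
mu_exc = v.T @ np.array([mu_a, mu_b])           # exciton transition dipoles from the ground state

print("exciton energies  :", w)
print("mixing angle theta:", theta)
print("exciton dipoles   :", mu_exc)
print("doubly excited    :", e_a + e_b)         # f-state energy if no biexciton binding is assumed

in the uncoupled limit j -> 0 the exciton states collapse onto the individual sites and the exciton dipoles reduce ( up to ordering and sign from the eigensolver ) to the site dipoles , which is the limit discussed in the text .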
here , we disregard non - unitary processes and focus on 2d spectral properties such as location , height , and oscillatory behaviour of peaks .we will extend our example to include dephasing and population relaxation in the next section .+ we first consider a rephasing experiment , where pulse arrives first , .the only terms in the expression for the third - order polarization in eq .( [ eq : resp ] ) are those given in eq .( [ eq : respr ] ) . using the above transition dipole moment operator and evaluating the trace functions yields the following expression forthe analytic signal following convention , we fourier transform with respect to the and variables . for the sake of illustration , we temporarily disregard the heaviside step functions , resuming a complete treatment in the next section on dephasing and population relaxation .the fourier transform is then \ , .\end{split}\end{aligned}\ ] ] if we plot the above signal as a function of and , we obtain four peaks , as shown in fig .[ fig:2dspec]a .their amplitudes are determined by the transition dipoles and we bring attention to the fact that the cross peaks oscillate as a function of at a frequency determined by the energy difference between the excited states and , as shown in fig .[ fig:2dspec]b . at first glance, it may seem that the cross peaks should oscillate out - of - phase due to the and prefactors ; however , recall that the real electric field is twice the real part of its analytic signal . the dependence of both cross peaks will therefore have the same form since .other phase relationships between peaks can also be predicted with a more elaborate treatment , such as the one considered by butkus __ .we can follow the same procedure to produce 2d spectra for the nonrephasing ( nr ) experiment , where pulse arrives second , e.g. .the signal is given by \ , .\end{split}\end{aligned}\ ] ] notice that in a nonrephasing experiment , it is the diagonal peaks that oscillate at the difference frequency during time period , see fig .[ fig:2dspec]d .the diagonal peaks also oscillate in phase with each other .+ 2d spectra are usually displayed as the sum of the rephasing and nonrephasing signals , where the rephasing signal is flipped into the quadrant .this eliminates a problem known as phase twist that is endemic to half - fourier transforms . producing a full - fourier transform in this manner sharpens lineshapes and has certain advantages for information extraction .+ the third option is the two - quantum ( 2q ) experiment , where pulse arrives third , e.g. .this pulse sequence is often used to isolate signals involving the doubly - excited state during the second time period , .such signals are most naturally expressed as a 2q spectrum correlating the two - quantum frequencies to the emission frequencies .therefore the fourier transformation proceeds as which yields \delta(\omega_{2}-\omega_{f})\ , , \end{split}\end{aligned}\ ] ] where all peaks oscillate at the frequency corresponding to the doubly - excited state during . in most casesresearchers are interested in the 2d spectrum for . 
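to summarize the stick - spectrum picture just described , the short sketch below tabulates the four single - quantum peaks and which of them beat during the waiting time in the rephasing and nonrephasing cases . the amplitude column uses the product of squared transition dipoles for the excitation and emission transitions , which ignores the excited - state - absorption pathways ; all numbers are illustrative .

import numpy as np

# illustrative exciton frequencies and transition dipoles
w_alpha, w_beta = 1.00, 1.10
mu_alpha, mu_beta = 0.9, 0.5
beat = abs(w_alpha - w_beta)       # difference frequency seen during the waiting time

for (w1, m1) in [(w_alpha, mu_alpha), (w_beta, mu_beta)]:
    for (w3, m3) in [(w_alpha, mu_alpha), (w_beta, mu_beta)]:
        diagonal = (w1 == w3)
        print({
            "peak (w1, w3)": (w1, w3),
            "amplitude ~": round(m1**2 * m3**2, 3),
            "t2 beat, rephasing": None if diagonal else beat,
            "t2 beat, nonrephasing": beat if diagonal else None,
        })

the beat frequency listed is simply the splitting between the two single - exciton states , matching the behaviour of the cross and diagonal peaks described above .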
+ in the case of no coupling , the transition dipole moments reduce to and .certain terms then go to zero and the above signals reduce to \end{split}\\ \begin{split } e_{\textrm{nr}}^{(3)}(\omega_{1},\tau_{2},\omega_{3})\propto{}&\mathcal{e}^+_{b}\mathcal{e}^-_{a } \mathcal{e}^+_{c}{}\big [ \mu _ { a}^4\delta ( \omega_{1}-\omega _ { \alpha } ) \delta ( \omega_{3}-\omega _ { \alpha } ) + \mu _ { b}^4\delta ( \omega_{1}-\omega _ { \beta } ) \delta ( \omega_{3}-\omega _ { \beta } ) \big]\ , , \end{split}\end{aligned}\ ] ] where we no longer obtain cross peaks .the presence of cross peaks in a rephasing or nonrephasing 2d spectrum is therefore a signature of coupling between two systems .for a two - quantum experiment , we observe no signal in the case of no coupling this is expected since individual two - level systems do not generate two - quantum signals .the above treatment can be extended to include non - unitary processes , as described in sec .[ sec : deph ] . for a rephasing experiment , the only terms in the expression for the third - order polarization in eq .( [ eq : resp ] ) are , , and where are defined in eq .( [ eq : respj3 ] ) . using the transition dipole moment operator for a coupled dimer derived in section [ sec : system ] and evaluating the trace functionsyields the following expression for the analytic signal \ , , \end{split}\end{aligned}\ ] ] where , and . if , and , this reduces to the expression in eq .( [ eq : repheq ] ) , up to the heaviside step function that was excluded in eq . ([ eq : repheq ] ) for illustrative purposes .+ for a nonrephasing experiment , the only terms in the expression for the third - order polarization in eq .( [ eq : resp ] ) are , and , where are defined in eq .( [ eq : respj3 ] ) . using the transition dipole moment operator for a coupled dimer derived in section [ sec : system ] and evaluating the trace functionsyields the following expression for the analytic signal \ , .\end{split}\end{aligned}\ ] ] fig .[ fig : spec2 ] shows eqs .( [ eq : repheqx ] ) and ( [ eq : nonrepheq2 ] ) for the parameters used in fig .[ fig:2dspec ] and and .notice that the peaks now take on a finite width , where the spectral broadness is determined by the dephasing parameters .if we examine the peaks and cross - peaks at different waiting times , we see that the oscillations decay on a time scale .slight oscillations in diagonal peaks ( cross peaks ) for rephasing ( nonrephasing ) spectra result from their overlap with the strongly oscillating cross peaks ( diagonal peaks ) .+ gaas quantum wells have been studied for decades .several reviews detail how each advance in spectroscopic methodology has led to new insights into the excitonic many - body interactions , and 2d es has been no exception . for our purposes , the two lowest - energy exciton states of a gaas quantum well sample can serve as an example of a heterodimer .the heavy - hole ( excitation at 1540 mev ) and light - hole exciton ( excitation at 1546 mev ) states are coupled because they involve electrons that occupy the same conduction band .+ one important distinction between this sample and the model above is that the double - excitation states the biexcitons are energetically red - shifted from the sum of the single - exciton energies by about 1.5 mev due to the biexciton binding energy .therefore , excited - state absorption pathways lead to red - shifted emission signals . + the rephasing and nonrephasing measurements for are shown in fig .[ fig : gaas2d ] ( a ) and ( b ) , respectively . 
the bright diagonal peak is due to the heavy - hole exciton .we show extractions from two peaks for ps in fig .[ fig : gaas2d](c ) .the extractions match the predictions for the heterodimer in fig .[ fig:2dspec ] , where only the rephasing contribution to the cross peak oscillates and only the nonrephasing contribution to the diagonal peak oscillates .excitation conditions followed refs . .( b ) and ( c ) extractions from the heavy - hole exciton diagonal peak and one cross peak .only the nonrephasing ( rephasing ) contribution to the diagonal ( cross ) peak oscillates .data provided by d. b. turner and k. a. nelson ( previously unpublished).,scaledwidth=80.0% ]) ) as a function of time .( a ) horizontal lines represent the action of a transition dipole moment operator on the density matrix element .( b ) vertical lines represent the evolution of the density matrix element .( c ) we can infer that whenever acts on the density matrix element , we must include the relevant electric field term .the field contribution is typically depicted by a diagonal arrow , pointing to the right for ( containing a component ) or pointing to the left for ( containing a component ) .the curvy arrow represents the emission of a single photon and the final density matrix element must return to a population term , i.e. , to be considered a valid diagram .the phase of each element oscillates at ; is the population relaxation rate for eigenstate , and is the dephasing rate between eigenstates and . ,scaledwidth=80.0% ] for complicated systems involving multiple chromophores , expansions such as eq .( 44 ) can involve a great number of terms .often only a few terms contain physically interesting information .double - sided feynman diagrams are a diagrammatic method for illustrating and evaluating individual terms in the expression for the generated signal . to demonstrate the relationship , consider the term in eq .( [ eq : resp7d ] ) for a rephasing experiment , with the transition dipole moment operator defined in eq .( [ eq : tdo ] ) ( recall that for a rephasing experiment , the analytic signal is given by only the conjugate terms in eq .( [ eq : resp ] ) ) . using the definition of the transition dipole moment operator in the interaction picture , given in eq .( [ eq : mut ] ) , we can rewrite the complex conjugate of the term in eq .( [ eq : resp7d ] ) as \,,\end{aligned}\ ] ] where is defined in eq .( [ eq : tildee ] ) and we recall that at this stage , does not have a physical interpretation itself .later , we make the substitution and recognize as the population relaxation rate for eigenstate , and as the dephasing rate between eigenstates and . the positive frequency part of the transition dipole moment operator is given by where the subscript is used to track the relationship between the positive and negative frequency components that results from the rwa , i.e. 
.( [ eq : thing4 ] ) will contain quite a number of terms .we consider just one : \ , .\end{split}\end{aligned}\ ] ] the double - side feynman diagram for this term is shown in fig .[ fig : dsfd ] .the diagram follows the evolution of a single density matrix element as a function of time .the frame which houses the time - evolving density matrix element consists of horizontal and vertical lines .horizontal lines represent the action of a transition dipole moment operator on the density matrix element , as depicted in fig .[ fig : dsfd ] ( a ) .vertical lines represent the evolution of the density matrix element , as depicted in fig .[ fig : dsfd ] ( b ) .recall that by making the rwa , we fixed the relationship .we can therefore infer that whenever acts on the density matrix element , we must include the relevant electric field term .the field contribution is typically depicted by a diagonal arrow , pointing to the right for ( containing a component ) or pointing to the left for ( containing a component ) , as shown in fig .[ fig : dsfd ] ( c ) .the curvy arrow represents the emission of a single photon and the final density matrix element must return to a population term , i.e. , to be considered a valid diagram .a double - sided feynman diagram contains all information required to infer the contribution to the emitted signal from that particular term .+ finally , although immensely valuable , double - sided feynman diagrams have certain limitations .for example , they can not explain many of the most interesting phenomena that experiments have observed , many - body interactions , for example .sometimes these phenomena , coherence transfer for instance , can be later drafted onto the double - sided feynman diagrams .regardless , double - sided feynman diagrams are almost always the first method one uses to describe measured signals .two - dimensional electronic spectroscopy is a tremendously useful tool for characterizing a variety of systems such as atoms , molecules , molecular aggregates and related nanostructures , biological pigment - protein complexes and semiconductor nanostructures .its utility arrises from the simultaneous and complex exploitation of multiple features such as phase - matching , pulse ordering and non - linear light - matter interactions . an unfortunate consequence of this is that the barrier to entry for researchers who would like to develop even a basic understanding of these techniques can sometimes be too high , and the interpretation of results is left to the experts . + for emerging interdisciplinary fields such as those revolving around coherence in energy transfer in photosynthesic systems where a large portion of experimental studies are performed using nonlinear spectroscopic techniques such as 2d es it is crucial for even non - experts in spectroscopy to understand the basics of how to interpret the experimental evidence . 
+ we anticipate that our pedagogical guide will help such researchers , in particular those with a background in quantum optics , to understand the basics of 2d es .furthermore , we hope that this will expand the utility of this technique to areas not traditionally studied with 2d es , such as bose - einstein condensates .we thank darpa for funding under the qube program and the natural sciences and engineering research council of canada .dbt thanks francesca fassioli and scott mcclure for helpful discussions .amb thanks jessica anna , chanelle jumper , tihana mirkovic , daniel oblinsky , evgeny ostroumov and john sipe for helpful discussions .we also thank john sipe and heinz - peter breuer for thoughtful comments on earlier versions of this manuscript .we are extremely grateful to keith a. nelson for kindly allowing us to use the gaas data .engel , t.r .calhoun , e.l .read , t.k .ahn , t. mancal , y.c .cheng , r.e .blankenship , and g.r .flemingevidence for wavelike energy transfer through quantum coherence in photosynthetic systems , * 446*(7137 ) , 7826 ( 2007 ) .schlau - cohen , t.r .calhoun , n.s .ginsberg , e.l .read , m. ballottari , r. bassi , r. van grondelle , and g.r .flemingpathways of energy flow in lhcii from two - dimensional electronic spectroscopy , * 113*(46 ) , 1535215363 ( 2009 ) , pmid : 19856954 .g. panitchayangkoon , d. hayes , k.a .fransted , j.r .caram , e. harel , j. wen , r.e .blankenship , and g.s .engellong - lived quantum coherence in photosynthetic complexes at physiological temperature , * 107 * , 1276612770 ( 2010 ) .r. dinshaw , k.k .lee , m.s .belsley , k.e .wilk , p.m.g .curmi , and g.d .scholesquantitative investigations of quantum coherence for a light - harvesting protein at conditions simulating photosynthesis , * 14 * , 48574874 ( 2012 ) . g.s .schlau - cohen , a. ishizaki , t.r .calhoun , n.s .ginsberg , m. ballottari , r. bassi , and g.r .flemingelucidation of the timescales and origins of quantum electronic coherence in lhcii , * 4*(5 ) , 389395 ( 2012 ) .f. caruso , a.w .chin , a. datta , s.f .huelga , and m.b .pleniohighly efficient energy excitation transfer in light - harvesting complexes : the fundamental role of noise - assisted transport , * 131 * , 105106 ( 2009 ) .d. abramavicius , b. palmieri , d.v .voronine , f. sanda , and s. mukamelcoherent multidimensional optical spectroscopy of excitons in molecular aggregates ; quasiparticle versus supermolecule perspectives , * 109 * , 23502408 ( 2009 ) . a. nemeth , f. milota , t. mancal , t. pullerits , j. sperling , j. hauer , h.f .kauffmann , and n. christensson double - quantum two - dimensional electronic spectroscopy of a three - level system : experiments and simulations , * 133 * , 094505 ( 2010 ) .a. nemeth , f. milota , t. mancal , v. lukes , j. hauer , h.f .kauffmann , and j. sperlingvibrational wave packet induced oscillations in two - dimensional electronic spectra .i. experiments , * 132 * , 184514 ( 2010 ) .n. christensson , f. milota , j. hauer , j. sperling , o. bixner , a. nemeth , and h.f .kauffmannhigh frequency vibrational modulations in two - dimensional electronic spectra and their resemblance to electronic coherence signatures , * 115 * , 53835391 ( 2011 ) .f. milota , j. sperling , a. nemeth , d. abramavicius , s. 
mukamel , and h.f .kauffmannexcitonic couplings and interband energy transfer in a double - wall molecular aggregate imaged by coherent two - dimensional electronic spectroscopy , * 131 * , 054510 ( 2009 ) .anna , m.r .ross , and k.j .kubarychdissecting enthalpic and entropic barriers to ultrafast equilibrium isomerization of a flexible molecule using 2dir chemical exchange spectroscopy , * 113*(24 ) , 65446547 ( 2009 ) , pmid : 19514782 .ginsberg , j.a .davis , m. ballottari , y.c .cheng , r. bassi , and g.r . flemingsolving structure in the cp29 light harvesting complex with polarization - phased 2d electronic spectroscopy , * 108*(10 ) , 38483853 ( 2011 ) .bristow , d. karaiskaj , x. dai , r.p .mirin , and s.t .cundiffpolarization dependence of semiconductor exciton and biexciton contributions to phase - resolved optical two - dimensional fourier - transform spectra , * 79*(16 ) , 161305(r ) ( 2009 ) .turner , p. wen , d.h .arias , k.a .nelson , h. li , g. moody , m.e .seimens , and s.t .cundiffpersistent exciton - type many - body interactions in gaas quantum wells measured using two - dimensional optical spectroscopy , * 85 * , 201303(r ) ( 2012 ) .n. scherer , d.m .jonas , and g.r .flemingfemtosecond wave packet and chemical reaction dynamics of iodine in solution : tunable probe study of motion along the reaction coordinate , * 99*(1 ) , 153168 ( 1993 ) .a. assion , t. baumert , m. bergt , t. brixner , b. kiefer , v. seyfried , m. strehle , and g. gerbercontrol of chemical reactions by feedback - optimized phase - shaped femtosecond pulses , * 282 * , 919922 ( 1998 ) .sagar , r.r .cooney , s.l .sewall , e.a .dias , m.m .barsan , i.s .butler , and p. kambhampatisize dependent , state - resolved studies of exciton - phonon couplings in strongly confined semiconductor quantum dots , * 77 * , 235321 ( 2008 ) .d. polli , p. altoe , o. weingart , k.m .spillane , c. manzoni , d. brida , g. tomasello , g. orlandi , p. kukura , r.a .mathies , m. garavelli , and g. cerulloconical intersection dynamics of the primary photoisomerization event in vision , * 467 * , 440443 ( 2010 ) .t. mancal , a. nemeth , f. milota , v. lukes , j. hauer , h.f .kauffmann , and j. sperlingvibrational wave packet induced oscillations in two - dimensional electronic spectra .theory , * 132 * , 184515 ( 2010 ) .turner , k.w .stone , k. gundogdu , and k.a .nelsoninvited article : the coherent optical laser beam recombination technique ( colbert ) spectrometer : coherent multidimensional spectroscopy made easier , * 82*(8 ) , 081301 ( 2011 ) .grumstrup , s.h .shim , m.a .montgomery , n.h .damrauer , and m.t .zannifacile collection of two - dimensional electronic spectra using femtosecond pulse - shaping technology , * 15*(25 ) , 1668116689 ( 2007 ) .j. kim , v.m .huxter , c. curutchet , and g.d .scholesmeasurement of electronelectron interactions and correlations using two - dimensional electronic double - quantum coherence spectroscopy , * 113*(44 ) , 1212212133 ( 2009 ) .stone , d.b .turner , k. gundogdu , s.t .cundiff , and k.a .nelsonexciton - exciton correlations revealed by two - quantum , two - dimensional fourier transform optical spectroscopy , * 42*(9 ) , 145261 ( 2009 ) .d. karaiskaj , a.d .bristow , l. yang , x. dai , r.p .mirin , s. mukamel , and s.t .cundifftwo - quantum many - body coherence in two - dimensional fourier - transform spectra of exciton resonances in semiconductor quantum wells , * 104 * , 117401 ( 2010 ) .
|
recent interest in the role of quantum mechanics in the primary events of photosynthetic energy transfer has led to a convergence of nonlinear optical spectroscopy and quantum optics on the topic of energy - transfer dynamics in pigment - protein complexes . the convergence of these two communities has unveiled a mismatch between the background and terminology of the respective fields . to make connections , we provide a pedagogical guide to understanding the basics of two - dimensional electronic spectra aimed at researchers with a background in quantum optics .
|
consider a radio source which gives rise to a brightness distribution , , in the radio sky of the earth . there exists a fourier transform ( ft ) relationship between the true brightness distribution and the complex visibility function measured by a vlbi array as a function of the baseline vector . it is not a simple case of taking the inverse fourier transform of the visibilities to return to the true brightness distribution as , due to the limited uv coverage offered by earth rotation synthesis vlbi , most components of the visibility function are not measured . mathematically this corresponds to multiplying the total visibility by a sampling function , , which eliminates most of the visibilities and leaves the observer with only . this means that , even before noise is taken into account , any attempt to recover the sky brightness distribution from the measured uv visibilities is a mathematically ill - posed problem . deconvolution algorithms are used to attempt to interpolate the information contained in the measured visibilities to reconstruct the unmeasured visibilities and get as true an image of the sky brightness distribution as possible . there is no single `` best '' deconvolution algorithm ; however , the two most common are outlined below . the clean algorithm ( högbom , 1974 ) is a standard technique used to deconvolve the measured visibilities and attempt to recover the data from the unmeasured visibilities . first the measured visibilities are fourier transformed to create a `` dirty '' map ( containing only the information from the measured visibilities ) . this map is then described by a set of `` clean '' components ( functions ) . the set of `` clean '' components is then convolved with the clean beam ( a gaussian fit to the primary lobe of the point spread function ) and the residual noise added in to give the final `` clean '' image . images of stokes i , q and u parameters can be made independently using this technique . a second way to deconvolve the visibilities and recover some of the unmeasured data is to make a model intensity map of the source that satisfies specified mathematical criteria . this model can then be varied until the simulated visibilities agree with the actual data to within a specified limit , thus maximising the similarity of the model intensity and the true map . however , as the model is changed one can not let the model visibilities converge to the actual measured visibilities , or the model would then simply yield the original `` dirty '' map . a regularising parameter is required to stop this convergence . in mem , this parameter is the entropy of the image . consider the following function : where h is the entropy of the model map of the source and is a measure of the difference between the model ( subscript m ) and observed ( subscript d ) visibilities ( there are two such terms , one for intensity , , and a second for polarisation , .
and are lagrange parameters , and other conditions are also included which represent additional constraints , such as positivity of the stokes i component . the optimal model of the source maximises the function j above . this results in a balance between entropy ( representing noise , and the effect of unsampled visibilities ) and fidelity to the observed data . + multiple forms of entropy can be used ; common choices include the shannon entropy ( [ shannon ] ) , which is suitable for unpolarised emission , and the gull and skilling entropy ( [ gull ] ) , a generalised form of the shannon entropy suitable for polarised emission . in these equations , the index represents summation over pixels , and and are the intensity and fractional polarisation , respectively , at pixel . is a bias map , to which the solution defaults in the absence of data . see holdaway and gull and skilling for more details about the mem and different forms of polarisation entropy . both clean and mem are widely used in image processing as deconvolution techniques ; however , each technique has specific strengths and weaknesses . clean , while intuitive , does not have a firm mathematical footing , and it can be difficult to state the resolution of a clean image exactly . mem , while less intuitive , has a much better mathematical grounding , and has a resolution which can be demonstrated mathematically . as the deconvolution problem is inherently ill - posed , neither technique is perfect - they can not recreate the `` true '' image , and different types of artefacts occur in each technique . this means that regardless of the imaging technique used , the images produced must be interpreted before any conclusions can be made about features that may be present in them . + in both techniques , the resolution obtained is inversely proportional to the length of the longest baseline in the observing array . due to the lack of firm mathematical foundations for the clean algorithm , a conservative estimate of its resolution is usually taken to be the full width at half maximum of the clean beam , which is a gaussian fit to the primary lobe of the point spread function . some information is also present on smaller scales , but it can be difficult to interpret . mem s resolution can be shown to be where is the resolution in radians , and is the length of the longest baseline in wavelengths . this resolution is approximately four times smaller than the conservative estimate for the clean algorithm . the standard radio astronomy software package astronomy image processing system ( aips ) includes a task to conduct mem deconvolution of intensity ( stokes i ) , but not polarisation ( stokes q and u ) images ( see cornwell and evans for details ) . the relation between the stokes q and u parameters and the polarised intensity and polarisation angle of a map is shown by equations ( [ poli ] ) and ( [ pola ] ) .
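since the referenced equations ( [ poli ] ) and ( [ pola ] ) are not reproduced here , the sketch below spells out what are presumably the standard relations : polarised intensity p = sqrt(q**2 + u**2) and polarisation angle chi = 0.5 * arctan(u / q) , applied pixel by pixel to stokes q and u maps . the random arrays stand in for real images and are purely illustrative .

import numpy as np

# stand-in stokes q and u images (random numbers, purely illustrative)
q = np.random.default_rng(0).normal(size=(64, 64))
u = np.random.default_rng(1).normal(size=(64, 64))

p   = np.sqrt(q**2 + u**2)        # polarised intensity map
chi = 0.5 * np.arctan2(u, q)      # polarisation angle map (radians)

print("mean polarised intensity:", p.mean())
print("angle range (deg):", np.degrees(chi).min(), "to", np.degrees(chi).max())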
we are in the process of investigating a number of ways of applying fully polarised mem to vlbi polarisation data . we have in the meantime produced a number of `` proof of concept '' mem polarisation maps in aips by devising an algorithm to work around the limitations imposed by the aips mem task ( primarily the non - negativity requirement - whereas stokes i must be positive , stokes q and u can be either positive or negative ) . a major limitation of the work - around devised was the assumption that the location of positive and negative pixels in the q and u maps remained the same as in the original maps ( only their amplitude was allowed to vary ) . the preliminary images shown were obtained with the very long baseline array ( see coughlan et al ) . both the clean and mem images were made from the same uv data , which had been calibrated using standard methods in aips . the clean map was convolved with the clean beam and the mem map convolved with a gaussian beam corresponding to the resolution indicated by ( [ eqn : res ] ) . in all cases , the mem images lost some of the extended emission visible in the clean images . however , the increased resolution of the mem images provides fuller information about inner jet structure and morphology . the superimposed sticks indicate the local polarisation angles . in all cases there is good agreement between the mem and clean polarisation angles . both clean and mem offer unique and complementary perspectives on the same uv visibility data . mem appears less sensitive to low intensity emission , but provides higher resolution , allowing jet direction and morphology to be studied in more detail . there is good agreement between the clean and approximate mem polarisation maps we have constructed , suggesting that the approximations made in the work - around used to produce polarised mem maps in aips have not adversely affected the polarisation . however , in order to generate a truly reliable mem polarisation map a form of polarised entropy must be used , and we are in the process of implementing the entropy ( [ gull ] ) for this purpose . the synergy between clean and mem in studying polarised emission should then lead to a clearer picture of the polarisation intensity and direction along the jet . we also plan to investigate the construction of mem spectral - index and faraday - rotation maps based on multi - frequency intensity and polarisation vlbi maps . this work was supported by the irish research council for science , engineering and technology ( ircset ) . holdaway m. , 1990 , ph.d . thesis , brandeis university . gull s.f . , skilling j. , 1984 , _ indirect imaging _ , ed . j.a . roberts . cambridge university press , p 267 . cornwell t.j . , evans k.f . , 1985 , _ a&a _ , * 143 * , 77 - 83 . coughlan et al . , 2011 , _ proc . european vlbi network symposium _ .
|
the maximum entropy method ( mem ) for the deconvolution of radio interferometry images is mathematically well based and presents a number of advantages over the usual clean deconvolution , such as appreciably higher resolution . the application of mem for polarisation imaging remains relatively little studied . clean and mem intensity and polarisation techniques are discussed in application to recently obtained 18 cm vlba polarisation data for a sample of active galactic nuclei .
|
the amplifier is clearly one of the most important components incorporated in almost all current technological devices .the basic function of an autonomous amplifier is simply to transform an input signal to with gain .however , such an amplifier is fragile in the sense that the device parameters change easily , and eventually distortion occurs in the output .this was indeed a most serious issue which had prevented any practical use of amplifiers in , e.g. , telecommunication .fortunately , this issue was finally resolved back in 1927 by black ; there are a huge number of textbooks and articles reviewing this revolutionary work , and here we refer to refs .the key idea is the use of feedback shown in fig . 1 ;that is , an autonomous amplifier called the plant " is combined with a controller " in such a way that a portion of the plant s output is fed back to the plant through the controller . then the output of the whole controlled system is given by where is the gain of the controller .now , if the plant has a large gain , it immediately follows that .hence , the whole system works as an amplifier , simply provided that the controller is a passive device ( i.e. , an attenuator ) with .importantly , a passive device such as a resistor is very robust , and its parameters contained in almost do not change .this is the mechanism of robust amplification realized by feedback control .note , of course , that this feedback architecture is the core of an operational amplifier ( op - amp ) .is the gain of an autonomous amplifier , and is the gain of a passive controller ., width=321 ] surely there is no doubt about the importance of quantum amplifiers .a pertinent quantum counterpart to the classical amplifier is the _ phase - preserving linear amplifier _ ( in what follows , we simply call it the amplifier " ) .in fact , this system has a crucial role in diverse quantum technologies such as communication , weak - signal detection , and state processing .in particular , recent substantial progress in both theory and experiments has further advanced this field .an important fact is that , however , an amplifier must be an active system powered by external energy sources , implying that its parameters are fragile and can change easily .because of this parameter fluctuation , the amplified output signal or state suffers from distortion . as a consequence ,the practical applicability of the quantum amplification is still severely limited .that is , we are now facing the same problem we had 90 years ago . to make the discussion clear ,let us here describe the general quantum - amplification process .ideally the amplifier transforms a bosonic input mode to , where is an auxiliary mode , and the coefficients satisfy from =1 ] .in addition to this fragility , we assume that the signal mode is subjected to optical loss , which is modeled by adding the extra term to the right - hand side of eq . , with the magnitude of the loss and the unwanted vacuum noise .the feedback transmission lines are also lossy , which is modeled by eq . . 
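before turning to the numerical results for the quantum model , the toy calculation below checks the classical robustness argument recalled at the start of this article . it assumes the standard closed - loop form g_fb = g / (1 + g k) , which appears to be the formula elided above ; the gain level , the 20% fluctuation and the controller value k = 0.1 are all illustrative .

import numpy as np

# fluctuating high-gain plant combined with a fixed passive controller
rng = np.random.default_rng(0)
k = 0.1                                             # passive controller, |k| < 1
g = 1.0e4 * (1.0 + 0.2 * rng.standard_normal(50))   # 50 samples of a fragile plant gain

g_fb = g / (1.0 + g * k)                            # closed-loop gain, close to 1/k = 10

print("relative spread of the bare gain      :", g.std() / g.mean())
print("relative spread of the controlled gain:", g_fb.std() / g_fb.mean())

the controlled gain pins itself to 1 / k and barely notices the plant fluctuation , which is the same mechanism the quantum feedback scheme below exploits .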
the blue lines in fig5 are 50 sample values of the autonomous gain in the case and .that is , in fact , due to the parameter fluctuation described above , the amplifier becomes fragile and the amplification gain significantly varies .nonetheless , this fluctuation can be suppressed by feedback ; the red lines in fig .5 are 50 sample values of the controlled gain with attenuation level , whose fluctuation is indeed much smaller than that of .can be suppressed at least by a factor of . butthis means that eq .is conservative , since fig .5 shows that the fluctuation is suppressed about by a factor of 0.2 . ]( note that , because the fluctuation of is very small , the set of sample values looks like a thick line . ) that is , the controlled system is certainly robust against the realistic fluctuation of the device parameters . and that of the non - controlled one ( i.e. , ) versus ( a ) with fixed , and ( b ) with fixed . in both figures , the red solid lines represent , while the blue dotted lines are , at . ,width=332 ] finally , let us investigate how much the excess noise is added to the output of the controlled or noncontrolled specially detuned ndpa .again we set , and the feedback control is conducted with attenuation level .also , the same imperfections considered in the previous subsection are assumed ; that is , the system suffers from the signal loss ( represented with ) and the probabilistic fluctuation of the parameters ( ) ; furthermore , the feedback control is implemented with the lossy transmission lines ( represented with and ) . with this setup fig .6 is obtained , where the red solid lines are sample values of the added noise at the center frequency for the controlled system , given by eq ., while the blue dotted lines represent those of the non - controlled system , given by eq . .( in the figure it appears that six thick lines are plotted , but each is the set of 50 sample values . )figure 6 ( a ) shows and versus the signal loss rate , where for the controlled system we fix ( that is , the feedback transmission lines are very lossy ) . also , fig . 6 ( b ) shows the added noise as a function of , with fixed signal loss .the first crucial point is that , in both figures ( a ) and ( b ) , and are close to each other .this is the fact that can be expected from eq . , which states that and coincide in the large amplification limit .it is also notable that , for all sample values , is smaller than in the denominator and in the numerator. then is , from eq ., upper bounded by , while , from eq ., is upper bounded by .hence , if both of these upper bounds are reached , together with the relation observed in the figure , eq . yields , which means in the large amplification limit . ] ; in other words , the feedback controller reduces the added noise , although in the large amplification limit this effect becomes negligible as proven in eq . .another important feature is that , as seen in fig . 6 ( a ) , the signal loss is the dominant factor increasing the added noise , and the feedback loss does not have a large impact on it , as seen in fig . 6 ( b ) . as consequence ,when is small , the controlled amplifier can perform amplification nearly at the quantum noise limit , with almost no dependence on the feedback loss ; this fact is also consistent with eq . 
.in summary , the specially detuned ndpa with feedback control functions as a robust , near - minimum - noise ( if ) , and broadband amplifier .the presented feedback control theory resolves the critical fragility issue in phase - preserving linear quantum amplifiers .the theory is general and thus applicable to many different physical setups , such as optics , opto - mechanics , superconducting circuits , and their hybridization .moreover , the feedback scheme is simple and easy to implement , as demonstrated in sec .v. note also that the case of _ phase - conjugating amplification _ can be discussed in a similar way ; see appendix c. in a practical setting , the controller synthesis problem becomes complicated , implying the need to develop a more sophisticated quantum feedback amplification theory , which indeed was established in the classical case .the combination of those classical approaches with the quantum control theory should advance this research direction .another interesting future work is to study genuine quantum - mechanical settings , e.g. , probabilistic amplification .finally , note that feedback control is used in order to reach the quantum noise limit , in a different amplification scheme ( the so - called op - amp mode ) ; connection to these works is also to be investigated .this work was supported in part by jsps grant - in - aid no .here we derive some preliminary results that are used later . first , eq. leads to ; together with the other two equations , we then have similarly , furthermore , we can prove that never hold in the limit of as follows . if , this leads to and , and furthermore , and from eqs . and ; then using eq .we have and accordingly , which leads to a contradiction .now we are concerned with the amplification gain in the limit .it is given by the second term is upper bounded , because from eq . and also does not converge to zero .therefore , we need ( hence , ) to have the condition . finally , the added noise in the controlled system is computed as follows : where eq .is used ; also note that all the noise fields are now vacuum .the third term is given by then from eq .we find in the limit and .also holds due to eq . .as consequence , the added noise in the limit is given by hence , together with eq . , we obtain eq . 
.the point of this result is that , due to the strong constraint on the noise input fields , which is represented by eq ., the added noise does not explicitly contain the terms that stem from the creation input modes , and .this is because of the passivity property of the controller and the feedback transmission lines that are composed of only the creation modes .note that , as demonstrated above , in general the stability analysis becomes complicated for a complex - coefficient or higher - order transfer function .the _ nyquist method _ is a very useful graphical tool that can deal with such cases , although an exact stability condition is not available .another way is a time - domain approach based on the so - called _ small - gain theorem _ , that produces a sufficient condition for a feedback - controlled system to be stable ; the quantum version of this method will be useful to test the stability of the controlled feedback amplifier .the hermitian conjugate of the second element of eq .( 3 ) is given by .that is , the output is the amplified signal of the conjugated input , with gain ; this is called the phase - conjugating amplification .the feedback control in this case is almost the same as for the phase - preserving amplification .we consider the ideal feedback configuration shown in fig . 2 ( i.e., the noise fields are ignored ) and now focus on the auxiliary output . then the amplification gain is evaluated , in the large amplification limit , as in the first line of the above equation , we have used ; also the last equality comes from the unitarity of , i.e. , .therefore , when the original amplification gain is large ( ) , the controlled system works as a phase - conjugating amplifier with gain . as in the phase - preserving case , this controlled gain is robust compared to the original one .t. c. ralph and a. p. lund , nondeterministic noiseless linear amplification of quantum systems , in quantum communication measurement and computing , edited by a. lvovsky , proceedings of 9th international conference , 155/160 ( aip , new york , 2009 ) .g. zames , on the input - output stability of time - varying nonlinear feedback systems part one : conditions derived using concepts of loop gain , conicity , and positivity , ieee trans .control * 11 * -2 , 228/238 ( 1966 ) .
|
quantum amplification is essential for various quantum technologies such as communication and weak-signal detection. however, its practical use is still limited by inevitable device fragility that brings about distortion in the output signal or state. this paper presents a general theory that solves this critical issue. the key idea is simple and easy to implement: a passive feedback of the amplifier's auxiliary mode, which is usually thrown away. this scheme makes the controlled amplifier significantly robust, and furthermore it realizes minimum-noise amplification even under realistic imperfections. hence, the presented theory enables quantum amplification to be implemented at a practical level. in addition, a nondegenerate parametric amplifier subjected to a special detuning is proposed and shown to have a broadband nature.
|
_title of program:_ pvegas.c
_computer and operating system tested:_ convex spp1200 (spp-ux 4.2), intel x86 (linux 2.0 with smp), dec alpha (osf1 3.2 and digital unix), sparc (solaris 2.5), rs/6000 (aix 4.0)
_programming language:_ ansi-c
_no. of lines in distributed routine:_ 530

the monte carlo method frequently turns out to be the only feasible way to obtain numerical results for integrals involving ill-behaved integrands about whose behaviour no a-priori knowledge is available. it not only handles step functions and gives reliable error estimates, but also has the desirable feature that the rate of convergence is dimension-independent. in the framework of the xloops project, for instance, massive two-loop feynman diagrams with exterior momenta are calculated analytically as far as possible, with sometimes very ill-behaved integrals left for numerical evaluation over finite two- or three-dimensional volumes. if a function needs to be integrated over a -dimensional volume, one can evaluate it over random sample points with and compute the estimate , which has a convergence rate of for large . similarly, has basically the same behaviour if the probability density is normalized to unity in : the introduction of the weight-function is equivalent to a transformation of the integration variables, where the transformation leaves the boundary unchanged. the adaptive monte carlo method now tries to improve the rate of convergence by choosing properly. as is well known, the modified variance for large is given by with . the central limit theorem implies that for square-integrable the distribution of around the true value becomes gaussian, and in ([sigmasquared]) is a reliable error estimate. the popular `vegas` algorithm uses two such techniques. we sketch them in a brief discussion of `vegas` in section [sec:about]. section [sec:macro] contains some warnings about a sometimes-seen oversimplified macro-parallelized `vegas`, and in section [sec:micro] our approach is presented together with some real-world measurements of efficiency. at some places explicit variable names are mentioned for those readers familiar with g. p. lepage's original code. the two techniques used by `vegas` to enhance the rate of convergence are _importance sampling_ and _stratified sampling_. importance sampling tries to improve the weight-function by drawing on information from previous iterations. it is well known that the variance is minimized when this method concentrates the density where the function is largest in magnitude. stratified sampling attempts to improve the -behaviour of mc integration by choosing a set of random numbers that is more evenly distributed than plain random numbers are. (recall that the simplest method of stratification would evaluate the function on a cartesian grid and thus converge as .
)this is done by subdividing the volume into a number of hypercubes and performing an mc integration over sample - points in each .the variance in each hypercube can be varied by shifting the boundaries of the hypercubes between successive iterations ( figure [ fig : grids]a shows an initial grid , [ fig : grids]b the grid at a later stage ) .the optimal grid is established when the variance is equal in all hypercubes .this method concentrates the density where both the function and its gradient are large in magnitude .the split - up of into hypercubes turns out to be the key - point in efficiently parallelizing ` vegas ` . the way ` vegas ` iterates hypercubes across the whole volume is designed in a dimension - independent way . in effect it just amounts to loops packed into each other , each iterating from the lower limit of integration to the upper one .we ll see in section [ sec : micro ] how this looping can be exploited for parallelization with variable grain - size . for a more thorough discussion of ` vegas ` the reader is referred to the literature .the most straightforward approach to make use of a parallel machine with processors is to simply replicate the whole job . instead of having one processorcalculate sample - points , instances of the integrator ( `` workers '' ) are started , each sampling points .subsequently , the results from each processor are averaged taking into account their differing error - estimates .we call this approach macro - parallelization .it is immediately clear that this is trivial to implement and usually results in good performance since the amount of communication among processors is minimized .this approach , however , results in different grids , each less fine than the original one . if the same number of points is sampled in each hypercube and the overall number of points are equal , the amount by which the grid will be coarser is given by the dilution of hypercubes which is .furthermore , in an extreme situation some of the workers might accidentally miss an interesting area and return wrong results way outside the estimated error - bounds and thus completely fail to adapt .we have seen realizations of a slightly improved method which does not suffer from overestimation of single partial results .this method spawns workers , again each evaluating points , and lets each evaluate the cumulative variables and send them to the parent which adds them and subsequently computes the new grid .this method will still suffer from coarse grids but it will adapt more cleanly . in effect , it amounts to synchronizing the grids between workers .table [ tab : comparison ] exemplifies the problems typical for macro - parallelization .it shows the results of an integration over a unit - square .the test - function was a narrow gaussian peak with width and normalized such that the exact result of the integration is unity .all runs were typical ` vegas`-calls : the first 10 iterations were used only to refine the grid , their results were discarded ( entry - point 1 in ` vegas`-jargon ) . 
in that particular case the macro-parallelized version took 5 iterations until every processor had "detected" the peak. the ones that failed to detect it returned very small values with small error-bounds, and the common rules for error-manipulation then grossly overestimated the weight of these erroneous results. the unparallelized version, in contrast, was able to adapt to the function's shape very early. the last 5 iterations were cumulative: each iteration inherited not only the grid but also the result of the previous one (entry-point 2). note also that after the grids have adapted to the situation, the macro-parallelized `vegas` without synchronization still returns misleading error-bounds. [table [tab:comparison]: 15 iterations of two macro-parallelized `vegas` (with ) integrating a sharp gaussian, contrasted with an unparallelized run.] the macro-parallelized version with grid-synchronization performs better than the one without, but is still less able to adapt to the specific integrand, as expected. of course the results of this extreme situation are less pronounced for better-behaved integrands, but the general conclusion always holds. it is just a manifestation of a fact well known to people using `vegas`: few large iterations generally result in better estimates than many small ones. what is desired is a method that parallelizes the algorithm but still has the same numerical properties as the sequential version. as has been shown in the previous section, this cannot be achieved on a macroscopic level. fortunately, `vegas` does offer a convenient way to split up the algorithm and map it onto a parallel architecture. still using a well-understood farmer-worker model, our approach distributes the hypercubes that make up the domain of integration to the workers. mc integration does not exhibit any boundaries which would need extensive communication among workers, but it does need some accounting-synchronization to guarantee that each hypercube is evaluated exactly once. a straightforward broadcast-gather approach would require one communication per hypercube and would thus generate an irresponsible amount of overhead, spoiling efficient scalability. we therefore suggest having each processor evaluate more than one hypercube before any communication is done. let be the number of equal fractions the whole volume is split up into. ideally, we should require the number of fractions to be much smaller than the number of hypercubes: . the problem of dynamic load-balancing can in practice be solved by making the number of fractions much larger than the number of processors (choosing to be an integer multiple of would fit best on nodes; however, this is only valid if one assumes that the function exhibits the same degree of complexity in the whole volume and that each node is equally fast, and both assumptions are usually unjustified).
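to make the accounting-synchronization concrete, the following is a minimal sketch (not the actual pvegas code) of such a farmer-worker scheme with pthreads: fractions are handed out dynamically through a mutex-protected counter, so a fast node simply grabs more fractions than a slow one. the constants, the `integrate_fraction()` stub and the per-worker accumulators are illustrative placeholders only.

```c
#include <stdio.h>
#include <pthread.h>

#define N_FRACTIONS 64      /* k: number of fractions (illustrative value)   */
#define N_WORKERS    8      /* number of worker threads                      */

static int next_fraction = 0;                     /* shared work counter     */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static double partial_sum[N_WORKERS];             /* per-worker accumulators */

/* stand-in for the loop over all hypercubes belonging to one fraction */
static double integrate_fraction(int f)
{
    return (double)f;
}

static void *worker(void *arg)
{
    int id = *(int *)arg;
    int f;
    for (;;) {
        /* accounting-synchronization: fetch the next unprocessed fraction */
        pthread_mutex_lock(&lock);
        f = next_fraction++;
        pthread_mutex_unlock(&lock);
        if (f >= N_FRACTIONS)
            break;                                /* all fractions are done  */
        partial_sum[id] += integrate_fraction(f); /* no further communication */
    }
    return NULL;
}

int main(void)
{
    pthread_t tid[N_WORKERS];
    int id[N_WORKERS];
    double total = 0.0;
    int i;
    for (i = 0; i < N_WORKERS; i++) {
        id[i] = i;
        pthread_create(&tid[i], NULL, worker, &id[i]);
    }
    for (i = 0; i < N_WORKERS; i++) {
        pthread_join(tid[i], NULL);
        total += partial_sum[i];  /* cumulative variables merged once at the end */
    }
    printf("accumulated result: %g\n", total);
    return 0;
}
```

the only shared state touched inside the work loop is the counter, so the communication overhead grows with the number of fractions rather than with the number of hypercubes.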
we thus arrive at the constraint ([constraint]). this inequality can be satisfied in the following convenient way, which also opens up an algorithmically feasible implementation: projecting the -dimensional volume onto a -dimensional subspace defines a set of -dimensional sub-cubes. the set of original hypercubes belonging to the same sub-cube makes up one fraction to be done by one worker in a single loop. we thus identify with the number of sub-cubes. because the hypercubes belonging to different sub-cubes can be evaluated concurrently, we call the -dimensional subspace the _parallel space_ and its orthogonal complement the -dimensional _orthogonal space_ ( ). choosing and can be expected to satisfy ([constraint]) for practical purposes (figure [fig:method]).

an important issue for every mc effort is the random number generator (rng). there are two different ways to tackle the problems arising in parallel simulations:

* one single rng pre-evaluates a sequence of random numbers which are then assigned without overlap to the processors.
* every processor gets an rng of its own, and some method has to guarantee that no correlations spoil the result.

a look at amdahl's law shows that the second approach is the more attractive one. amdahl's law relates the speedup for parallel architectures to the number of processors and the fraction of code which is executed in parallel: the use of concurrently running rngs increases this fraction, which in turn results in an improved speedup. most compiler libraries provide linear congruential generators, which generate sequential pseudorandom numbers by the recurrence with carefully selected , and . because of their short period and the long-range correlations reported by de matteis and pagnutti, which make parallelization dangerous, this type is not suited for large mc simulations. for our case we therefore decided to build on a slightly modified shift-register pseudorandom number generator (sr). this widely employed class of algorithms (r250 is one example) generates random-bit sequences by pairwise xoring bits from some given list of binary numbers: here, represents the exclusive-or (xor) operator, and and are chosen such that the trinomial is primitive modulo two. the so-defined 'tausworthe sequence' is known to have periodicity . thus, every combination of bits occurs exactly once, with the only exception being subsequent zeros (which would return the trivial sequence of zeros only, if it occurred). tables of "magic numbers" and can be found in the literature and are provided with our program sources. note that owing to its exponential growth with , the periodicity easily reaches astronomical lengths which can never be exploited by any machine. uniformly distributed random -bit integers can now be constructed easily by putting together columns of bits from several instances of the same tausworthe sequence with predefined delays: floating-point numbers in the range can subsequently be computed by dividing by .
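as an illustration, the following is a minimal sketch of such a word-oriented shift-register generator in the spirit of r250 (lag pair p=250, q=103, one entry of the usual tables of magic numbers), seeded by filling the ring with words from the c library's simple generator in the kirkpatrick-stoll fashion discussed below. it is not the generator shipped with pvegas.c; all names and constants are chosen for illustration only.

```c
#include <stdio.h>
#include <stdlib.h>

/* lag pair (250,103): x^250 + x^103 + 1 is primitive modulo two, so the bit
   recurrence x_n = x_{n-250} xor x_{n-103} has period 2^250 - 1              */
#define P 250
#define Q 103

static unsigned long ring[P];   /* last P words; each bit column is one
                                   tausworthe sequence with its own delay    */
static int pos = 0;             /* index of the oldest word x_{n-P}          */

/* kirkpatrick-stoll style seeding: fill the ring with words taken from some
   other, well-understood generator (here the library's rand())              */
void gfsr_seed(unsigned int seed)
{
    int i;
    srand(seed);
    for (i = 0; i < P; i++)
        ring[i] = ((unsigned long)rand() << 16) ^ (unsigned long)rand();
    pos = 0;
}

/* one step of x_n = x_{n-P} xor x_{n-Q}, done in place on the ring buffer:
   x_{n-Q} sits P-Q positions ahead of the oldest entry                      */
unsigned long gfsr_next(void)
{
    unsigned long w = (ring[pos] ^= ring[(pos + (P - Q)) % P]);
    pos = (pos + 1) % P;
    return w;
}

/* uniform deviate in [0,1): divide the low 32 bits by 2^32                  */
double gfsr_uniform(void)
{
    return (double)(gfsr_next() & 0xffffffffUL) / 4294967296.0;
}

int main(void)
{
    int i;
    gfsr_seed(12345u);
    for (i = 0; i < 5; i++)
        printf("%f\n", gfsr_uniform());
    return 0;
}
```

one such independently seeded ring buffer per processor mirrors the scheme described in the text: no generator state is shared between workers, and the seeding generator is a simple linear congruential one.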
in the continuous limit of large a random - sequence will have mean , variance as well as the enormous length of the original bit - sequences which in turn guarantees -space uniformity for .in addition , this method is extremely fast because the machine s word - size and xor - operation from the native instruction - set can be exploited .the good properties of this class of generators can be ruined by improper initialization .lewis and payne for instance , suggested initializing the tausworthe - sequence with every bit set to one , introduce a common delay between each column of bits and throw away the first iterations in order to leave behind initial correlations .this is not only slow ( even if a short - cut described by i. dek is used ) , but also results in perspicuous correlations if only becomes large enough .this is a direct result of the exponential growth of the period while the delay and initial iterations grow only linearly .a quicker and less cumbersome initialization procedure was suggested by kirkpatrick and stoll .they noted that initializing the tausworthe - sequence with random - bits from some other generator , will define an ( unknown ) offset somewhere between and in the sequence ( [ tauswortherecurrence ] ) from which iteration can proceed .initializing every column of bits in the integer - sequence ( [ bittoint ] ) with such random - numbers defines different offsets and thus implicitly defines a set of delays as well as the starting - point of the whole sequence .this method does clearly not suffer from initial correlations .the method of kirkpatrick and stoll offers a clean and efficient way for parallelization : as many generators as there are processors can be initialized by random numbers from some generator , for example a simple and well - understood linear congruential one .only the of each of the generators need to be filled .the probability that two of these generators will produce the same sequence because they join the same set of delays can be made arbitrary small by simply choosing big enough .to rule out correlations among the sequences is equivalent to assuming there are no interactions between the shift - register generator and the linear congruential generator .indeed , the methods and the underlying theory are quite different .the method is however still plagued by the known flaws , common to all shift register generators .one examples is the triplet - correlation .it can in principle be cured by an expansion of the method described in . in the case of ` vegas ` however , we see no reason why high - quality rngs should be needed at all and we therefore advocate using simple generators with : stratification lets short - range - correlations only take effect within the hypercubes inside the rectangular grid where very few points are sampled and long - range - correlations become washed out by the grid shifting between iterations .this view is supported by the observation that correlations in sr - generators seem to have been discovered only in calculations more sophisticated than plain mc integration .figure [ fig : scalings ] shows the efficiency at integrating a function consisting of the sum of 8 dilogarithms computed with a method suggested in .the parameters have been chosen such that all the characteristic properties become visible in one single run .the five - dimensional volume was split up into a two - dimensional parallel space and a three - dimensional orthogonal space with each axis subdivided into 21 intervals . 
points were evaluated in each iteration. what we see are some minor fluctuations modulated on a rather good overall efficiency. the spp1200 consists of hypernodes with 8 processors running in real shared memory each, hence the drop-off at where the second hypernode across an interconnect is first touched. the behaviour for small is thus machine-specific. the sawtooth for larger , in contrast, is characteristic of the algorithm: as the test function does not involve steps or other changes in cost of evaluation, most processors terminate the job assigned to them rather simultaneously. so, at we see each processor evaluating 11 of the fractions and then one processor evaluating the single remaining one. the algorithm thus needs 12 times the time necessary for evaluating one fraction, while at it needs only 11. this behaviour can easily be stopped by raising the dimension of the parallel space to three, for instance, thus decreasing the grain-size. the obvious drawback is an increased communication overhead. the ideal split-up has to be determined individually for each combination of hardware and problem. for a given , astute users will probably tune their parameters and judiciously in order to take advantage of one of the peaks in figure [fig:scalings].
[figure [fig:scalings]: efficiency on a 48-processor convex spp1200 versus the number of processors, for the 5-d problem.]
we have shown that for ill-behaved test functions in adaptive mc integrators it is essential to use large sets of sample points at a time. under these circumstances a macro-parallelization does not satisfy stringent numerical needs. for the xloops project, we have developed a version of `vegas` which does parallelization on a smaller scale and has the same numerical properties as the original one. for the grain-size of the algorithm becomes a parameter. the algorithm can be used as a complete drop-in replacement for the common `vegas`. it is currently being used in xloops, where it does the last steps in integrating massive 2-loop feynman diagrams. a portable implementation in ansi-c of the outlined algorithm running on every modern smp-unix (either featuring pthreads, draft 4 pthreads or cps-threads) can be found at ftp://higgs.physik.uni-mainz.de/pub/pvegas/. hints on how to use it can be found at the same place. using the structures outlined above, it should be easy to implement an `mpivegas` running on machines with distributed memory. upon demand, we can provide such a routine, using the mpi message-passing standard.

it is a pleasure to thank alexander frink of thep for clarifying discussions about parallelization and his contribution to making the code stable, and karl schilcher for making this work possible. i also wish to thank bas tausk and dirk kreimer of thep, as well as markus tacke of our university's computing center and burkhard dünweg of the max-planck-institute for polymer research, for stimulating discussions. this work is supported by the 'graduiertenkolleg elementarteilchenphysik bei hohen und mittleren energien' at the university of mainz.

l. brücher, j. franzkowski, a. frink, d. kreimer: _introduction to xloops_, hep-ph/9611378
l. brücher: _xloops, a package calculating one- and two-loop diagrams_, nucl. instr. and meth. res. *a 389*, 327-332 (1997)
g. p. lepage: _a new algorithm for adaptive multidimensional integration_, j. comput. phys. *27*, 192-203 (1978)
g. p. lepage: _vegas, an adaptive multi-dimensional integration program_, publication clns-80/447, cornell university, 1980
w. press, s. teukolsky, w. vetterling, b. flannery: _numerical recipes in c_ (second edition), cambridge university press, 1992
a. de matteis, s. pagnutti: _parallelization of random number generators and long-range correlations_, numer. math. *53*, 595-608 (1988)
r. c. tausworthe: _random numbers generated by linear recurrence modulo two_, math. comp. *19*, 201-209 (1965)
t. h. lewis, w. h. payne: _generalized feedback shift register pseudorandom number algorithm_, j. of the assoc. for computing machinery *20*, 456-468 (1973)
s. kirkpatrick, e. p. stoll: _a very fast shift-register sequence random number generator_, j. comput. phys. *40*, 517-526 (1981)
i. deák: _uniform random number generators for parallel computers_, parallel computing *15*, 155-164 (1990)
f. schmid, n. b. wilding: _errors in monte carlo simulations using shift register random number generators_, int. j. mod. phys. *c 6*, 781-787 (1995)
a. heuer, b. dünweg, a. ferrenberg: _considerations on correlations in shift register pseudorandom number generators and their removal_, comput. phys. commun. *103*, 1-9 (1997)
i. vattulainen, t. ala-nissila, k. kankaala: _physical tests for random numbers in simulations_, phys. rev. lett. *73*, 2513-2516 (1994)
p. d. coddington: _analysis of random number generators using monte carlo simulation_, int. j. mod. phys. *c 5*, 547-560 (1994)
g. 't hooft, m. veltman: _scalar one-loop integrals_, nucl. phys. *b 153*, 365 (1979)
b. nichols, d. buttlar, j. proulx farrell: _pthreads programming_, o'reilly, sebastopol (1996)
university of tennessee, knoxville, tennessee (1995)
|
monte carlo (mc) methods for numerical integration seem to be embarrassingly parallel at first sight. however, when adaptive schemes are applied in order to enhance convergence, the seemingly most natural way of replicating the whole job on each processor can potentially ruin the adaptive behaviour. using the popular vegas algorithm as an example, an economic method of semi-micro parallelization with variable grain-size is presented and contrasted with the straightforward approach of macro-parallelization. a portable implementation of this semi-micro parallelization is used in the xloops project and is made publicly available. *keywords:* parallel computing, grain-size, monte carlo integration, tausworthe, gfsr.
|
in wireless ad hoc networks , each node may serve as the data source , destination , or relay at different time instants , which leads to a self - organized network .such a decentralized structure makes the traditional network analysis methodology used in centralized wireless networks inadequate .in addition , it is hard to define and quantify the capacity of large wireless ad hoc networks . in the seminal work , gupta and kumar proved that the transport capacity for wireless ad hoc networks , defined as the bit - meters pumped every second over a unit area , scales as in an arbitrary network , where is node density . in ,weber _ et al ._ derived the upper and lower bounds on the transmission capacity of spread - spectrum wireless ad hoc networks , where the transmission capacity is defined as the product between the maximum density of successful transmissions and the corresponding data rate , under a constraint on the outage probability .however , the above work only considered single - hop transmissions . in , with multi - hop transmissions and assuming all the transmissions are over the same transmission range , sousa and silvester derived the optimum transmission range to maximize a capacity metric , called the expected forward progress .zorzi and pupolin extended sousa and silvester s work in to consider rayleigh fading and shadowing .recently , baccelli _ et al . _ proposed a spatial - reuse based multi - hop routing protocol . in their protocol , at each hop , the transmitter selects the best relay so as to maximize the effective distance towards the destination and thus to maximize the spatial density of progress . by assuming each transmitter has a sufficient backlog of packets , weber _ et al ._ in proposed longest - edge based routing where each transmitter selects a relay that makes the transmission edge longest . in , andrews _defined the random access transport capacity . by assuming that all hops bear the same distance with deterministically placed relays , they derived the optimum number of hops and an upper bound on the random access transport capacity .most of the above works with multi - hop transmissions ( e.g. , , , and ) assume that each hop traverses the same distance , which is not practical when nodes are randomly distributed . on the other hand , in and the authors proposed routing protocols with randomly distributed relays ; but they did not address how to optimize the transmission distance at each hop . in this paper , by jointly considering the randomly distributed relays and the optimization for the hop distance , we propose a selection region based multi - hop routing protocol , where the selection region is defined by two parameters : a selection angle and a reference distance . by maximizing the expected density of progress , we derive the upper bound on the optimum reference distance and the relationship between the optimum reference distance and the optimum selection angle .the rest of the paper is organized as follows .the system model and the routing protocol are described in section ii .the selection region optimization is presented in section iii .numerical results and discussions are given in section iv .the computational complexity is analyzed in section v. 
finally , section vi summarizes our conclusions .in this section , we first define the network model , then present the selection region based routing protocol .assume nodes in the network follow a homogenous poisson point process ( ppp ) with density , with slotted aloha being deployed as the medium access control ( mac ) protocol .we also consider the nodes are mobile , to eliminate the spatial correlation , which is also discussed in . during each time slota node chooses to transmit data with probability , and to receive data with probability .therefore , at a certain time instant , transmitters in the network follow a homogeneous ppp ( ) with density , while receivers follow another homogenous ppp ( ) with density . considering multi - hop transmissions , at each hop a transmitter tries to find a receiver in as the relay .we assume that all transmitters use the same transmission power and the wireless channel combines the large - scale path - loss and small - scale rayleigh fading .the normalized channel power gain over distance is given by where denotes the small - scale fading , drawn from an exponential distribution of mean with probability density function ( pdf ) , and is the path - loss exponent . for the transmission from transmitter to receiver ,it is successful if the received signal - to - interference - plus - noise ratio ( sinr ) at receiver is above a threshold .thus the successful transmission probability over this hop with distance is given by where , , is the sum interference from the simultaneous concurrent transmissions , is the distance from interferer to receiver , and is the average power of ambient thermal noise . in the sequel we approximate , which is reasonable in interference - limited ad hoc networks . from , the successful transmission probability from transmitter to receiver derived as where considering a typical multi - hop transmission scenario , where a data source ( s ) sends information to its final destination ( d ) that is located far away , and it is impossible to complete this operation over a single hop . since we assume that nodes are randomly distributed , relays may not be located at an optimum transmission distance as derived in . to guarantee a relay existing at a proper position, we propose a selection region based multi - hop routing protocol .for each transmitter along the route to the final destination , we define a selection region by two parameters : a selection angle and a reference distance , as shown by the shaded area in fig .1 , where the selection region is defined as the region that is located within angle and outside the arc with . here, the transmitter is placed in the circle center , , and points to the direction of the final destination . 
at each hop , the relay is selected as the nearest receiver node to the transmitter among the nodes in the selection region .the reason that we limit the selection region within an angle is explained as follows : in multi - hop routing , a transmission is inefficient if the projection of transmission distance on the directional line from the transmitter towards the final destination is negative , or less efficient if the projection is positive but very small .therefore , here we set a limiting angle with which each packet traverses at each hop within ] , which is independent of , the expected density of progress is given by & = p\lambda \int\limits_{r_m}^\infty \int\limits_{-\frac{\varphi}{2}}^{\frac{\varphi}{2 } } { { e^ { - p\lambda t{x^2 } } } } x \cos \phi { f_{d}}(x)d\phi dx \nonumber\\ & = \sqrt \lambda p(1 - p)\gamma \left(\frac{3}{2},kr_m^2\right){k^ { - 3/2}}\exp \left(\lambda ( 1 - p)\frac{\varphi } { 2}r_m^2\right)\sin \left(\frac{\varphi } { 2}\right),\end{aligned}\ ] ] where is the pdf of obtained from ( 4 ) , , is defined in ( 3a ) , and is the incomplete gamma function . to optimize the objective function in ( 5 ) , we first assume that is constant , and try to derive the optimum value of . for brevity , in the following derivationwe write the objective function as . setting the derivative with respect to as 0 , after some calculations we have = 0,\end{aligned}\ ] ] where is calculated as thus we have applying ( 8) to ( 6 ) , we obtain note that the above is only the necessary condition for optimality , given the unknown convexity of objective function. however , the global optimum must be among all the roots of the above equation , which can be found numerically . since it is difficult to analytically derive the exact solution for from ( 9 ), we turn to get an upper bound of . since \nonumber\\ & = \frac{1}{2}\exp \left ( - kr_m^2\right)\left(2 + kr_m^2\right),\end{aligned}\ ] ] by ( 9 ), we have therefore , }^2 } } } } { { k\lambda ( 1 - p)\varphi } } .\ ] ] in fig .3 , we compare the upper bound of the optimum with the numerically computed optimal value when . we see that when transmission probability increases , the upper bound becomes tighter . now let s maximize the objective function by jointly optimizing and .rewrite ( 5 ) as for brevity , let us denote as and as . with partial derivatives , we have = 0.\end{aligned}\ ] ] this holds only if since , there is . to simplify things, we can then calculate the derivative with respect to instead of as & = p(1 - p ) ( - ptr_m^2 ) + & = 0 . since the factor in and related to and only , thus we get and . therefore , with ( 14 ) , we have applying ( 16 ) to ( 15 ) , the following holds : \gamma e + \sin \left(\frac{\varphi } { 2}\right){k^ { - 1}}\lambda ptr_m^2\gamma e = 0.\ ] ] after some calculation , ( 17 ) is simplified as since it is hard to derive close - formed solutions for the optimal and , respectively , we implicitly use and to express the optimal as note that scales as , which intuitively makes sense . since as the density increases , the interferers relative distance to the receiver decreases as , it requires a shorter transmission distance by the same amount to keep the required sinr . 
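as a concrete illustration of the relay-selection rule described above, the following is a minimal sketch (not part of the paper) that, given candidate receiver positions relative to the transmitter with the x-axis pointing toward the final destination, returns the nearest receiver lying inside the selection region, i.e. within the selection half-angle and beyond the reference distance. the function name, the data layout and the numerical values in the usage example are illustrative assumptions only.

```c
#include <math.h>
#include <stdio.h>

/* candidate receiver position relative to the transmitter; the x-axis points
   toward the final destination                                              */
struct node { double x, y; };

/* return the index of the nearest receiver inside the selection region
   (angle off the source-destination line at most phi/2 and distance at least
   r_ref from the transmitter), or -1 if the region contains no receiver     */
int select_relay(const struct node *rx, int n, double phi, double r_ref)
{
    int best = -1;
    double best_d = 1e300;
    int i;
    for (i = 0; i < n; i++) {
        double d = sqrt(rx[i].x * rx[i].x + rx[i].y * rx[i].y);
        double ang = atan2(rx[i].y, rx[i].x);
        if (d >= r_ref && fabs(ang) <= 0.5 * phi && d < best_d) {
            best_d = d;
            best = i;
        }
    }
    return best;
}

int main(void)
{
    struct node rx[3] = { {0.40, 0.50}, {0.90, 0.10}, {0.30, -0.05} };
    double phi = 3.14159265358979 / 3.0;   /* illustrative selection angle    */
    double r_ref = 0.25;                   /* illustrative reference distance */
    int k = select_relay(rx, 3, phi, r_ref);
    if (k >= 0)
        printf("selected relay %d at (%.2f, %.2f)\n", k, rx[k].x, rx[k].y);
    return 0;
}
```

the code only encodes the geometric selection rule; in the protocol the region parameters themselves are the quantities being optimized in the following analysis.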
by applying ( 19 ) in ( 5 ) , we observe that ( 5 ) becomes , where is a constant independent of .this means that the maximum expected density of progress scales as , which conforms to the results in and .in this section , we present some numerical results based on the analysis in section iii .we choose the path - loss exponent as 3 , the node density as 1 , and the outage threshold as 10 db . in figs . 4 and 5 , we plot the expected density of progress vs. the reference distance and the selection angle , with and 0.05 , respectively .we see that for each there exists an optimum selection angle and an optimum reference range when the respective partial derivatives are zero as discussed in section iii . in fig .6 , we plot the optimum selection angle obtained numerically vs. the transmission probability .as shown in the figure , we see that the increment of transmission probability leads to the increase of the optimum selection angle .this can be explained as follows : the increment of transmission probability means the decrement of the number of nodes that can be selected as relays ; therefore the selection angle should be enlarged to extend the selection region . in fig .7 , we compare the optimum reference distance obtained numerically with that derived in ( 19 ) , where is chosen optimally as that in fig .we see that the increment of transmission probability leads to the decease of reference distance .this can be explained as follows : the increment of transmission probability means more simultaneous concurrent transmissions such that the interference will be increased ; therefore the reference transmission distance should be decreased to guarantee the quality of the received signal and the probability of successful transmission . in fig .[ 7 ] , we compare the performance of our routing protocol with that in , and also with the optimized ( with optimum angle ) nearest neighbor routing shown in fig .5 of and the non - optimized ( with an arbitrary angle , e.g. , ) nearest neighbor routing in . from fig . 8, we have the following observations and interpretations : 1)when increases , the performance of our routing protocol becomes close to that of the nearest neighbor routing with an optimum angle .this can be explained from fig . 7 as : when increases , the reference distance tends to be a small value close to zero ; thus our routing scheme degenerates to the nearest neighbor routing .furthermore , our routing protocol shows much better performance than the non - optimized ( with non - optimized angle ) nearest neighbor routing , and this advantage is due to adopting both the optimum selection angle and the optimum reference distance . in this case , the selection region based routing can also be considered as the optimized nearest neighbor routing given the selection angle and the reference distance ; 2 ) we see that when at the optimum selection angle and the optimum reference distance , the optimum transmission probability is approximately 0.05 .although in this paper we mainly focus on the optimization of the selection region , this observation indicates that there also exists an optimum transmission probability with our model , which has been discussed in some other prior literature , e.g. , , , , and .as shown in fig . 8 , for multi - hop ad hoc networks , in terms of the performance metric of expected density of process , baccelli _et al_. 
s routing strategy is the best , by design at each hop the transmitter chooses a relay that provides the maximum value of progress towards the destination .however , the computational complexity with this protocol might be high , since the transmitter at each hop should compute the successful transmission probability together with the projection of transmission distance , and accordingly evaluate the value of progress towards the destination for each receiver , further choose the one with the greatest value of progress as the relay . in our protocol as we see from fig . 8 , when is small, its performance is close to that of baccelli _ et al ._ , while the computational complexity per hop is reduced significantly : \1 ) the nodes involved in the relay selection process are limited to a small region .as relays are selected from the receiver nodes in the selection region , the number of nodes participating in the relay selection is reduced with a ratio compared with that in .\2 ) unlike that in , where the successful transmission probability , the projection of transmission distance , and further the value of progress towards the destination for each potential relay need to be calculated ; we only need to calculate the distance between the transmitter and the potential relays .also note that our new protocol could be easily implemented by deploying directional antennas in the transmitter , where the spread angle can be set equal to the optimum selection angle . in this case not only the network computational complexity but also the interference will be reduced , which will be addressed in our future work .in this paper , we propose a selection region based multi - hop routing protocol for random mobile ad hoc networks , where the selection region is defined by two parameters , a selection angle and a reference distance . by maximizing the expected density of progress , we present some analytical results on how to refine the selection region . compared with the previous results in , , and , we consider the transmission direction at each hop towards the final destination to guarantee relay efficiency . compared with the protocol in , the optimum selection region defined in this paper limits the area in which the relay is being selected , and the routing computational complexity at each hop is reduced .s. weber , x. yang , j. g. andrews , and g. de veciana , `` transmission capacity of wireless ad hoc networks with outage constraints , '' _ ieee transactions on information theory _51 , no . 12 , pp .40914102 , dec . 2005 .sousa , and j.a .silvester , `` optimum transmission ranges in a direct - sequence spread - spectrum multihop packet radio network , '' _ ieee journal on selected areas in communications _ , vol . 8 , no . 5 , pp . 762 - 771 , jun .1990 .j. g. andrews , s. weber , m. kountouris and m. haenggi , `` random access transport capacity , '' _ ieee transactions on wireless communications _ , submitted .[ online ] available : http://arxiv.org/ps_cache/arxiv/pdf/0909/0909.5119v1.pdf .
|
we propose a selection region based multi - hop routing protocol for random mobile ad hoc networks , where the selection region is defined by two parameters : a reference distance and a selection angle . at each hop , a relay is chosen as the nearest node to the transmitter that is located within the selection region . by assuming that the relay nodes are randomly placed , we derive an upper bound for the optimum reference distance to maximize the expected density of progress and investigate the relationship between the optimum selection angle and the optimum reference distance . we also note that the optimized expected density of progress scales as , which matches the prior results in the literature . compared with the spatial - reuse multi - hop protocol in recently proposed by baccelli _ et al . _ , in our new protocol the amount of nodes involved and the calculation complexity for each relay selection are reduced significantly , which is attractive for energy - limited wireless ad hoc networks ( e.g. , wireless sensor networks ) .
|
a topic that has emerged recently within the field of quantum information is the study of entanglement in spin systems ( see for early examples ) .entanglement is often considered necessarily a low - temperature phenomenon that becomes less important as the temperature is increased .it was therefore surprising that in an example for two qubits was given where the ground state ( ) is separable but the thermal state is entangled at higher temperatures .this behavior can be understood as due to the presence of low - lying excited states that are entangled .thus at least two qualitatively - different entanglement scenarios are possible for two qubits ( apart from the uninteresting case of no entanglement at any temperature ) ; ( i ) the ground state is entangled , and hence the thermal state is entangled at low temperatures up to a critical temperature , , above which it is separable , and ( ii ) the ground state is separable , but the thermal state is entangled for temperatures within some finite range ( and separable again above ) . the question we address here is what other entanglement scenarios are possible for two qubits .we were stimulated to ask this question by which studies the generic behavior of thermal entanglement as a function of temperature .there it was shown that the generic behavior is closed intervals in temperature where the thermal state is separable interspersed with open intervals of entanglement .examples of the two types mentioned above were given for two qubits .also two examples of qubit - qutrit systems ( and , where , are the dimensions of the two subsystems ) were given where there are two distinct entangled regions ; in one case the ground state is entangled , and in the other case the ground state is separable . non - monotonic behavior of thermal entanglement has also been observed for qubit spin chains . ref . studies the reduced state of two nearby qubits in particular spin chains and finds examples of two distinct entangled regions in temperature . uses a multipartite entanglement measure , and observes three regions . in transitions from separability to entanglement are studied as a type of phase transition using geometric arguments about the set of separable states .here we restrict to the case of just two qubits ( as opposed to the reduced state of two qubits out of a large chain ) , and present a class of hamiltonians for which most have a value of the magnetic field strength such that the ground state is entangled and there are two entangled regions .in addition , we present an example of a hamiltonian outside this class that has a separable ground state and , again , two entangled regions .thus we find that all the classes of behavior for the thermal states of qubit - qutrit systems found in ref . are also observed for the two - qubit case .these results raise the question of how many distinct `` entangled regions '' in temperature are possible .our numerical search failed to find hamiltonians with more entangled regions for two qubits , indicating that this is the most complicated behavior .we show that for a class of commonly considered hamiltonians those without magnetic fields it is impossible to obtain more than one entangled region .in addition we derive upper bounds on the number of entangled regions in the general case .this paper is set out as follows . in sec .[ sec : gen ] we present results for the dimer case of the spin - chain hamiltonian studied in ref . , then give our general class of two - qubit hamiltonians . 
in sec .[ sec : cou ] we give an example of a hamiltonian in our class that does not exhibit two entangled regions , and show that small perturbations are sufficient to give two entangled regions . in sec .[ sec : unen ] we give our example with a separable ground state and two entangled regions .we derive bounds on the total number of entangled regions possible for two qubits in sec .[ sec : upper ] , specialize to the case with zero magnetic field in sec .[ sec : belldiag ] , and then conclude in sec .[ sec : conc ] .the two - qubit case of the spin - chain studied in ref . is of the following form : .\ ] ] where are the pauli sigma matrices acting on qubits , is a coupling constant with dimensions of energy and is a dimensionless parameter corresponding to the magnetic fields experienced by the qubits .the results for this hamiltonian on two - qubits were given in the inset of fig . 4 in an early version of this paper .this figure showed that two entangled regions were obtained , though this aspect of the results was not discussed .an isolated system ( i.e. not exchanging particles with the environment ) in thermal equilibrium with a bath at temperature will reach the canonical - ensemble thermal state given by where and ] , .it is possible to add local unitary operations before and after the hamiltonian without affecting the entanglement of the thermal state .this is because where and are local unitary operations on subsystems 1 and 2 .these local unitaries act to rotate the vectors , giving for some orthogonal matrices .thus if the local operations are chosen such that and are the orthogonal matrices which result from a singular - value decomposition of , then is a positive diagonal matrix . hence is of the form ,\end{aligned}\ ] ] where the are positive , and the are real .note that the local unitaries do not remove the local component of the hamiltonian .if it were possible to use different local unitaries before and after the hamiltonian , the local component of the hamiltonian could be removed entirely .however , this would change the entanglement of the thermal state .hamiltonians of the form are the most general two - qubit hamiltonians for the problem of thermal entanglement .now we introduce a class of hamiltonians which is slightly restricted , in that we require .this is equivalent to requiring the magnetic field to be homogeneous .these hamiltonians may be written as \right\}.\end{aligned}\ ] ] as before , the are positive , and the are real . to determine properties of these hamiltonians , random hamiltonians were generated , and for each it was determined if there exists a value of such that there are two entangled regions .the were chosen at random in the interval , and the at random in the interval . 
from a sample of these hamiltonians, it was found that all had a value of such that there are two entangled regions. arbitrary two-qubit hamiltonians were also tested. these were generated according to the gaussian unitary ensemble. each hamiltonian was divided into a local part and a nonlocal part, and hamiltonians of the form were tested. it was found that, of these samples, there were 106 such that there was a value of for which the thermal state has two entangled regions. this gives the overall probability of this behavior for arbitrary hamiltonians as .

although we have shown that it is extremely common for hamiltonians of the form to have two entangled regions, not all exhibit this behavior. for example, consider the interaction with a magnetic field, as studied by wang: . wang found that it was possible for the thermal entanglement to be zero for but nonzero for . wang also considered the anisotropic interaction, but without a magnetic field. in neither case were two regions found. it turns out that we can vary the hamiltonians very slightly from this example and again recover the two entangled regions. for example, consider the anisotropic interaction . for this is the hamiltonian of eq. . however, for equal to just , we again recover the two entangled regions (see fig. [fig:thermalplot2]). [figure [fig:thermalplot2]: concurrence for two qubits coupled according to ([eq:hamiltonian]) with .] another perturbation which recovers the two entangled regions is one where the magnetic field is not exactly aligned on the -axis: . for this is the interaction with a misaligned transverse magnetic field. the concurrence for is shown in fig. [fig:misal]. even with this very small misalignment in the magnetic field, the distinct entangled regions are again seen. [figure [fig:misal]: concurrence for two qubits coupled according to ([eq:pert]) with .]

the next most complicated case is that where the ground state is separable, so the thermal state is separable at , but there are still two entangled regions. as local unitaries do not alter the entanglement, one can arbitrarily choose the separable ground state without loss of generality. thus, to numerically search for such examples, we took the ground state to be . the other eigenstates and eigenenergies were then chosen at random. the example found was (after rounding the coefficients) . the concurrence as a function of temperature is shown in fig. [fig:unen]. there are two distinct regions of entanglement, with a separable ground state. note that there appears to be a finite region without entanglement at low temperature. however, for much of this region the concurrence is extremely small (less than ) but nonzero. this indicates that the thermal state may be completely separable only at zero temperature, and that the entanglement at small temperatures is not observed due to finite precision. we used numerical techniques to search for examples of more complicated scenarios from the hierarchy, i.e. three or more entangled regions, but were unable to find any. of course, no numerical technique can be exhaustive, so it remains an intriguing possibility that even more entangled regions are possible, even for two qubits. we now show that there is, in fact, a finite upper bound, 17, on the number of entangled regions for two qubits. however, it remains entirely plausible that this bound is not tight and that the above examples of two entangled regions represent the most complicated behavior possible for two qubits.
to analytically bound the number of entangled regions for two qubits we use the well - known fact that a two - qubit mixed state is entangled or separable depending on whether its partial transpose with respect to one of the qubits has a negative eigenvalue or not .therefore , by solving =0,\ ] ] where denotes the partial transpose with respect to the first subsystem , we find the transitions between entangled and separable thermal states .we may scale the energies so that the minimum energy eigenvalue is zero .if we multiply by , the resulting equation is polynomial in with noninteger powers and terms , where is the sum of any four , possibly nondistinct , energy eigenvalues ; is the number of distinct terms of this form . ] .provided the ratios of the energy eigenvalues are all rational , the polynomial has integer powers in for some constant . to place a bound on the number of solutions , we first take the derivative of the polynomial , then apply descartes rule of signs .the derivative of the polynomial has no more than 35 terms , and so has no more than 34 sign changes .by descartes rule there are no more than ( positive ) zeros of the derivative , and no more than turning points of the polynomial .it is easily seen that , provided the derivative has no more than 34 zeros , there are no more than regions where the polynomial is negative . in the two - qubit case ,the partial transpose has no more than one negative eigenvalue ; thus the determinant is negative if and only if the partial transpose is negative .hence there can be no more than entangled regions .thus we obtain a finite limit on the number of entangled regions , though this is much larger than the number of intervals which have been found numerically . in practicethe number of sign changes is likely to be far less than , although we do not see a way of showing this analytically . in the case where there are irrational ratios of the energy eigenvalues ,the situation is more complicated because the polynomial has noninteger powers . in this case, we can achieve an arbitrarily close approximation of the hamiltonian with rational energies . in the case where a function is the limit of a sequence of functions , it is not possible for to have more turning points than .the only situation where ( where the prime indicates the derivative ) can have more zeros than is when has an extremum which approaches zero in the limit , and is only exactly equal to zero for .however , this zero would correspond to a point of inflection , rather than an extremum , for .thus we find that the polynomial in can have no more than 17 regions where it is negative , and the limit on the number of entangled regions must hold for irrational powers also. for a qubit coupled to a qutrit entangled mixed states must still have a non - positive partial transpose , however there is a complication due to the fact that the partial transpose can have more than one negative eigenvalue .the main problem in this case is that , at a point where ] , but entangled for slightly higher or lower temperatures .this could happen if one of the eigenvalues passes from positive to negative , while another passes from negative to zero to negative .however , despite this possibility it can be seen that the number of turning points of ] ( assuming rational eigenvalues ) .there are therefore no more than turning points , even in the limit of irrational eigenvalues . 
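the partial-transpose test described above can be sketched directly: for two qubits det[rho^{T_A}] < 0 if and only if the state is entangled, so sign changes of the determinant along a temperature grid bound the number of observed entangled/separable transitions. thermal_state() and the hamiltonian h are reused from the earlier snippets.

```python
import numpy as np

def partial_transpose_first(rho):
    # reshape to indices (a, b, a', b') and transpose the first subsystem: swap a <-> a'
    return rho.reshape(2, 2, 2, 2).transpose(2, 1, 0, 3).reshape(4, 4)

def entangled_by_ppt(rho):
    # two-qubit partial transpose has at most one negative eigenvalue
    return np.linalg.det(partial_transpose_first(rho)).real < 0

temps = np.linspace(0.01, 5.0, 600)
dets = [np.linalg.det(partial_transpose_first(thermal_state(h, t))).real for t in temps]
# sign changes of det along the grid count the observed transitions
# (cf. the descartes-rule bound derived above)
transitions = int(np.sum(np.diff(np.sign(dets)) != 0))
```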
combined with the fact that the state must be separable at high temperature , this implies that the number of entangled regions can be no higher than ( some of them may be separated by single points in temperature where the system is separable ) . more generally , for the case of two subsystems of arbitrary dimensions , and , one might hope to put a finite upper bound on the number of entangled regions as a function of and .however , in higher dimensions entangled mixed states do not necessarily have a non - positive partial transpose .in fact it has recently been shown that even the problem of distinguishing separable and entangled mixed states is -hard in arbitrary dimension .it is therefore unlikely that this approach will yield upper bounds for higher dimensional systems .although we were unable to definitively answer the question of what entanglement scenarios are possible for an arbitrary two - qubit hamiltonian , we can for a certain class of hamiltonian those that have no local terms ( corresponding to a magnetic field ) , and only interaction terms. a hamiltonian without local terms may be written in the form as in sec .[ sec : gen ] , we can apply local unitaries without altering the entanglement of the thermal state .these simplify the hamiltonian to a form that is diagonal in the bell basis . bell basis is a set of maximally entangled states where , ( up and down spins if the qubit is a spin 1/2 quantum system ) . to determine the behavior of the entanglement, we compare the state at two different temperatures , and , such that .we first note that the eigenvalues for are majorized by those for .it is straightforward to show this result for all bipartite thermal states ( not just those for two - qubit systems ) . for , .therefore , for , we have .taking the energy eigenvalues to be sorted into nondescending order , we have multiplying gives hence which is the result claimed .now , for density operators and such that the eigenvalues for are majorized by those for , we have where the unitaries permute the eigenstates would also need to include a possible change in the eigenstates from to .we do not need that here , because the eigenstates are unchanged . ] .for the specific case where the hamiltonian is diagonal in the bell basis , the eigenstates are just the bell basis states . in this case , if the state is separable , then all states obtained by permuting the eigenstates are also separable . to show this, we first note that it is not necessary to preserve phase when permuting the eigenstates , because any phase cancels out in the density matrix . in order to permute the eigenstates ( without regard for phase ) , it is sufficient to show that it is possible to perform three swaps between eigenstates .all permutations may be constructed from these three swaps .we may obtain three swaps between bell basis pairs using local unitaries as follows : where is the hadamard operation .thus we find that it is possible to perform any permutation of the bell basis using local operations , so if is separable , each of the states is separable .hence the state must be separable .thus , for hamiltonians that are diagonal in the bell basis , the eigenvalues of for are majorized by those for , so if is separable , then so is .thus we can not have a situation where the thermal entanglement increases with temperature . 
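the majorization step of the argument is easy to check numerically; the sketch below verifies, for a random spectrum, that the thermal eigenvalue vector at the lower temperature majorizes the one at the higher temperature (its partial sums dominate).

```python
import numpy as np

def thermal_spectrum(energies, t):
    p = np.exp(-(energies - energies.min()) / t)
    return np.sort(p / p.sum())[::-1]            # nonincreasing order

def majorizes(p, q, tol=1e-12):
    # p majorizes q iff every partial sum of p dominates the corresponding sum of q
    return bool(np.all(np.cumsum(p) >= np.cumsum(q) - tol))

rng = np.random.default_rng(2)
energies = np.sort(rng.uniform(0, 5, size=4))    # any 4-level spectrum will do
p_low = thermal_spectrum(energies, 0.3)
p_high = thermal_spectrum(energies, 1.7)
assert majorizes(p_low, p_high)
```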
as we may simplify any two - qubit hamiltonian without local terms to a form which is diagonal in the bell basis, this result holds for all two - qubit hamiltonians without local terms .we have presented a class of hamiltonians for which almost all examples have a value of the magnetic field such that the thermal state has two distinct entangled regions .one example of this class previously appeared as a figure in an online paper , but this aspect of the results was not discussed explicitly .this result is somewhat surprising as one may have expected that the small hilbert space for two qubits would mean that only one entangled region were possible .there are , however , particular cases from this class where distinct regions do not occur , for example the isotropic interaction with a transverse magnetic field .however , we find that if the interaction is perturbed only slightly by making it anisotropic or misaligning the magnetic field , distinct regions do occur .this suggests that those cases where the distinct regions do not occur are a set of measure zero in this class .it is also possible for there to be two entangled regions when the ground state is separable .in contrast to the case where there are two regions and an entangled ground state , this behavior is extremely rare .it was necessary to test millions of hamiltonians before an example of this form was found .we have also shown that certain features of the examples are necessary in order to observe the distinct entangled regions .we proved that for hamiltonians without local terms ( i.e. no magnetic field ) the entanglement must necessarily decrease with increasing temperature , so only one entangled region is possible ( at low temperatures ) . for general two - qubit hamiltonians we showed , by considering zeros of the determinant of the partial transpose of the thermal density matrix , that there can be no more than entangled regions .thus arbitrarily many transitions from entanglement to separability are not possible for two qubits ( or a qubit and a qutrit ) .this project has been supported by the australian research council .we thank andrew doherty , michael nielsen , david poulin , and other members of the qisci problem solving group at the university of queensland for valuable discussions .m. a. nielsen , ph.d .thesis , university of new mexico ( 1998 ) , e - print quant - ph/0011036 .m. c. arnesen , s. bose , and v. vedral , , 017901 ( 2001 ) .o. osenda and g. a. raggio , , 064102 ( 2005 ) .t. roscilde , p. verrucchi , a. fubini , s. haas , and v. tognetti , , 167203 ( 2004 ) .l. amico , f. baroni , a. fubini , d. patan , v. tognetti , and p. verrucchi , e - print cond - mat/0602268 .d. cavalcanti , f. g. s. l. brandao , and m. o. terra cunha , e - print quant - ph/0510132 .t. roscilde , p. verrucchi , a. fubini , s. haas , and v. tognetti , e - print cond - mat/0404403v1 . w. k. wootters , , 2245 ( 1998 ) .a. c. doherty , ( private communication ) .w. dur , g. vidal , j. i. cirac , n. linden , and s. popescu , * 87 * , 137901 ( 2001 ) .x. wang , , 012313 ( 2001 ) .a. sanpera , r. tarrach , and g. vidal , * 58 * , 826 ( 1998 ) .m. horodecki , p. horodecki , and r. horodecki , in _ quantum information _ , edited by g. alber _, springer tracts in modern physics vol .173 ( springer - verlag , berlin , 2001 ) , p. 151 ; e - print quant - ph/0109124 .l. gurvits , e - print quant - ph/0303055 .m. a. nielsen , , 436 ( 1999 ) .
|
we have found that for a wide range of two - qubit hamiltonians the canonical - ensemble thermal state is entangled in two distinct temperature regions . in most cases the ground state is entangled ; however we have also found an example where the ground state is separable and there are still two regions . this demonstrates that the qualitative behavior of entanglement with temperature can be much more complicated than might otherwise have been expected ; it is not simply determined by the entanglement of the ground state , even for the simple case of two qubits . furthermore , we prove a finite bound on the number of possible entangled regions for two qubits , thus showing that arbitrarily many transitions from entanglement to separability are not possible . we also provide an elementary proof that the spectrum of the thermal state at a lower temperature majorizes that at a higher temperature , for any hamiltonian , and use this result to show that only one entangled region is possible for the special case of hamiltonians without magnetic fields .
|
the above estimates are the best fit the researchers could deliver, suitable for current needs and not found (yet) in the literature. the following questions should be addressed in the near future:

* are there more adequate metrics for ranking proposals in the given setting?
* what are the strong and weak aspects of the approach for collective recommendation?
* which thresholds will the community choose, and will they be adjusted over time?
* are there really no previous formalized models of this setting? if there are, what comparisons can be made on design, metrics and outcomes?
* to what extent will the participation community and public managers legitimize this approach?
* what is the impact of this technological approach on public health care, social participation and the scientific community?
* to what extent does society benefit from this continuous voting process? is it worth the time spent by voters? how can this relation be evaluated in terms of spent and gained resources?

most importantly, this report is being delivered to civil society and the scientific community for consideration. given the large number of possibilities for the collective ranking procedure, and the proliferation of solutions, research efforts might aim at the organization of such procedures.

the author is grateful to cnpq (process 140860/2013-4, project 870336/1997-5), undp (contract 2013/00056, project bra/12/018), snas/sgpr, and the postgraduate committee of the ifsc/usp. thanks to the brazilian social participation community for the conception and practice of this specific voting setting.

konstan, joseph a., et al. ``teaching recommender systems at large scale: evaluation and lessons learned from a hybrid mooc.'' proceedings of the first acm conference on learning@scale conference. acm, 2014.
|
in finding the adequate way to prioritize proposals , the brazilian participation community agreed about the measurement of two indexes , one of approval and one of participation . both practice and literature is constantly handled by the experts involved , and the formalization of such model and metrics seems novel . also , the relevance of this report is strengthened by the nearby use of these indexes by the brazilian general secretariat of the republic to raise and prioritize proposals about public health care in open processes . [ [ section ] ] online decision making is a kind of recommendation system with special appeal for online social participation and electronic governments . this poses challenges on the design of such processes regarding validity , security and the adequate indicators . indeed , the processes themselves vary , and the fact that the indexes presented here seem not to be formalized and published is an evidence that such online decision making is very recent phenomena . the main contribution of this report is a modeling for an online voting process with the following characteristics : * proposals might be inserted by voters after the voting phase started . * voting might be extended as a permanent process . in other words , voting on and adding new proposals might be open continuously . * a proposal is presented to a voter one by one as random outcomes of all proposals . * each vote might be of one and only type among : `` approve '' , `` disapprove '' and `` indifferent '' . * voters vote without authentication . * intended mostly for national rankings , but can also be local or have foreign participation . * should result in a ranking of proposals to assist public management . this setting requires care about security and validity . some of which are : * adequate sampling of individual proposals and overall ranking . * registration of the ip address and time of votes to ease detection of automated and other fraudulent efforts . * reasonable use of the outcomes from the voting process . this requires probing the survey being conducted and its purposes . the indexes here presented target indicatives for the brazilian federal government about the most important health care proposals . given the unauthenticated voting , the outcomes might be regarded as reference rankings if data is minimally shared and checked for inadequate data entry ( such as voting by automated scripts or a persistent participant introduced bias ) . [ [ section-1 ] ] the approval index and the participation index of the proposal was defined as : where , and are approval count , disapproval count and exhibition count , respectively . note that ] , and is the count of the `` indifferent '' manifestations received by proposal . also , such and indexes are expected , for each proposal , to be a constant plus a sampling estimate error that should be smaller as raises . this error is thought to be acceptable if is above a threshold established by the participation community and public managers . as an initial decision , the staff agreed to use as to select of all proposals . a threshold can be used as a required level of engagement for proposals to be relevant , while the threshold is used to classify the outcome as `` approved'',``disapproved '' and `` clash '' . more specifically : if a proposal is both sampled and relevant , than it is prioritized . the coherent values of ( or ) and were chosen as standards of the decision model . these are likely to change with implementation and management . 
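as a rough illustration of the decision rule, the sketch below implements the thresholding described in the text. a strong caveat: the exact expressions for the approval index and the participation index were lost in extraction, so the formulas used here (net approval over votes cast, and votes cast over exhibitions) are assumptions, not the paper's definitions; only the sampled/relevant thresholding logic follows the text.

```python
from dataclasses import dataclass

@dataclass
class Proposal:
    approvals: int      # a
    disapprovals: int   # d
    indifferent: int    # i
    exhibitions: int    # e (times the proposal was shown to voters)

def approval_index(p: Proposal) -> float:        # assumed form
    voted = p.approvals + p.disapprovals
    return (p.approvals - p.disapprovals) / voted if voted else 0.0

def participation_index(p: Proposal) -> float:   # assumed form
    total = p.approvals + p.disapprovals + p.indifferent
    return total / p.exhibitions if p.exhibitions else 0.0

def classify(p: Proposal, tau_a: float, tau_p: float) -> str:
    # a proposal is prioritized when it is both "sampled" (enough participation)
    # and "relevant" (approval index above threshold); otherwise it may be
    # "disapproved" or a "clash", as described in the text
    if participation_index(p) < tau_p:
        return "not sampled"
    ia = approval_index(p)
    if ia > tau_a:
        return "approved (prioritized)"
    if ia < -tau_a:
        return "disapproved"
    return "clash"

print(classify(Proposal(approvals=120, disapprovals=30, indifferent=25, exhibitions=400),
               tau_a=0.2, tau_p=0.3))
```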
thresholds might be dependent on proposal , such as given by meaningful expressions . immediate examples are : these bonds among proposal variables has been discarded by the staff in the present initial steps . many of the online decision processes conceived and practiced resemble our model and have similar measurements to the and indexes . this section presents a collection of models more familiar to the brazilian participatory community , with focus on the mechanisms , not on historical notes . pairwise is part of the tackled paradigm : the ranking procedure accepts new proposals while the voting occurs . even so , pairwise voting is comparative , voter chooses between two proposals at each vote , and this does not fit proposed procedure . appgree software ranks proposals by sampling voters in cycles , each with fewer proposals this is adequate for a range of decision making cases and showcases statistical estimates utility . the system has a separate proposition phase , and relies on an organized group engagement and user identities , which also does not fit current needs . liquid feedback is a very renowned and bleeding edge solution for collective decision making . it relies on delegating your voting count on specific subjects to other people you know or trust . therefore , it does not fit current needs . even so , this framework have precious considerations for our case , such as about ranking and presenting proposals to voters in the most useful ways . a brazilian solution , used in diverse software and specially important as the output of a nation - wide decision making need , is the agora algorithm . it presents a decision procedure in phases ( agenda proposition , deliberations proposition and commenting , voting ) with resolution outcomes . although coherent , this framework requires authentication and might need experimentation and tuning in order to be effective with more than dozens or a few hundreds of participants . there is a number of other solutions for online collaborative prioritization , such as ideascale , kidling , or any flavor of an analytic hierarchy process ( ahp ) . authors hope to better formalize possible solutions ( and found implementations ) , maybe through recommender systems theory .
|
we begin by demonstrating the efficiency of our unique approach to improve the performance of two of the most fragile , but critical infrastructures , namely , the power supply system in europe as well as the global internet at the level of service providers , the so - called point of presence ( pop ) .the breakdown of any of these networks would constitute a major disaster due to the strong dependency of modern society on electrical power and internet . in figs .[ fig : topology]a and [ fig : topology]b we show the backbone of the european power grid and the location of the european pop and their respective vulnerability in figs . [ fig : topology]c and [ fig : topology]d . the dotted lines in figs .[ fig : topology]c and [ fig : topology]d represent the size of the largest connected component of the networks after a fraction of the most connected nodes have been removed . instead of using the static approach to find the most connected nodes at the beginning of the attack, we use a dynamical approach . in this casethe degrees are recalculated during the attack , which corresponds to a more harmful strategy . as a consequence , in their current structure ,the shutdown of only of the power stations and a cut of of pop would affect of the network integrity . in order to avoid such a dramatic breakdown and reduce the fragility of these networks , herewe propose a strategy to exchange only a small number of power lines or cables without increasing the total length of the links and the number of links of each node .these small local changes not only mitigate the efficiency of malicious attacks , but at the same time preserve the functionality of the system . in figs .[ fig : topology]c and [ fig : topology]d the robustness of the original networks are given by the areas under the dashed curves , while the areas under the solid lines correspond to the robustness of the improved networks .therefore , the green areas in figs . [ fig : topology]c and [ fig : topology]d demonstrate the significant improvement of the resilience of the network for any fraction of attack .this means that terrorists would cause less damage or they would have to attack more power stations , and hackers would have to attack more pop in order to significantly damage the system .next , we describe in detail our methodology . usually robustness is measured by the value of , the critical fraction of attacks at which the network completely collapses .this measure ignores situations in which the network suffers a big damage without completely collapsing .we thus propose here a unique measure which considers the size of the largest component during _ all possible _ malicious attacks .malicious raids often consist of a certain fraction of hits and we want to assure that our process of reconstructing networks will keep the infrastructure as operative as possible , even before collapsing .our unique robustness measure , is thus defined as , where is the number of nodes in the network and is the fraction of nodes in the largest connected cluster after removing nodes .the normalization factor ensures that the robustness of networks with different sizes can be compared .the range of possible values is between and , where these limits correspond , respectively , to a star network and a fully connected graph . 
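the robustness measure is straightforward to compute; the sketch below implements r = (1/n) * sum_{q=1}^{n} s(q) under an adaptive highest-degree attack (degrees recomputed after every removal), using networkx. a single attack realization is used here for brevity, whereas the paper averages over several.

```python
import networkx as nx

def robustness(g: nx.Graph) -> float:
    g = g.copy()
    n = g.number_of_nodes()
    total = 0.0
    for _ in range(n):
        # adaptive attack: always remove a currently highest-degree node
        target = max(g.degree, key=lambda kv: kv[1])[0]
        g.remove_node(target)
        if g.number_of_nodes() > 0:
            total += max(len(c) for c in nx.connected_components(g)) / n
    return total / n

g = nx.barabasi_albert_graph(200, 2, seed=0)
print(robustness(g))
```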
for a given network ,the robustness could be enhanced in many ways .adding links without any restrictions until the network is fully connected would be an obvious one .however , for practical purposes , this option can be useless since , for example , the installation of power lines between each pair of power plants would skyrocket costs and transmission losses . by associating costs to each link of the network , we must seek for a reconstruction solution that minimizes the total cost of the changes .we also assume that changing the degree of a node can be particularly more expensive than changing edges .these two assumptions suggest keeping invariant the number of links and the degree of each node . under these constraints, we propose the following algorithm to mitigate malicious attacks . in the original networkwe swap the connections of two randomly chosen edges , that is , the edges and , which connect node with node , and node with node , respectively , become and , only if the robustness of the network is increased , i.e. , . note that a change of the network usually leads to an adjustment in the attack sequence .we then repeat this procedure with another randomly chosen pair of edges until no further substantial improvement is achieved for a given large number of consecutive swapping trials . in fig .1 of the si we show numerical tests indicating that the algorithm can indeed yield close to optimal robustness .as described so far , our algorithm can be used to improve a network against malicious attacks while conserving the number of links per node .nevertheless , for real networks with economical constraints , this conservation of degree is not enough since the cost , like the total length of links , can not be exceedingly large and also the number of changes should remain small . therefore , for reconstructing the eu power grid and the worldwide pop , we use an additional condition that the swap of two links is only accepted if the total length ( geographically calculated ) of edges does not increase and the robustness is increased by more than a certain value .figure [ fig : topology1]a shows that , despite these strong constraints , the robustness can be increased by for pop and for the eu grid with only of link changes and by and , respectively , with only .interestingly , although the robustness is clearly improved , we observe that the percolation threshold remains practically the same for both networks , justifying our unique definition for the measure as a robustness criterion . more strikingly , the conductance distribution , which is a useful measure for the functionality of the network , also does not change ( see fig .[ fig : topology1]b ) .this suggests that our optimized network is not only more robust against malicious attacks , but also does not increase the total length of connections without any loss of functionality .+ the success of this method in reconstructing real networks to improve robustness at low cost and small effort leads us to the following question : can we apply our algorithm to design new highly robust networks against malicious attacks ? in this case , since we build the network from the beginning , the number of changes should not represent any limitation , since we are dealing with only a computational problem . for designing , the only constraint which remainsis the invariance of the degree distribution .here we study both artificial scale - free and erds - rnyi networks . 
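the degree-preserving improvement step can then be sketched as a greedy double-edge swap that is kept only when it raises r. this is a compact illustration rather than an efficient implementation: r is recomputed from scratch at every trial, and the paper's stopping rule is replaced by a fixed trial budget.

```python
import random
import networkx as nx

def improve(g: nx.Graph, trials: int = 2000, seed: int = 0) -> nx.Graph:
    rng = random.Random(seed)
    g = g.copy()
    best = robustness(g)                      # from the previous snippet
    for _ in range(trials):
        (i, j), (k, l) = rng.sample(list(g.edges), 2)
        # reject swaps that would create self-loops or double edges
        if len({i, j, k, l}) < 4 or g.has_edge(i, l) or g.has_edge(k, j):
            continue
        g.remove_edges_from([(i, j), (k, l)])
        g.add_edges_from([(i, l), (k, j)])
        r_new = robustness(g)
        if r_new > best:
            best = r_new                      # accept the swap
        else:                                 # revert it
            g.remove_edges_from([(i, l), (k, j)])
            g.add_edges_from([(i, j), (k, l)])
    return g
```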
in fig .[ fig : size ] we show how the robustness depends on the system size for designed scale - free networks with degree distribution , with and , and erds - rnyi networks with average degree and .one can see that our method is also very efficient in designing robust networks .while the most robust network structure for a given degree distribution is virtually impossible to determine , our study reveals that all networks investigated can be improved significantly ( see fig .[ fig : size ] and fig . 2 in si ) .moreover , as shown in fig .[ fig : onion]a , the robust networks we obtain clearly share a common and unique `` onion - like '' structure consisting of a core of highly connected nodes hierarchically surrounded by rings of nodes with decreasing degree . to quantitativelytest our observation , we calculate the maximal number of nodes with degree which are connected through nodes with a degree smaller or equal to . as shown in fig .[ fig : onion]b , paths between nodes of equal degree , which are not passing through nodes with higher degree , emerge in the robust networks .although at a first glance onion - like networks might look similar to high assortative networks , the later ones are different and can be significantly more fragile ( see fig . 3 in si ) .we also find that onion - like networks are also robust against other kinds of targeted attacks such on high betweenness nodes ( see fig . 4 in si ) .the last topological properties we study are the average shortest path length between two nodes , , and the diameter , , corresponding to the maximal distance between any pair of nodes .counter intuitively , and do not decrease after the optimization , but slightly increase .nevertheless , it seems that both values grow not faster than logarithmically with the system size .( see fig . 5 in si )in summary , we have introduced a unique measure for robustness of networks and used this measure to develop a method that significantly improves , with low cost , their robustness against malicious attacks .our approach has been found to be successfully useful as demonstrated on two real network systems , the european power grid of stations and the internet .our results show that with a reasonably economical effort , significant gains can be achieved for their robustness while conserving the nodes degrees and the total length of power lines or cables . in the case of designing scale - free networks ,a unique `` onion - like '' topology characterizing robust networks is revealed .this insight enables to design robust networks with a prescribed degree distribution .the applications of our results are imminent on one hand to guide the improvement of existing networks but also serve on the other hand to design future infrastructures with improved robustness .we thank t. mihaljev for useful discussions , and y. shavitt and n. zilberman for providing the point of presence internet data .we acknowledge financial support from the eth competence center `` coping with crises in complex socio - economic systems '' ( ccss ) through eth research grant ch1 - 01 - 08 - 2 .s.h . acknowledges support from the israel science foundation , onr , dtra and the epiwork eu project .a.a.m . andj.s.a would like to thank cnpq , capes , funcap , and finep for financial support . * supporting information appendix * + + a. 
detailed description of our algorithm for fig .1 : our algorithm starts with calculating the robustness of the original pop ( power supply system ) .therefore , the robustness of ( ) independent attacks based on the adaptive calculation of the highest degree nodes are calculated .the average robustness value is assigned to the initial robustness .+ to improve the network s robustness , two nodes and , and two of their neighbors and , are chosen randomly .if the swap of the neighbors , becomes neighbor of and becomes neighbor of , neither creates self - connections nor double - connections , the geographical lengths of the edges and is determined .only if the sum of these lengths is shorter than the sum of the lengths of and , the robustness of the new network is calculated , again averaged over ( ) independent attacks .the swap of the neighbors is accepted , only if it would increase the robustness significantly , whereas the threshold is arbitrary set to ( ) .note that the threshold should be so large that nearly every change is rejected .after testing ( ) independent swaps , the threshold is reduced by a arbitrary factor of .+ then ( ) more swaps are performed with the new threshold , before the threshold is decreased again by a factor of .this loop is repeated times and finally , after ( ) tested swaps , the improved networks , shown in fig . 1 , are obtained .+ note that the including of the decreasing threshold ensures that changes with the largest impact , are performed first . + b.optimal test : in order to test if our algorithm identifies an optimal or near - to - optimal solution , we applied our procedure to three different types of artificial networks , having all the same number of links , but different degree distributions .the first type is the scale - free network , the second is the erds - rnyi network and the third a random regular network having a fixed degree .while the most robust network against malicious attacks for a given degree distribution is unknown , the most robust network for a given number of edges is a network in which all nodes have the same degree .therefore , to test our model we will only impose in our algorithm the constraint of conserving the total number of links , but allowing changes of the degree distribution . in this way, the original swapping mechanism of two connections in the algorithm is replaced by the exchange of a given edge by another one which connects two randomly , but not connected nodes .we find that the robustness of the final networks are practically indistinguishable , as shown in supporting figure 1a .not only their robustness is similar , but also the obtained degree distributions of the networks converge to a delta function around ( see supporting figure 1b ) .these results represent strong indication that our algorithm efficiently finds the structure very close to the most robust network .simulations with different initial realizations of distinct networks types also converged to similar final states in both robustness and degree distribution .+ c. model networks : in supporting figs .2a and 2c we show the robustness of scale - free and erds - rnyi network models .the robustness of the model networks are given by the areas under the dashed curves , while the areas under the solid lines correspond to the robustness of the improved networks .therefore , the green areas indicate the improvement of the resilience of the network for any fraction of attack . 
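part a of the si description adds two ingredients to the basic swap: a geographic-length constraint and an acceptance threshold that is lowered by a fixed factor after each batch of trials. the numerical constants were stripped from the text, so the batch sizes, initial threshold and decay factor below are placeholders; robustness() is reused from the earlier snippet and node coordinates are supplied in pos.

```python
import random

def length(pos, a, b):
    (x1, y1), (x2, y2) = pos[a], pos[b]
    return ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5

def constrained_improve(g, pos, batches=10, trials_per_batch=200,
                        threshold=0.05, decay=0.5, seed=0):
    rng = random.Random(seed)
    g = g.copy()
    best = robustness(g)
    for _ in range(batches):
        for _ in range(trials_per_batch):
            (i, j), (k, l) = rng.sample(list(g.edges), 2)
            if len({i, j, k, l}) < 4 or g.has_edge(i, l) or g.has_edge(k, j):
                continue
            if length(pos, i, l) + length(pos, k, j) > length(pos, i, j) + length(pos, k, l):
                continue                      # would increase the total cable length
            g.remove_edges_from([(i, j), (k, l)]); g.add_edges_from([(i, l), (k, j)])
            r_new = robustness(g)
            if r_new > best + threshold:      # accept only a significant improvement
                best = r_new
            else:
                g.remove_edges_from([(i, l), (k, j)]); g.add_edges_from([(i, j), (k, l)])
        threshold *= decay                    # lower the acceptance threshold per batch
    return g
```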
as shown in supporting figs .2b and 2d , the overall behavior of the conductance distribution , which is a useful measure for the transport functionality of the network , also does not change .+ d. assortative networks : in supporting fig .3 we show the result of our method when applied to a high assortative network ( ) that does not display onion - like structure .although these two networks have the same degree distribution , their topology is quite different . the percolation thresholds for both networks are close , but the onion - like network is definitely more robust . it is obvious that onion - like and assortativity are distinct properties and that our new robustness measure is significantly more adequate compared to the classical measure .+ e. high betweenness attack : instead of removing the most connected nodes from a network , other attack strategies can also be used .for example , one of the most harmful strategies is the so - called high - betweenness based adaptive attack . in this casethe nodes are removed according to their betweenness in the network after removing each node . in supporting figure 4a and 4b the robustness against this type of attackis shown for scale - free and erds - rnyi networks .note that although the networks are optimized against high degree - based attack , the designed networks become also significantly more resilient to high - betweenness adaptive attack .+ f. properties of onion - like networks : in supporting figs . 5a and5b we show the average shortest path length and the diameter of onion - like networks obtained from scale - free and erds - rnyi network models . while both properties remain similar for erds - rnyi and the improved network , both properties increase for onion - like networks starting from scale - free networks .nevertheless , the diameter and the average shortest path length increase not faster than logarithmic with system size for onion - like networks .
|
terrorist attacks on transportation networks have traumatized modern societies. with a single blast, it has become possible to paralyze airline traffic, electric power supply, ground transportation or internet communication. how and at what cost can one restructure a network so that it becomes more robust against a malicious attack? we introduce a unique measure for robustness and use it to devise a method to mitigate this risk economically and efficiently. we demonstrate its efficiency on the european electricity system and on the internet as well as on complex network models. we show that with small changes in the network structure (low cost) the robustness of diverse networks can be improved dramatically while their functionality remains unchanged. our results are useful not only for significantly improving, at low cost, the robustness of existing infrastructures but also for designing economically robust network systems. the vulnerability of modern infrastructures stems from their network structure, whose very high degree of interconnectedness makes the system resilient against random attacks but extremely vulnerable to targeted raids. we developed an efficient mitigation method and discovered that with relatively minor modifications in the topology of a given network, and without increasing the overall length of connections, it is possible to mitigate considerably the danger of malicious attacks. our mitigation method against malicious attacks is based on developing and introducing a unique measure for robustness. we show that the common measure for the robustness of networks in terms of the critical fraction of attacks at which the system completely collapses, the percolation threshold, may not be useful in many realistic cases. this measure, for example, ignores situations in which the network suffers significant damage but still keeps its integrity. besides the percolation threshold, there are other robustness measures based, for example, on the shortest path or on the graph spectrum. they are, however, less frequently used because they are too complex or less intuitive. in contrast, our unique robustness measure, which considers the size of the largest component during all possible malicious attacks, is as simple as possible and only as complex as necessary. due to the broad range covered by our definition of robustness, we can ensure that our process of reconstructing networks maintains the infrastructure as operative as possible, even before collapsing.
|
multirate filter banks are widely used as computationally efficient and flexible building blocks for subband signal processing .these filter banks decompose an input signal into its subband components and reconstruct the signal from the downsampled version of these components with little or no distortion . in various applications such as noise reduction ,speech enhancement and audio coding nonuniform time - fre - quency representation is highly desired .a well - known example is approximation of critical bands of human auditory system . by using nonuniform filter banksthis problem can be effectively solved .another problem that is solved by means of nonuniform filter banks is an estimation of the frequency dependent reverberation time .one simple way of obtaining nonuniform filter bank is employing allpass transform to uniform filter bank .efficient nonuniform dft polyphase filter banks were proposed in .however , dft - based filter banks produce complex - value channel signals even for real - value input .therefore the subsequent subband processing becomes more sophisticated .in contrast to dft filter banks allpass transformed ( or warped ) cosine - modulated filter banks ( cmfb ) developed in are allow to avoid having complex subband signals for a real - value input .it is worth to mention that aliasing cancellation conditions do not hold for warped cmfb thus all aliasing components should be suppressed by synthesis filter bank .the paper presents a practical approach to optimal design of multichannel oversampled warped cmfb with low aliasing and amplitude distortions . just like uniform cmfb, analysis and synthesis filters of warped cmfb are obtained from one prototype filter , that results in high design efficiency .a distinguishing characteristic of warped cmfb is that for each channel subsampling factors should be determined separately .the practical rule of selection subsampling factors is also derived .the -channel warped cmfb proposed in based on uniform cmfb developed in .the impulse responses of the analysis ( ] ) filters are cosine - modulated versions of the prototype filter ] is assumed to be multiple of , i.e. . the transfer functions of analysis and synthesis filters can be expressed as follows : [ fb_z_trans ] where , and . the superscript denotes the complex conjugation . in ( [ fb_z_trans ] ) is linear phase lowpass fir filter prototype with cutoff frequency .allpass transformation of cmfb consists in replacing all the delays in uniform cmfb with causal and stable allpass filters : in this paper we consider first - order allpass filter with frequency response .the phase response is written as thus , replacing all terms by by first order allpass filters leads to a transformation of the frequency scale as shown in figure [ fig1 ] .figure [ fig2 ] shows the block diagram of the nonuniform oversampled filter with overall transfer function where distortion transfer function shows amplitude distortion introduced by analysis and synthesis filters , while aliasing distortion is described by an aliasing transfer function . in perfect reconstruction case and nn ] , then let equal this threshold . define the envelope function ] should be based on the same principle as the mechanism of aliasing cancellation _ nonadjacent channels do not overlap_. this means that the range ] defines as follows .the ordered set defines passband frequency of subfilters of uniform cmfb . 
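a small sketch of the allpass warping underlying the filter bank is given below. the warping map is the standard first-order allpass phase law (up to sign and orientation conventions, which were not preserved in the extracted text); both the allpass coefficient for 44.1 khz and the exact subsampling rule are treated as assumptions here. the code only illustrates how the uniform band edges of the prototype cmfb map onto a non-uniform grid and how integer subsampling factors could be read off from the warped bandwidths.

```python
import numpy as np

def warp(omega, a):
    # warped frequency for normalized frequency omega in [0, pi];
    # standard first-order allpass map (orientation of a is an assumption)
    return omega + 2.0 * np.arctan(a * np.sin(omega) / (1.0 - a * np.cos(omega)))

def warped_band_edges(m, a):
    uniform_edges = np.arange(m + 1) * np.pi / m     # band edges of the uniform cmfb
    return warp(uniform_edges, a)

m, a = 16, 0.75          # placeholders: channel count and allpass coefficient (bark-like at 44.1 khz)
edges = warped_band_edges(m, a)
bandwidths = np.diff(edges)                          # non-uniform channel bandwidths
# one simple reading of the subsampling rule: do not subsample beyond the warped bandwidth
s = np.floor(np.pi / bandwidths).astype(int)
print(edges, s)
```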
mapping ( [ ap_transf ] ) allows to obtain corresponding frequency edges of warped cmfb ( figure [ fig_3 ] ) ) .,width=287 ] according to proposed rule the frequency band for -th channel of warped cmfb expressed as for and the following relation holds [ eq_new_rule ] joint use of rule ( [ eq_old_rule])([eq_new_rule ] ) allows determining the subsampling ratios for warped cmfb .let us consider the -channel warped cmfb approximating the psychoacoustic bark scale for standard sampling frequency khz .at first it is necessary to determine warping coefficient . according to for khz .considering rule described in previous section it is possible to choose the subsampling ratios ( figure [ fig2 ] ) such that aliasing does not affect channel signals .for instanse examine the subsampling factor .necessary frequency bounds of uniform cmfb given bellow using ( [ eq_newrule1 ] ) the corresponding frequency range of warped cmfb can be determined as the subsampling factor is obtained by applying ( [ eq_old_rule ] ) similarly , the remaining subsampling factors for filter bank under consideration are selected the proposed design method was implemented using matlab on an intel celeron 2.8 ghz with 1 gb physical memory .it took nine outer loop iteration for the algorithm to converge ( 6 minutes ) .the frequency responses of initial and optimized filter prototypes are plotted in figure [ fig_4 ] .it can be seen that proposed optimization procedure considerably minimized the stopband energy of filter prototype .figure [ fig_5 ] shows the resulting magnitude frequency response of resulting warped cmfb . for a chosen subsampling ratios the magnitude response of aliasing transfer functions for initial and optimized warped cmfb were calculated ( figure [ fig_6 ] ) .it can be seen that aliasing distortion has the same order of magnitude as the stopband attenuation of the prototype filters in figure [ fig_4 ] ..,width=302 ] ) .,width=302 ] the level of aliasing component which appears due to decimation / interpolation of channel signals can be shown using bifrequency system function ( figure [ fig_7][fig_8 ] ) .figure [ fig_8 ] reveals that optimized warped cmfb attenuates aliasing component to the level of -80 - 90 db .overall transfer functions of initial and optimized filter banks are given in figure [ fig_9 ] .overall transfer function of initial warped cmfb suffers from irregular distortion caused by aliasing . with optimized filter prototype the ripples of overall transferare decreased significantly ( from 0.15 db to 0.004 db ) .thus the design example shows that proposed algorithm effectively minimizes the overall distortion introduced by warped cmfb .a practical method for the design of multichannel oversampled warped cmfb with low level of amplitude distortion has been proposed .formulation of optimization problem and imposed constraints on overall filter bank transfer function are allowed to minimize amplitude distortion . also the rule for selection of subsampling factor in warped cmfbis derived . using the proposed design method it is possible to obtain high quality nonuniform filter bank with low distortion level .this work was supported by the leading academic discipline project of shanghai municipal education committee ( j50104 ) and by the belarusian fundamental research fund ( f11ms-037 ) .01 r. e. crochiere , l. r. rabiner , _ multirate digital signal processing _ , prentice hall , englewood cliffs , nj , usa , 1983 .e. galijaevi and j. 
kliewer , `` design of all - pass - based non - uniform oversampled dft filter banks , '' in _ proc . of intl .conference on acoustics , speech , and signal processing ( icassp ) , _ * vol . 2 * , pp . 11811184 , ( 2002 )k . goh and y .- c .lim , `` an efficient algorithm for the design of weighted minimax m - channel cosine - modulated filter banks , '' _ ieee trans .signal processing _ , * vol .46 , no . 5 * , pp .14261430 , ( 1998 ) .r.d koilpillai and p.p .vaidyanathan , `` cosine - modulated fir filter banks satisfying perfect reconstruction , '' _ ieee trans .signal processing _ , * vol .4 * , pp . 770783 ( 1992 ) .h. w. lllmann and p. vary , `` improved design of oversampled allpass transformed dft filter - banks with near - perfect reconstruction , '' in _ proc . of european signal processing conference ( eusipco ) , _ pp .5054 , ( 2007 ) . h. w. lllmann and p. vary , `` estimation of the frequency dependent reverberation time by means of warped filter - banks , '' in _ proc . of intl .conference on acoustics , speech , and signal processing ( icassp ) , _ pp .309312 , ( 2011 ) . m. parfieniuk and a. petrovsky , `` tunable non - uniform filter bank mixing cosine modulation with perceptual frequency warping by allpass transformation , '' _ automatic control and computer sciences _ , * vol .4 * , pp . 44 - 52 , ( 2004 ) . a. piotrowski and m. parfieniuk ,_ digital filter banks : analysis , synthesis , and implementation for multimedia systems _ , wydawnictwo politechniki bialostockiej , bialystok , poland , 2006 , ( in polish ) .smith and j.s .abel , `` bark and erb bilinear transforms , '' _ ieee trans .speech , audio processing _ , * vol . 7 , no .6 * , pp . 697708 , ( 1999 ) .p. vary , `` digital filter banks with unequal resolution , '' in _ proc .european signal proc ._ , pp . 4142 , ( 1980 ) .
|
a practical approach to the optimal design of multichannel oversampled warped cosine-modulated filter banks (cmfb) is proposed. the warped cmfb is obtained by allpass transformation of a uniform cmfb. the paper addresses the problems of minimizing amplitude distortion and suppressing the aliasing components that emerge due to oversampling of the filter bank channel signals. the proposed optimization-based design considerably reduces the distortion of the overall filter bank transfer function while taking the channel subsampling ratios into account. keywords: nonuniform filter bank, optimization.
|
rapid - purification protocols increase the rate at which the state of a system is purified by a continuous measurement .they do this by applying feedback control to the system as the measurement proceeds .all such protocols described to date have been devised for continuous measurements of an observable ( that is , measurements that are not dissipative ) . under this kind of measurement the evolution of the system density matrix , ,is given by the stochastic master equation , - k[x,[x , \rho ] ] dt \nonumber \\ & & + \sqrt{2k } ( x\rho + \rho x - 2\langle x \rangle \rho ) dw , \label{eq1}\end{aligned}\ ] ] where is the hermitian operator corresponding to the observable being measured , is the hamiltonian of the system , is gaussian white noise satisfying the ito calculus relation .the observers continuous measurement record , which we will denote by , is given by .this kind of measurement will project the system onto an eigenstate of after a time , where is the difference between the two eigenvalues of that are nearest each other .photon counting and optical homodyning do not fall into the above class of measurements because they subject the system to dissipation .thus if one has a single optical qubit , consisting of a single mode containing no more than one photon , and one measures it with a photon counter , then regardless of whether the measurement tells us that the state was initially or , as the final state is always .if we wish we can think of this as a measurement of the photon number ( that is , a measurement in the class above with ) , followed by an irreversible operation that takes both and to the vaccum .our purpose here is to examine whether there exist rapid - purification feedback protocols for homodyne detection performed on a single optical qubit , and if so , to compare their properties with those pertaining to a continuous measurement of an observable on a single qubit .our motivation is partly theoretical interest regarding the effect of dissipation on rapid - purificaton protocols , and partly to explore whether such protocols can be implemented in an optical setting . before we begin it is worth recalling the properties of the single - qubit rapid - purification protocols that have been derived to date for non - dissipative measurements .the first is the protocol introduced by one of us ( see also ) in which one applies feedback control to speed up the increase in the _ average _ purity of the system .the average here is taken over all possible realizations of the measurement ( all possible measurement records ) .the protocol involves applying feedback during the measurement to keep the bloch vector of the state of the qubit perpendicular to the basis of the measured observable , . in the limit of strong feedback , and high final average purity ,this provides a factor of two decrease in the time required to reach a given average purity . in the limit of strong feedbackthe protocol also eliminates the stochasticity in the purification process , so that the purity increases deterministically .the second protocol , introduced by wiseman and ralph ( see also ) , involves applying feedback to keep the bloch vector parallel to the basis of the measured observable .( if the system has no appreciable hamiltonian , then the measurement will do this of its own accord , and feedback is not required . 
)this protocol minimizes the _ average time _one has to wait to reach a given purity .the decrease in this average waiting time over the previous protocol is a factor of two , and in this case the evolution of the purity is stochastic . in the next sectionwe examine homodyne detection of a single optical qubit , and derive a deterministic rapid - puritifcation protocol equivalent to the first protocol discussed above . in section [ sec3 ]we calculate the performance of two protocols that are analogous in various ways to the wiseman - ralph protocol .section [ conc ] summarizes with some concluding remarks .the dynamics of a single mode of an optical cavity , where the output light is monitored via homodyne detection , is given by \rho dt + \sqrt{2\eta \gamma } ( a e^{i\theta } \rho + \rho a^\dagger e^{-i\theta } ) dw \nonumber \\ & & - \sqrt{2\gamma } \langle a e^{i\theta } + a^\dagger e^{-i\theta } \rangle \rho dw , \label{smehom } \end{aligned}\ ] ] where \rho \equiv a^\dagger a \rho + \rho a^\dagger a - 2 a \rho a^\dagger ] , and using the above equations we find that + ( \eta-1)(1+z)^2 \right\ } dt + \sqrt{8\eta\gamma } l \left ( x \cos\theta + y\sin\theta \right ) dw .\ ] ] we wish to maximize the rate of decay of by adjusting the phase of the local oscillator , , as the measurement proceeds .inspection of the above equation makes it clear how to do this : we simply need to choose at each time so that .this not only maximizes the rate of decay of , but also eliminates the stochastic terms in and so that the evolutions of both are deterministic .this parallels the behavior of the rapid - purification algorithm in .when we choose at each time to maximize the rate of reduction of , the evolution of becomes .\ ] ] to achieve this we must continually adjust so that ] is the normalization .the true joint probability density for and is given by the product of the gaussian densities and , multiplied by .that is g(r ) h(q ) .\ ] ] to obtain the solution to the inefficient sme we must average over the keeping fixed .this solution is therefore where is merely the normalization . from thiswe see that we need only perform an integration over the gaussian density for , which is straightforward .
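for illustration, the sketch below integrates a homodyne stochastic master equation for a single qubit with an adaptively chosen local-oscillator phase. the paper's bloch-vector equations lost their right-hand sides in extraction, so the code uses the standard wiseman-milburn form of the homodyne sme (numerical factors may differ from the paper's), and the feedback rule, choosing theta so that x cos(theta) + y sin(theta) = 0, is one reading of the protocol, picked because it cancels the stochastic term quoted in the text.

```python
import numpy as np

# euler-maruyama integration of d rho = gamma*D[a]rho dt + sqrt(eta*gamma)*H[a e^{i theta}]rho dW
rng = np.random.default_rng(3)
a = np.array([[0, 1], [0, 0]], dtype=complex)          # sigma_minus
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

def D(c, rho):
    return c @ rho @ c.conj().T - 0.5 * (c.conj().T @ c @ rho + rho @ c.conj().T @ c)

def H(c, rho):
    m = c @ rho + rho @ c.conj().T
    return m - np.trace(m) * rho

gamma, eta, dt, steps = 1.0, 1.0, 1e-3, 4000
rho = 0.5 * np.eye(2, dtype=complex)                   # maximally mixed initial state
for _ in range(steps):
    x = np.trace(rho @ sx).real
    y = np.trace(rho @ sy).real
    theta = np.arctan2(y, x) + np.pi / 2               # makes x*cos(theta) + y*sin(theta) = 0
    c = a * np.exp(1j * theta)
    dw = np.sqrt(dt) * rng.standard_normal()
    rho = rho + gamma * D(a, rho) * dt + np.sqrt(eta * gamma) * H(c, rho) * dw
    rho = 0.5 * (rho + rho.conj().T)                   # keep hermitian, renormalize
    rho = rho / np.trace(rho).real
purity = np.trace(rho @ rho).real
```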
|
we present a number of rapid-purification feedback protocols for optical homodyne detection of a single optical qubit. we first derive a protocol that speeds up the rate of increase of the average purity of the system, and find that, like the equivalent protocol for a non-dissipative measurement, it generates a deterministic evolution for the purity in the limit of strong feedback. we also consider two analogues of the wiseman-ralph rapid-purification protocol in this setting, and show that, like that protocol, they speed up the average time taken to reach a fixed level of purity. we also examine how the performance of these protocols changes with detection efficiency, which is an important practical consideration.
|
the densification and expansion of wireless networks pose new challenges on interference management and reducing energy consumption . in a dense heterogeneous network ( hetnet ) , base stations ( bss ) are typically deployed to satisfy the peak traffic volume and they are expected to have low activity outside rush hours such as nighttime .there is a high potential for energy saving if bss can be switched off according to the traffic load .obviously , cell activation is coupled with user association : the users in the muted cells must be re - associated with other bss . in addition ,cell muting and user re - association impose further challenges on interference management , since the user may not be connected to the bs with the strongest signal strength .this interference issue can be resolved by interference coordination , i.e. , properly sharing the channels among multiple cells and then distributing them to the associated users in each cell .hence , to obtain energy - efficient resource management strategies , multicell multiuser channel assignment should be integrated into the optimization of the cell activation and user association .however , the resource management that considers the above elements jointly is very challenging mathematically because the inter - cell interference coupling leads to the inherent non - convexity in the optimization problems . to make the problems tractable, the previous studies relied on worst - case interference assumption , average interference assumption , or neglecting inter - cell interference . in these works ,the interference was assumed _ static _ ( or absent ) , i.e. , independent of the resource allocation decisions in each cell , when estimating the user achievable rate .clearly , this is a suboptimal design because the bs deactivation will cause interference fluctuation in the network , hence affecting the user rate .this paper is developing a new framework for energy - efficient resource management to consider the interference coupling caused by cell deactivation .the idea is to pre - calculate the user rate under each possible _ interference pattern _( i.e. an interference scenario in the network , described as one combination of on / off activities of the bss ) , and then perform resource allocation among these patterns .this allocation yields the actual interference and the corresponding user achievable rates that well match the interference at the same time .consider a downlink hetnet , where a number of small cells are embedded in the conventional macro cellular network .the set of all ( macro and small ) cells is denoted by .the cells can be switched on or off every time period ( say , many minutes ) . 
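the pattern-based rate pre-computation can be sketched as follows: enumerate the on/off combinations of the base stations, and for each pattern, base station and test point evaluate an sinr and a shannon-type spectral efficiency r_kbi. all powers, gains and the noise level below are illustrative placeholders, and the ergodic average over fading is replaced by a single channel realization for brevity.

```python
import itertools
import numpy as np

rng = np.random.default_rng(4)
n_bs, n_tp = 3, 5
p = np.array([40.0, 1.0, 1.0])                    # tx psd per bs (one macro, two picos), arbitrary units
g = rng.exponential(1.0, size=(n_tp, n_bs)) * np.array([1.0, 0.3, 0.3])   # channel gains g_kb
n0 = 1e-2                                         # noise psd

# interference patterns: all on/off combinations except the all-off one
patterns = [np.array(bits) for bits in itertools.product([0, 1], repeat=n_bs) if any(bits)]

def spectral_efficiency(k, b, act):
    if act[b] == 0:
        return 0.0
    interference = sum(act[bp] * p[bp] * g[k, bp] for bp in range(n_bs) if bp != b)
    sinr = act[b] * p[b] * g[k, b] / (n0 + interference)
    return np.log2(1.0 + sinr)

# r[k, b, i]: rate of test point k served by bs b under pattern i
r = np.array([[[spectral_efficiency(k, b, act) for act in patterns]
               for b in range(n_bs)] for k in range(n_tp)])
```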
in this relatively long decision period ,we adopt test points as an abstract concept to represent demands of users .the test points can be chosen from typical user locations , or we can simply partition the geographic region into pixels and then each pixel becomes one test point .the set of test points is denoted by ( in our model , each test point can represent multiple co - located users ) .the traffic demand of each test point is represented by a minimum required average rate during one period of , which is assumed known via traffic estimation algorithms .we are interested in developing adaptive strategies for every period of to accommodate the traffic requirement with minimum network energy consumption , taking into account the inter - cell interference coupling .the enabling mechanism is to characterize the interference by specifying the interference patterns , each of which defines a particular on / off combination of bss .we use the pattern activity vector to indicate the on / off activity of the bss under pattern , where we denote the set of pre - defined patterns by and further define the matrix to combine the activity vectors for all candidate patterns . in order to fully characterize the interference scenarios in a network of cells , generally speaking , patterns are needed . however , since bss with large distance have weak mutual interference , omitting some patterns will not affect the accurate estimation of user achievable rates .we will discuss more on this next ( see proposition 1 in section [ sec : rate - constr - energy ] ) .based on the above pattern definition , we establish a framework to optimize the cell activation , test point ( user ) association and multicell multiuser channel assignment jointly .firstly , the multi - cell channel allocation is translated into partitioning the spectrum across all patterns . in a slow timescale considered in this paper ,all frequency resources can be assumed to have equal channel conditions .denote the spectrum allocation profile by , where represents the fraction of the total bandwidth allocated to pattern and .then the total bandwidth fraction allocated to bs is , where denotes the -th row of the matrix . secondly , denote by the fraction of resources that bs allocates to test point under pattern .naturally , each bs is allowed to use up to resources under pattern for its associated test points , expressed as .note that the association is implicitly indicated by , i.e. , means test point is associated with bs under pattern , while zero value of means that they are not connected . in this formulation ,test point is allowed to be connected to multiple bss .this can be equivalently viewed as multiple users co - located at the same test point , and each bs serves one user individually . in this paper , we assume a single - user detector at each receiver .finally , we define the usage of bs as .the definition of leads to . assuming flat power spectral density ( psd ) of bs transmit power and the noise , the received sinr of the link connecting bs to test point under pattern is where is the cell activation indicator as given in ( [ pattern_actvect ] ) , is the psd of bs , is the received noise psd .we denote the channel gain between bs and test point over the -th frequency resource by where is the large - scale coefficient including antenna gain , path loss and shadowing , and accounts for the small - scale fading .we assume are independent and identically distributed ( i.i.d . 
) .hence , the ergodic rate of test point served by the -th bs under pattern can be written as } _ { \triangleq r_{kbi } } = \alpha_{kbi } r_{kbi}\ ] ] where is the system bandwidth , .finally , the total rate of test point is obtained by summing up the contributions from all associated bss and patterns , as note that can be pre - calculted using ( [ ratepercarrier ] ) and hence treated as constants during the optimization .as mentioned previously , the bs usage vector is defined as , where . a typical power consumption model for bss consists of two types of power consumption : fixed power consumption and dynamic power consumption that is proportional to bs s utilization .denote by the maximum operational power of bs if it is fully utilized ( i.e. , ) , which includes power consumption for transmit antennas as well as power amplifier , cooling equipment and so on .we can then express the total power consumption by all bss as rcl [ eq:19 ] p^ = _b where ] , where {- } = \min(0,x)$ ] . after solving by the cutting plane , the primal solution can be found as ( * ? ? ?* ch.6 ) : and , where are the dual variables corresponding to the inequality constraints of , which are available if we solve the problem by off - the - shelf interior - point solvers .finally , the outermost iteration is to adjust the weights according to ( [ eq_weight ] ) and ( [ eq_cellload ] ) and then the problem ( [ p_rateconst_l1_reweighted ] ) is solved again with the new weights until convergence .if problem considering all possible pattern is directly solved by interior - point methods , the complexity is roughly .by contrast , every iteration of the proposed dual algorithm requires finding a solution to by proposition [ prop2 ] , and a solution to by interior - point solvers . specifically, solving requires , while the complexity of solving depends on the number of constraints in , which is increased by one inequality per iteration .our numerical experiment suggests that the number of iterations is roughly proportional to .( this can be explained by the inherent sparsity structure of the solution identified by proposition 1 .since the proposed algorithm activates one pattern per iteration ( see ) , the number of iteration is unsurprisingly much lower than if is large ) .consequently , it is safe to bound the complexity of solving as per iteration .hence , the overall complexity of the proposed algorithm for solving is , much smaller than directly applying interior - point solvers to . in table 1 , we report the algorithm running time for a network consisting of users and cells , where the proposed algorithm outperforms a commercial solver ( gurobi with the barrier method ) , as increases . .algorithm running time . [ cols="^,^,^,^,^",options="header " , ] the cutting plane method should be initialized with a strictly primal feasible solution in terms of , otherwise the master problem will become unbounded in the first iteration .we can solve the following rate balancing problem to test the feasibility of and obtain a strictly primal feasible solution if the original problem is feasible : rcl[p_ratebalancing ] _r _ & & - r _ + & & _ k r _ - _ i _ b _ kbi r_kbi 0 , k [ con_ratebal_1 ] where . 
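before taking up the feasibility of this rate-balancing initialization (discussed immediately after the following sketch), it may help to see the pattern-based formulation end to end in code. the sketch below is illustrative only: the sinr-to-rate mapping is a generic shannon-capacity assumption with flat psd and interference from the bss active in each pattern, the objective keeps only the load-proportional part of the power model (the fixed-power term and the reweighted iteration are ignored), all on/off patterns are enumerated rather than a reduced set, and every numerical value is made up except the macro/pico maximum operational powers quoted from the simulation section. an off-the-shelf lp solver stands in for the cutting-plane decomposition.

```python
# illustrative sketch only: per-pattern rate pre-computation under an assumed
# shannon-capacity model, followed by a simplified lp relaxation of the
# energy-saving problem (fixed power, reweighting and cutting planes omitted).
import numpy as np
from itertools import product
from scipy.optimize import linprog

rng = np.random.default_rng(0)
B, K, W = 3, 6, 10e6                       # base stations, test points, bandwidth [Hz]
g = rng.lognormal(-9, 1.0, size=(K, B))    # channel gains (made-up values)
psd, noise = np.full(B, 1e-7), 1e-17       # flat transmit / noise psd [W/Hz] (assumed)
p_max = np.array([439.0, 38.0, 38.0])      # one macro, two picos (values from the simulation section)
demand = np.full(K, 2e6)                   # per-test-point rate requirement [bit/s]

# every on/off combination with at least one active bs (a reduced set suffices in practice)
patterns = [np.array(a) for a in product((0, 1), repeat=B) if any(a)]
I = len(patterns)

# r[k, b, i]: rate of test point k from bs b under pattern i, with interference
# coming only from the other bss that are active in pattern i
r = np.zeros((K, B, I))
for i, a in enumerate(patterns):
    for b in range(B):
        if a[b]:
            interf = sum(a[c] * psd[c] * g[:, c] for c in range(B) if c != b)
            r[:, b, i] = W * np.log2(1.0 + psd[b] * g[:, b] / (noise + interf))

# variables: x[k, b, i] = pi_i * alpha_kbi (bandwidth share bs b gives test point k
# under pattern i), followed by the pattern shares pi_i
def xid(k, b, i):
    return (k * B + b) * I + i

nvar = K * B * I + I
cost = np.zeros(nvar)
for k in range(K):
    for b in range(B):
        for i in range(I):
            cost[xid(k, b, i)] = p_max[b]      # load-proportional power only

A_ub, b_ub = [], []
for k in range(K):                             # rate constraints
    row = np.zeros(nvar)
    for b in range(B):
        for i in range(I):
            row[xid(k, b, i)] = -r[k, b, i]
    A_ub.append(row)
    b_ub.append(-demand[k])
for b in range(B):                             # per-bs, per-pattern resource budget
    for i in range(I):
        row = np.zeros(nvar)
        for k in range(K):
            row[xid(k, b, i)] = 1.0
        row[K * B * I + i] = -patterns[i][b]
        A_ub.append(row)
        b_ub.append(0.0)
A_eq = np.zeros((1, nvar))
A_eq[0, K * B * I:] = 1.0                      # pattern shares sum to one

res = linprog(cost, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              A_eq=A_eq, b_eq=[1.0], bounds=(0, None))
print("lp status:", res.message)
print("approximate network power [W]:", res.fun)
```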
note that problem is always feasible .we can again apply cutting plane method to solve it , but without worrying about the initialization ( since we can always decrease to make sure is strictly satisfied ) .we consider a network consisting of 3 macro cells , each of which contains 4 randomly dropped pico cells as shown in fig .[ fig_network ] .the parameters for propagation modeling and simulations follow the suggestions in 3gpp evaluation methodology , and summarized in table i of . based on the linear relationship between transmit power and operational power consumption , where is the transmit power for bs , and if is a macro ;otherwise and if is a pico . ] , we calculate the maximum operational power as 439w and 38w for macro and pico bss , respectively .we further assume each macro bs has a constant power consumption , i.e. , , , and the fixed power consumption of a pico takes of the maximum operational power , i.e. , . note that these assumptions are made for providing concrete numerical results , and they are not from the restriction of our formulation .the baseline strategy in comparison is the energy saving optimization scheme proposed by , where worst - case estimates of the user rates resulted from no intercell interference coordination are used to calculate the qos requirements .this scheme can be cast into the proposed framework by restricting the candidate pattern to a single _ reuse_-1 pattern .fig.[fig_compare ] plots the network power consumption versus the rate requirement of the test points , where and test points are uniformly distributed within the network , and all test points are assumed to have the same requirement for simplicity .as shown , the network power consumption increases with the user rate requirement for both schemes , but the proposed scheme has a significantly power saving compared to the existing reuse-1 scheme . for example , to satisfy / s for 50 test points , the proposed scheme only consumes 200w , whereas the reuse-1 needs more than 1400w .moreover , the maximum rate requirement that the network can support has been greatly improved by the proposed scheme .we observe , for example , the maximum feasible rate in 50-test - point case increases from / s to / s by using the proposed scheme .the performance gains of the proposed strategy come from its ability to manage the interference by resource allocation and explicitly take into account the interference coupling caused by cell ( de)activation when estimating the user rate .in this appendix , the proof of proposition [ prop0 ] is provided . by letting ,the original problem can be equivalently rewritten as rcl[p1_rateconstrained_changevairable ] _ , & & p^= _ b [ prop1_obj ] + & & _ b = _ i _ i _ k _ kbi , b [ prop1_cons_rho ] + & & _ i _ i _ b _ kbi r_kbi d_k , k [ prop1_conqos ] + & & _ k _ kbi 1 , b , i [ prop1_const_bs allo ] + & & _ i _ i = 1 [ prop1_cons_pi ] + & & _ i 0 , i , _ kbi 0 , k , b , i [ prop1_con_nonnegative ] in the following , we show that if an optimal solution exists we can then obtain the same optimal objective with where only has nonzero entries out of with , and , with .then define and . according to and( note that must achieve equality at the optimum , otherwise the objective in can be further reduced ) , the vector , i.e. , a convex combination of vectors , with as coefficients . by caratheodorys theorem , can be represented by at most of those vectors . denoting the resulting coefficients by , we prove the proposition .e. pollakis , r.l.g .cavalcante , and s. 
stanczak , `` base station selection for energy efficient network operation with the majorization - minimization algorithm , '' in _ signal processing advances in wireless communications ( spawc ) , 2012 ieee 13th international workshop on _ , 2012 , pp . 219 - 223 . r. l. g. cavalcante , s. stanczak , m. schubert , a. eisenblaetter , and u. tuerke , `` toward energy - efficient 5 g wireless communications technologies : tools for decoupling the scaling of networks from the growth of operating power , '' , vol . 31 , no . 6 , pp . 24 - 34 , 2014 . l. su , c. yang , z. xu , and a. f. molisch , `` energy - efficient downlink transmission with base station closing in small cell networks , '' in _ acoustics , speech and signal processing ( icassp ) , 2013 ieee international conference on _ , 2013 , pp . 4784 - 4788 . q. kuang , j. speidel , and h. droste , `` joint base - station association , channel assignment , beamforming and power control in heterogeneous networks , '' in _ vehicular technology conference ( vtc spring ) , 2012 ieee 75th _ , 2012 , pp . 1 - 5 . q. kuang , `` joint user association and reuse pattern selection in heterogeneous networks , '' in _ ieee the eleventh international symposium on wireless communication systems ( iswcs 2014 ) _ , barcelona , spain , august 26 - 29 , 2014 .
|
interference coupling in heterogeneous networks introduces inherent non-convexity into the network resource optimization problem, hindering the development of effective solutions. this paper proposes a new framework based on a multi-pattern formulation to study energy-efficient strategies for joint cell activation, user association and multicell multiuser channel allocation. one key feature of this interference-pattern formulation is that the patterns remain fixed and independent of the optimization process. this creates a favorable opportunity for a linear programming formulation while still taking interference coupling into account. a tailored algorithm is developed to solve the formulated network energy saving problem in the dual domain by exploiting the problem structure, which gives a significant complexity saving compared to using standard solvers. numerical results show a large improvement in energy saving achieved by the proposed scheme.
cell activation, user association, power minimization, interference coordination, cutting plane methods
|
a mechanical problem is generally studied through force interactions between masses located in material points : this newton point of view leads together to the statistical mechanics but also to the continuum mechanics .the statistical mechanics is mostly precise but is in fact too detailed and in many cases huge calculations crop up .the continuum mechanics is an asymptotic notion coming from short range interactions between molecules .it follows a loose of information but a more efficient and directly computable theory . in the simplest case of continuum mechanics ,residual information comes through stress tensor like cauchy tensor .the concept of stress tensor is so frequently used that it has become as natural as the notion of force .nevertheless , tensor of contact couples can be investigated as in cosserat medium or configuration forces like in gurtin approach with edge interactions of noll and virga .stress tensors and contact forces are interrelated notions .a fundamental point of view in continuum mechanics is : the newton system for forces is equivalent to _ the work of forces is the value of a linear functional of displacements ._ such a method due to lagrange is dual of the system of forces due to newton and is not issued from a variational approach ; the minimization of the energy coincides with the functional approach in a special variational principle only for some equilibrium cases .the linear functional expressing the work of forces is related to the theory of distributions ; a decomposition theorem associated with displacements ( as test functions whose supports are manifolds ) uniquely determines a canonical zero order form _ ( separated form ) _ with respect both to the test functions and the transverse derivatives of contact test functions . asnewton s principle is useless when we do not have any constitutive equation for the expression of forces , the linear functional method is useless when we do not have any constitutive assumption for the virtual work functional .the choice of the simple material theory associated with the cauchy stress tensor corresponds with a constitutive assumption on its virtual work functional .it is important to notice that constitutive equations for the free energy and constitutive assumption for the virtual work functional may be incompatible : for any _ virtual _ displacement of an isothermal medium , the variation must be equal to the _ virtual _ work of internal forces . the equilibrium state is then obtained by the existence of a solution minimizing the free energy .the equation of motion of a continuous medium is deduced from the _ dalembert - lagrange principle of virtual works _ which is an extension of the principle in mechanics of systems with a finite number of degrees of freedom : _ the motion is such that for any virtual displacement the virtual work of forces is equal to the virtual work of mass accelerations_. let us note : if the virtual work of forces is expressed in classical notations in the form \right\ } dv+\int\int_{s } \mathbf{t}.\,\bfmat{\zeta}\ d{s } \label{viscous fluid}\ ] ] from the dalembert - lagrange principle , we obtain not only the equations of balance momentum for a viscous fluid in the domain but also the boundary conditions on the border of .we notice that expression ( [ viscous fluid ] ) is not the frechet derivative of any functional expression.if the free energy depends on the strain tensor , then must depend on and leads to the existence of the cauchy stress tensor . 
if the free energy depends on the strain tensor and on the overstrain tensor then must depend on and .+ _ conjugated ( or transposed _ ) mappings being denoted by asterisk , for any vectors , we write for their _ scalar product _ ( the line vector is multiplied by the column vector ) and or for their _ tensor product _ ( the column vector is multiplied by the line vector ) .the product of a mapping by a vector is denoted by .notation means the covector defined by the rule .the divergence of a linear transformation is the covector such that , for any constant vector we introduce a galilean or fixed system of coordinates which is also denoted by as euler or spatial variables . if is a real function of , is the linear form associated with the gradient of and ; consequently , .the identity tensor is denoted by . + now , we present the method and its consequences in different cases of gradient theory . as examples ,we revisit the case of laplace theory of capillarity and the case of van der waals fluids .the motion of a continuous medium is classically represented by a continuous transformation of a three - dimensional space into the physical space . in order to describe the transformation analytically ,the variables which single out individual particles correspond to material or lagrange variables .then , the transformation representing the motion of a continuous medium is where denotes the time . at fixed the transformation possesses an inverse and continuous derivatives up to the second order except at singular surfaces , curves or points .then , the * * * * diffeomorphism from the set of the particles into the physical space is an element of a functional space of the positions of the continuous medium considered as a manifold with an infinite number of dimensions . to formulate the dalembert - lagrange principle of virtual works ,we introduce the notion of _ virtual displacements_. this is obtained by letting the displacements arise from variations in the paths of the particles .let a one - parameter family of varied paths or _ virtual motions _ denoted by and possessing continuous derivatives up to the second order and expressed analytically by the transformation with where is an open real set containing and such that or ( the real motion of the continuous medium is obtained when ) .the derivation with respect to when is denoted by .derivation is named _ variation _ and the _ virtual displacement _ is the variation of the position of the medium .the virtual displacement is a tangent vector to in ( . in the physical space ,the _ virtual displacement _ is determined by the variation of each particle : the _ virtual displacement _ _ of the particle _ is such that when , at ; we associate the field of tangent vectors to where is the tangent vector bundle to at . 
of is represented by a thick curve and its variation by a thin curve .variation of family of varied paths belongs to , tangent space to at .,width=340 ] the concept of virtual work is purposed in the form : _ the virtual work is a linear functional value of the virtual displacement , _ where denotes the inner product of and ; then , belongs to the cotangent space of at ( .in relation ( [ virtual work of forces ] ) , the medium in position is submitted to the covector denoting all the stresses ; in the case of motion , we must add the inertial forces associated with the acceleration quantities to the volume forces .+ the dalembert - lagrange principle of virtual works is expressed as : + consequently , representation ( [ virtual work of forces ] ) leads to : * theorem : * _ if expression ( [ virtual work of forces ] ) is a distribution in a separated form , the dalembert - lagrange principle yields the equations of motions and boundary conditions in the form _ .among all possible choices of linear functional of virtual displacements , we classify the following ones : the medium fills an open set of the physical space and the linear functional is in the form where denote the covariant components of the volume force ( including the inertial force terms ) presented as a covector .the equation of the motion is the medium fills a set and the surface is the boundary of belonging to the medium ; with the same notations as in section _ 3.1.1 _ , the linear functional is in the form are the components of the surface forces ( tension ) . from eq .( [ grad b0 ] ) , we obtain the equation of motion as in eq .( [ gradient a0 ] ) and the boundary condition , , with the previous notations , the linear functional is in the form where are the components of the stress tensor stokes formula gets back to the model in the separated form where are the components of a covector which is the annulator of the vectors belonging to the tangent plane at the boundary .it is not necessary to have a metric in the physical space ; nevertheless , for the sake of simplicity it is convenient to use the euclidian metric ; the vector of components represents the external normal to relatively to ; the covector is associated with the components .we deduce the equation of motion and the boundary condition the linear functional is expressed in the form stokes formula yields the separated form and we deduce the equation of motion in the same form as eq .( [ gradient a1 ] ) and the boundary condition model is the classical theory for elastic media and fluids in continuum mechanics .the linear functional is expressed in the form where the tensor of components is a new term .the boundary of is a surface shared in a partition of parts of class , ( fig .we denote by the mean curvature of ; the edge of is the union of the limit edges between surfaces and assumed to be of class and is the tangent vector to oriented by ; is the unit external normal vector to in the tangent plane to : let us notice that : has a surface boundary divided in several parts .the edge of is denoted by which is also divided in several parts with end points ., width=340 ] where ; consequently , from integration of the divergence of vector on surfaces we obtain , we emphasize with the fact that corresponds to the normal derivative to denoted .an integration by parts of the term in relation ( [ grad b1 ] ) and taking account of relations ( [ int by parts]-[int surface ] ) implies with the following definitions due to theorem in , the distribution ( s._1 _ ) has a 
unique decomposition in displacements and transverse derivatives of displacements on the manifolds associated with d and its boundaries : expression is in a separated form .consequently , the equation of motion is and the boundary conditions are term is not reducible to a force : its virtual work is not the product of a force with the displacement ; the term is an _ embedding action_. the linear functional is in the form tensor with is an _overstress tensor_. an integration by parts of the last term brings back to the model , and the virtual work gets the separated form with : and consequently yields the same equation of motion and boundary conditions as in case .the linear functional is in the form this functional yields two integrations successively on and on with terms at the points . with obvious notations , for the same reasons as in section _3.2.3 _ , the virtual work gets the separated form where are the components of at point .the calculations are not expended .they introduce the curvature tensor on and the geodesic curvature of .consequently , associated with volume , surface , line and forces at points ; are embedding efforts of order 1 and 2 on and of order 1 on the edge equation of motion and boundary conditions express that these seven tensorial quantities are null on their domains of values , , and .it is possible to extend the previous presentation by means of more complex medium with _ gradient of order n_. the models introduce embedding effects of more important order on surfaces , edges and points .the _ ( a.n ) _ model refers to a _ ( b.n-1 ) _ model : the fact that boundary surface is ( or is not ) a material surface has now a physical meaning .consequently , we can resume the previous presentation as follows : ) the choice of a model corresponds to specify the part of the algebraic dual in which the efforts are considered : . ) in order to operate with the principle of virtual works and to obtain the mechanical equations in the form , it is no matter that the part of the dual is separating , but it is important the part is separated . )the functionals , , , are not separated : if consists in the data of the fields , , , it is not possible to conclude that the fields are zero . )functionals in , , , are separated : if the fields , , , are continuous then , by using the fundamental lemma of variation calculus , their values must be equal to zero .they are the only functionals we must know for using the principle of virtual works ; it is exactly as for a solid : the torque of forces is only known in the equations of motion . )when the fields are not continuous on surfaces or curves , we have to consider a model of greater order in gradients and to introduce integrals on inner boundaries of the medium . for conservative medium, the first gradient theory corresponds to the compressible case .the theory of fluid , elastic , viscous and plastic media refers to the model .the laplace theory of capillarity in fluids refers to the model . 
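for concreteness, the classical first-gradient entry of this hierarchy, the one the text identifies with the cauchy stress tensor and with ordinary fluid and elastic media, can be written out in full. the statement below is the standard textbook one, not a new result of the paper, and the notation is supplied here because the symbols are lost in this extraction: \(\sigma\) is the cauchy stress tensor, \(\rho\,\mathbf{f}\) the body force with the inertial term \(-\rho\,\mathbf{a}\) absorbed into it, \(\mathbf{T}\) the surface traction and \(\mathbf{n}\) the outward unit normal to \(\partial D\):
\[
\delta W=\int_{D}\rho\,\mathbf{f}\cdot\boldsymbol{\zeta}\,dv-\int_{D}\sigma:\operatorname{grad}\boldsymbol{\zeta}\,dv+\int_{\partial D}\mathbf{T}\cdot\boldsymbol{\zeta}\,ds .
\]
an integration by parts (the stokes formula) puts this functional in separated form,
\[
\delta W=\int_{D}\left(\operatorname{div}\sigma+\rho\,\mathbf{f}\right)\cdot\boldsymbol{\zeta}\,dv+\int_{\partial D}\left(\mathbf{T}-\sigma\,\mathbf{n}\right)\cdot\boldsymbol{\zeta}\,ds ,
\]
and the dalembert-lagrange principle then yields \(\operatorname{div}\sigma+\rho\,\mathbf{f}=\mathbf{0}\) in \(D\) together with the boundary condition \(\sigma\,\mathbf{n}=\mathbf{T}\) on \(\partial D\), which is the classical result recovered above.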
to take into account superficial effectsacting between solids and fluids , we use the model of fluids endowed with capillarity ( ) ; the theory interprets the capillarity in a continuous way and contains the laplace theory of capillarity ; for solids , the model corresponds to `` elastic materials with couple stresses '' indicated by toupin in .liquid - vapor and two - phase interfaces are represented by a material surface endowed with an energy relating to laplace surface tension .the interface appears as a surface separating two media with its own characteristic behavior and energy properties ( when working far from critical conditions , the capillary layer has a thickness equivalent to a few molecular beams ) . the laplace theory of capillarity refers to the model in the form as following : for a compressible fluid with a capillary effect on the wall boundaries , the free energy is in the form where is the fluid specific energy , is the matter density and coefficients are the surface tensions of each surface .surface integrations are associated to the space metric ; the virtual work of internal forces is where is the fluid pressure .the external force ( including inertial forces ) is the body force defined in , the surface force is defined on and the line force is defined on .dalembert - lagrange principle yields the equation of motion and boundary conditions : boundary conditions are _ laplace equation _ and _ young - dupr condition_.for interfacial layers , kinetic theory of gas leads to laws of state associated with non - convex internal energies .this approach dates back to van der waals , korteweg , corresponds to the landau - ginzburg theory and presents two disadvantages .first , between phases , the pressure may become negative ; simple physical experiments can be used to cause traction that leads to these negative pressure values .second , in the field between bulks , internal energy can not be represented by a convex surface associated with density and entropy ; this fact seems to contradict the existence of equilibrium states ; it is possible to eliminate this disadvantage by writing in an anisotropic form the stress tensor of the capillary layer which allows to study interfaces of non - molecular size near a critical point .+ one of the problems that complicates this study of phase transformation dynamics is the apparent contradiction between korteweg classical stress theory and the clausius - duhem inequality .proposal made by eglit , dunn and serrin , casal and gouin and others rectifies this anomaly for liquid - vapor interfaces .the simplest model in continuum mechanics considers a free energy as the sum of two terms : a first one corresponding to a medium with a uniform composition equal to the local one and a second one associated with the non - uniformity of the fluid .the second term is approximated by a gradient expansion , typically truncated to the second order .the model is simpler than models associated with the renormalization - group theory but has the advantage of easily extending well - known results for equilibrium cases to the dynamics of interfaces . + we consider a fluid in contact with a wall .physical experiments prove that the fluid is nonhomogeneous in the neighborhood of .the internal energy is also a function of the entropy . in the case of isothermal motions ,the internal energy is replaced by the free energy . 
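since the symbols of the surface tensions and of the resulting conditions do not survive this extraction, it may help to recall the standard textbook forms of the two boundary conditions cited above; the notation here (\(\sigma\) for the liquid-vapour tension, \(\sigma_{sv}\) and \(\sigma_{sl}\) for the solid-vapour and solid-liquid tensions, \(H\) the mean curvature of the interface with principal radii \(R_1 , R_2\), and \(\theta\) the contact angle) is generic and not necessarily the paper's own:
\[
p_{1}-p_{2}=2\,\sigma H=\sigma\left(\frac{1}{R_{1}}+\frac{1}{R_{2}}\right)\qquad\text{(laplace)},
\qquad
\sigma_{sv}-\sigma_{sl}=\sigma\cos\theta\qquad\text{(young - dupré)}.
\]
the remainder of the section replaces this sharp-interface description by the gradient (van der waals) free energy introduced above and developed in what follows.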
in the mechanical case ,the entropy and the temperature are not concerned by the virtual displacements of the medium .consequently , for isentropic or isothermal motions , where . the fluid is submitted to external forces represented by a potential as a function of eulerian variables . to obtain boundary conditionsit is necessary to know the wall effect .an explicit form for the energy of interaction between surfaces and liquids is proposed in .we denote by the surface density of energy at the wall .the total energy of the fluid is the sum of three potential energies : ( bulk energy ) , ( external energy ) and ( surface energy ) . we have the results ( see appendix ) : with , where , , ( or ) denoting the partial derivative of with respect to ( or ) , ; where and denotes the tangential part of the gradient relatively to . the density in the fluid has a limit value at the wall and is assumed to be a function of only .then , , where is computed on .let us denote ; appendix yields with . then , at equilibrium , . the fundamental lemma of variation calculus associated with separated form ( [ s3 ] ) corresponding to ( ) , yields : from any arbitrary variation such that on , we get this equation is written in the classical form of equation of equilibrium . _it is not the same for the boundary conditions . _we consider a rigid wall ; on , the virtual displacements satisfy the condition .then , at the rigid wall , such that , due to , we deduce the boundary conditions ( [ conditions5]-[conditions5.1 ] ) and there exists a lagrange multiplier such that , the edge of belongs to the solid wall and consequently on , : the integral on is null and does not yield any additive condition .the equilibrium equation ( [ motion5 ] ) is unchanged . on the condition ( [ conditions5 ] )is also unchanged .the only different condition comes from the fact that we do not have anymore the slipping condition for the virtual displacement on , . due to the possible deformation of the wall ,the virtual work of stresses on is where is the stress ( loading ) vector associated with stress tensor of the elastic wall and is the line force due to the elasticity of the line . relation ( [ conditions5.1 ] ) is replaced by we obtain an additive condition on in the form and due to condition ( [ conditions5 ] ) , ( if is the union of edges , is replaced by ) .( [ conditions5 ] ) yields ; the definition of implies : due to the fact that the tangential part of eq .( [ conditions5.1 ] ) is always verified , the only condition comes from eq .( [ conditions5 ] ) ; eq .( [ conditions5.1 ] ) yields the value of the lagrange multiplier and eq .( [ condition6 ] ) the value of . 
for an elastic ( non - rigid ) wallwe obtain , where and are the tangential and the normal components of .taking into account of eq .( [ conditions5.3 ] ) we obtain the stress values at the non rigid elastic wall .the surface energy is : where and are two positive constants and the fluid density condition at the wall is if we denote by the _ bifurcation fluid density _ at the wall , due to the fact is positive constant , we obtain : if , ( or ) , is positive ( or negative ) and we have a lack ( or excess ) of fluid density at the wall .such media allow to study fluid interfaces and interfacial layers between fluids and solids and lead to numerical and asymptotic methods .the extension to the dynamic case is straightforward : eq .( [ motion5 ] ) yields vector is the acceleration ; boundary conditions ( [ conditions5]-[conditions5.3 ] ) are unchanged .i am grateful to professor tommaso ruggeri for helpful discussions .let be a surface in the 3-dimensional space and its external normal extended locally in the vicinity of by the expression where is the distance of a point to ; for any vector field , we obtain : from and we deduce on , we deduce : for any scalar field and , * let us calculate * ; is a material volume , then with . from ( see ) , due to ( see ) , from eq .( [ a1 ] ) , we deduce immediatly : * let us calculate * ; due to where and are two coordinate lines of we get : where is the image of in a reference space with lagrangian coordinates and is the deformation gradient tensor of components .then , relation ( [ a0 ] ) yields : , where belongs to the cotangent plane to ; we obtain
|
motions of continuous media presenting singularities are associated with phenomena involving shocks, interfaces or material surfaces. the equations governing the evolution of these media are irregular across geometrical manifolds. a single continuous medium is conceptually simpler than several media separated by surfaces of singularity. to avoid surfaces of discontinuity in the theory, we transform the model by considering a continuous medium whose internal energy is expressed through gradient expansions in the variables of state. the resulting equations of motion are of higher order than those of the classical models: they lead to non-linear models associated with more complex integration processes, at the mathematical level as well as from the numerical point of view. such models allow a precise study of singular zones when these zones have a non-negligible physical thickness. this is typically the case for capillarity phenomena in fluids or mixtures of fluids, in which interfacial zones are transition layers between phases or between fluids and solid walls. within the framework of mechanics of continuous media, we adopt a functional point of view that treats globally the equations of the media together with the boundary conditions associated with these equations. for this aim, we revisit the _ dalembert - lagrange principle of virtual works _, which expresses the work of the forces applied to a continuous medium as the value of a linear functional on a space of test functions in the form of _ virtual displacements _. finally, we analyze examples corresponding to capillary fluids. this analysis leads to numerical or asymptotic methods that avoid the difficulties due to singularities present in the simpler (but singular) models.
|
observations of secondary eclipses in exoplanetary systems , starting with hd 209458b ( * ? ? ?* ( deming et al . 2005 ) ) and tres-1b ( * ? ? ?* ( charbonneau et al . 2005 ) ) , made it possible to estimate the integrated day - side brightness of transiting exoplanets .constraining the _ global _ brightness map of exoplanets , on the other hand , requires observations at various orbital phases , involving more sophisticated calibration of observations , much longer observing campaigns , or both .the first measurements of thermal phase curves for exoplanet systems were reported by ( * ? ? ?* harrington et al . ( 2006 ) ) , which reported a large phase function for andromeda b , and ( * ? ? ?* cowan , agol & charbonneau ( 2007 ) ) , which detected a phase function for hd 179949b , and obtained useful upper limits for hd 209458b and 51 peg b. these results proved valuable in constraining the day - night brightness contrast and hence the energy recirculation efficiency of those planets and indicated that hot jupiters represent a heterogeneous group .those first two studies , however , had very incomplete phase coverage ( 5 epochs for the ( * ? ? ? *harrington et al . 2006 ) campaign , and 8 epochs for each of the ( * ? ? ?* cowan et al . 2007 ) campaigns ) .furthermore , three of the four observed planets were not in transiting systems , and the one transiting system ( hd 209458 ) was deliberately observed outside of transit or secondary eclipse .the hours of continuous monitoring of hd 189733b presented in ( * ? ? ?* knutson et al . ( 2007 ) ) differs in three important ways from those first detections of phase variations : 1 ) the observed system exhibits transits , so the planet s orbital inclination with respect to the celestial plane is known .2 ) a secondary eclipse of the planet was observed during the course of the observations , making it possible to quantify not just the relative but the _ absolute _ flux of the planet as a function of orbital phase .3 ) the continuous observing campaign , the system s relative proximity to the earth , its favorable contrast ratio , and ingenious corrections for detector systematics conspired to produce the highest s / n light curve of its kind ever measured .although the observations spanned little more than half an orbit of hd 189733b , the unprecedented quality of the light curve enabled us not only to measure the planet s day / night contrast , but also to generate the first ever brightness map of an extrasolar planet .there are three necessary and sufficient conditions for phase function mapping to be feasible ( see also the orginal formalism of ( * ? ? ?* russell , 1906 ) ) : 1 .one must be able to remove from the observed light curve any stellar variability ( eg : star spots rotating into and out of view ) as well as detector systematics ( detector ramps , intra - pixel sensitivity , etc . ) .* knutson et al . ( 2008 ) ) presents the most sophisticated treatment of these effects to date .2 . one must neglect limb darkening in the planet s atmosphere .this is a reasonable approximation at mid - ir wavelengths , leading to errors of less than 1% ( * ? ? ?* ( cowan & agol 2008 ) ) .3 . one must assume that the large - scale weather of the planet is in a steady - state .this means that the global hot spots , cold spots and jet streams do not vary in brightness or shift with respect to the substellar point over a single planetary orbit .this assumption appears to hold at the 510% level for hot jupiters on circular orbits ( agol et al . 
in prep ) .for an edge - on system we define the phase angle , which corresponds to the observer planet star angle ( at secondary eclipse ; at transit ) , as well as the longitude , , and latitude , , in a rotating frame , such that at the sub - stellar point , at the planet s north pole , and increases in the same sense as .the condition of a steady - state weather pattern can be expressed as requiring that the specific intensity , , is unchanging with time .there are no current observations which can constrain the -dependence of , but for edge - on orbits the latitudinal dependence of the intensity is unimportant since one can define , which represents the flux contribution from an infinitesimal slice of the planet when viewed face - on .the flux , , we observe from the planet at a given orbital phase can then be written as a convolution , , with the piece - wise defined kernel , .the kernel represents the response of the phase function to a delta function in , and it is very broad , with a full width at half - maximum of , as shown by the solid line in the left panel of figure [ slice_kernels ] .the convolution described in the previous section transforms a given longitudinal map into an observed light curve .the more challenging problem if how to reliably de - convolve an observed light curve to obtain the longitudinal map of a planet . in (* cowan & agol ( 2008 ) ) we presented two complementary models ( examples of which are shown in the right panel of figure [ slice_kernels ] ) , described below .* n - slice model : * this model consists of equal - size longitudinal slices of uniform brightness ( think beach ball ) .such maps simplify the convolution , enabling the use of brute force numerical techniques ( least - squares , mcmc , etc . ) to determine the best - fit longitudinal map given an observed light curve .this approach is versatile , easily adapted to non - transiting planets or planets with incomplete light curves .although an n - slice longitudinal map is neither differentiable nor realistic , smoothing the map does not significantly change the resulting light curve . on the other hand ,the brightness of the different slices do not depend on the light curve in a linearly independent fashion , so using too many slices to model a light curve with poor s / n makes the uncertainty in _ all _ of the slices blow up .* sinusoidal model : * sinusoids are orthogonal eigenfunctions of the convolution described in an observed light curve can therefore be decomposed via a fourier expansion , then trivially transformed into a sinusoidal map using equation 7 of ( * ? ? ?* cowan & agol ( 2008 ) ) .sinusoidal longitudinal maps have the advantage of being imminently believable , but for incomplete phase curves the uncertainty in the map does not have have the properties one would like .for example , if a phase function is only obtained for half of a planetary orbit , the uncertainty in the map is no greater for the hemisphere which was not well observed . fortunately , warm spitzer s propensity for longer observing campaigns will be perfectly suited for obtaining full phase curves ( * ? ? ?* ( deming et al . 2007 ) ) . 
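to make the forward problem concrete, the sketch below convolves an n-slice map with the broad visibility kernel of an edge-on orbit and returns the corresponding phase curve. the kernel is taken here as the clipped cosine max(cos, 0) (up to normalization), whose full width at half-maximum of 120 degrees is what makes it so broad; the phase and sign conventions, the grid resolution and the example hot-spot map are illustrative choices, not necessarily those of the papers cited.

```python
import numpy as np

def kernel(dphi):
    """visibility/foreshortening kernel of an edge-on orbit: a clipped cosine,
    zero over the hidden hemisphere, with a fwhm of 120 degrees."""
    return np.maximum(np.cos(dphi), 0.0)

def lightcurve_from_slices(slice_flux, xi):
    """phase curve F(xi) of an n-slice map: slice j spans longitudes
    [2*pi*j/N, 2*pi*(j+1)/N) and has uniform brightness slice_flux[j]."""
    slice_flux = np.asarray(slice_flux, float)
    N = len(slice_flux)
    phi = np.linspace(-np.pi, np.pi, 2000, endpoint=False)   # fine longitude grid
    dphi = phi[1] - phi[0]
    J = slice_flux[((phi % (2 * np.pi)) / (2 * np.pi) * N).astype(int)]
    # F(xi) = integral of J(phi) * K(phi - xi) dphi  (sign convention arbitrary here)
    return np.array([np.sum(J * kernel(phi - x)) * dphi for x in np.atleast_1d(xi)])

# example: a single bright slice (a "hot spot") offset from the substellar point
flux = np.zeros(8)
flux[1] = 1.0
xi = np.linspace(0, 2 * np.pi, 100)
F = lightcurve_from_slices(flux, xi)
print("peak-to-trough contrast:", F.max() - F.min())
```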
, and sinusoidal maps , while the right panel shows the resulting phase variationsthe higher - frequency modes are damped out because a full hemisphere is visible at any point in time.,scaledwidth=50.0% ]the sinusoidal model provides an instructive tool for studying the mapping problem , since the maps and associated light curves are simple analytic functions .figure [ sinusoidal_maps ] shows how the smoothing kernel suppresses high - frequency spatial brightness variations .this is a direct consequence of seeing half of the planet at a time .technically , one only sees of the planet particularly well at any point in time ( recall the fwhm ) .this leads to the pernicious problem shown in figure figure [ odd_sinusoidal_maps ] : the kernel entirely wipes out odd sinusoidal modes ( except for ) .in other words , if a planet s dominant weather consisted of three equally spaced hot spots near its equator , it would exhibit _ no _ phase variations !the invisibility of odd modes is not merely an intellectual curiosity : it sets a hard limit on the accuracy of longitudinal maps .if the modes in the planet s longitudinal brightness profile are not visible , there is not much to be gained by extending the fourier expansion to , etc .those modes may well be precisely measured , but this will do nothing to increase the _ accuracy _ of the resulting planet map . to flip this problem on its head , a simple way to test the assumptions of 2 is to look for modes in the observed light curve .the bottom line is that one can do no better than a second - order fourier expansion of an observed light curve : . by the same token ,a limit of 5 free parameters ( 4 slices a phase offset , or just 5 slices ) applies to the n - slice maps .maps with many more parameters than this can be made , but should be treated with skepticism . ,d. , agol , e. , charbonneau , d. , cowan , n. b. , knutson , h. , & marengo , m. 2007 , in american institute of physics conference series , vol .943 , american institute of physics conference series , ed .l. j. storrie - lombardi & n. a. silbermann , 89100
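the invisibility of the odd modes can be checked numerically in a few lines. in the sketch below (same illustrative clipped-cosine kernel and conventions as before, nothing taken from the papers' own code), a map whose only structure is a pure m = 3 sinusoid, i.e. three equally spaced hot spots, produces a phase curve whose variation is many orders of magnitude below that of the m = 1 and m = 2 modes, essentially zero up to discretization error.

```python
import numpy as np

phi = np.linspace(-np.pi, np.pi, 4000, endpoint=False)
dphi = phi[1] - phi[0]
xi = np.linspace(0, 2 * np.pi, 200)

def phase_curve(J):
    """convolve a longitudinal map J(phi) with the clipped-cosine kernel."""
    return np.array([np.sum(J * np.maximum(np.cos(phi - x), 0.0)) * dphi for x in xi])

for m in (1, 2, 3):
    J = 1.0 + 0.5 * np.cos(m * phi)          # uniform map plus a single sinusoidal mode
    F = phase_curve(J)
    print(f"m={m}: peak-to-trough amplitude = {F.max() - F.min():.2e}")
# expected: m=1 and m=2 give order-unity amplitudes, m=3 vanishes (invisible odd mode)
```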
|
one of the most exciting results of the spitzer era has been the ability to construct longitudinal brightness maps from the infrared phase variations of hot jupiters. we presented the first such map in knutson et al. (2007), described the mapping theory and some important consequences in cowan & agol (2008), and presented the first multi-waveband map in knutson et al. (2008). in these proceedings, we begin by putting these maps in historical context, then briefly describe the mapping formalism. we then summarize the differences between the complementary n-slice and sinusoidal models and end with some of the more important and surprising lessons to be learned from a careful analytic study of the mapping problem.
|
the calculation of electrostatic interactions in computer - simulation studies of condensed - matter systems poses serious problems regarding accuracy and efficiency .these are mainly caused by the infinite range of coulomb interactions in conjunction with the finite size of the samples studied . to avoid system - size effects ,periodic boundary conditions are employed .the natural description of the electrostatics in this periodic space is obtained by summation of the charge interactions over periodically replicated simulation cells .this yields the well - known formula of ewald for the coulomb energy of charges in a lattice .ewald summation is widely used in computer simulations of charged systems and is generally believed to give the most accurate description of the electrostatics ( with respect to system - size dependence ) .the ewald formula splits the coulomb energy into two rapidly converging real- and fourier - space lattice sums , where their relative contributions can be controlled by a parameter .however , in numerical implementations one has to apply truncations of the two ( infinite ) lattice sums . in this work ,we develop a quantitative description of the errors arising from this truncation .we then discuss the choice of cutoff distances in real and fourier space with respect to numerical accuracy , resulting in restrictions regarding computational efficiency of ewald - sum implementations .in particular , we analyze the connection between the real - space screening parameter and the number of fourier - space vectors required for a given accuracy .we also derive an approximate upper limit for an efficient choice of the fourier - space cutoff .the ewald - summation formula for the coulomb energy of charges ( with net charge 0 ) at positions can be expressed as a sum over pair interactions and self terms , ~ , \nonumber\\ \label{eq : uew}\end{aligned}\ ] ] where , with the lattice vector chosen such that is a vector in the unit cell .this result holds for a background dielectric constant ( or vanishing dipole moment of the cell ) , as discussed in ref .the effective pair interaction has the following form , where is the volume of the box , erfc is the complementary error function , and .the two lattice sums extend over real and reciprocal ( fourier ) space lattice vectors and , respectively . in most practical applications ,the screening parameter is chosen such that only and a few hundred vectors need to be considered .the convergence parameter can be chosen arbitrarily ( ) , as the choice of gives different weights to the two sums .however , the requirement of numerical accuracy imposes some restrictions on the choice of .any truncation of the two lattice sums in eq .( [ eq : phi ] ) results in deviations from the identity eq .( [ eq : dudeta ] ) .this provides us with a measure for the accuracy of a given implementation characterized by a screening parameter and two cutoff distances and for the real- and fourier - space lattice sums ( , ) .the errors of the total energy and of the single - particle energies are weighted sums of numerical errors in the electrostatic interaction , in most practical cases , we expect a considerably smaller error than indicated by the upper bounds of eqs .( [ eq : utot - mdep ] ) and ( [ eq : using - mdep ] ) .a detailed analysis of the errors in disordered charge configurations has been presented by kolafa and perram . 
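to fix ideas before the error analysis, here is a minimal, self-contained sketch of a truncated ewald sum in the standard textbook form (gaussian units, tin-foil boundary conditions, charge-neutral cubic box); the normalization differs from the reduced-unit convention of the equations above, and the toy charges and cutoffs are arbitrary. the screening parameter, the real-space cutoff and the fourier-space cutoff play exactly the roles analyzed in this paper, and scanning the screening parameter at fixed cutoffs, as in the final lines, is a crude version of the accuracy measure introduced next.

```python
import numpy as np
from itertools import product
from math import erfc, exp, pi, sqrt

def ewald_energy(q, r, L, eta, r_cut, k_cut):
    """truncated ewald energy (textbook gaussian-units, tin-foil form) for point
    charges q at positions r (shape (n,3)) in a cubic box of side L.
    eta: screening parameter; r_cut / k_cut: real- and fourier-space cutoffs."""
    q, r = np.asarray(q, float), np.asarray(r, float)
    n, V = len(q), L**3
    # real-space sum over periodic images within r_cut
    nmax = int(np.ceil(r_cut / L))
    U_real = 0.0
    for i in range(n):
        for j in range(n):
            for cell in product(range(-nmax, nmax + 1), repeat=3):
                if i == j and cell == (0, 0, 0):
                    continue
                d = np.linalg.norm(r[i] - r[j] + L * np.array(cell))
                if d < r_cut:
                    U_real += 0.5 * q[i] * q[j] * erfc(eta * d) / d
    # reciprocal-space sum over k = 2*pi*m/L with 0 < |k| <= k_cut
    mmax = int(np.floor(k_cut * L / (2 * pi)))
    U_recip = 0.0
    for m in product(range(-mmax, mmax + 1), repeat=3):
        if m == (0, 0, 0):
            continue
        k = 2 * pi * np.array(m, float) / L
        k2 = float(k @ k)
        if k2 > k_cut**2:
            continue
        S = np.sum(q * np.exp(1j * r.dot(k)))          # structure factor
        U_recip += (2 * pi / V) * exp(-k2 / (4 * eta**2)) / k2 * abs(S)**2
    U_self = -eta / sqrt(pi) * np.sum(q**2)
    return U_real + U_recip + U_self

# a charge-neutral pair: the exact lattice energy does not depend on eta, so any
# residual eta-dependence of the truncated sum measures the truncation error
q = [1.0, -1.0]
r = [[0.0, 0.0, 0.0], [0.3, 0.1, 0.2]]
for eta in (4.0, 6.0, 8.0):
    print(eta, ewald_energy(q, r, L=1.0, eta=eta, r_cut=0.5, k_cut=2 * pi * 8))
```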
in this work ,we focus on a model- and configuration - independent measure of the numerical accuracy of truncated ewald sums .this general error analysis provides insight into the influence of the ewald - sum parameters , , and ( and , possibly , a real - space cutoff ) .we do not analyze effects of partial error cancellation owing to , e.g. , charge ordering .we will study a system of two particles with charges , at positions and .we will restrict our analysis to the most widely used cubic cell .calculations will be done in reduced coordinates with a lattice constant of 1 , resulting in assuming integer values and .we define as the maximum deviation from the identity eq .( [ eq : dudeta ] ) with respect to vectors in the cell . will serve as our measure for the numerical accuracy of a truncated ewald sum . from eq .( [ eq : uew ] ) we obtain \right .\nonumber\\ & & -\left .\sum_{\scriptstyle { \bf k } \atop \scriptstyle 0<k\leq k } \frac{2\pi}{\eta^3}\exp(-k^2/4\eta^2)\left[1- \exp(i{\bf k}\cdot{\bf r})\right]\right|~. \label{eq : delta}\end{aligned}\ ] ] using the identity eq .( [ eq : dudeta ] ) for the full ewald sum ( ) , we can invert the sign and sum over the complementary - and -space regions and , respectively .we approximate the -space contributions neglected in eq .( [ eq : delta ] ) by an integral , \right\}}\nonumber\\ & & \approx \max_{{\bf r}\in v}\left\ { \frac{1}{\pi\eta^3}\int_{k}^{\infty}dk\,k^2\ , \exp(-k^2/4\eta^2)\left[1-\frac{\sin(kr)}{kr}\right]\right\}~ , \label{eq : deltaint}\end{aligned}\ ] ] noting that .this results in an approximate expression for the neglected -space contributions , we obtain by integration of eq .( [ eq : deltaint ] ) replacing by its maximum value ( , typically , is large ( ) and only one simulation cell is considered in space ( ) . regarding the sign of the real - space contributions for , ~ , \label{eq : deltar}\end{aligned}\ ] ] found to be positive in extensive numerical tests . expressed in terms of theta functions using jacobi s imaginary transformation , we conjecture \geq \exp[-\eta^2 ( x^2 + y^2 + z^2 ) ] - 1~ ,\label{eq : ineq}\end{aligned}\ ] ] where , , and ^ 3 ] .based on an analysis of the dielectric properties of polar fluids in periodic space , neumann and steinhauser derived a measure for the effect of a real - space truncation of ewald sums , which for gives ^ 3 $ ] . for large , the deviation from ideality scales as , similar to what we find based on our analysis of .the dependence of on the -space cutoff is depicted in fig .[ fig : deltamax ] . the curves obtained from the approximation eq .( [ eq : deltaapp ] ) are compared with the maxima calculated from points randomly chosen in a cubic cell .we observe excellent agreement of the approximate formula in the range considered ( ) .( [ eq : deltaapp ] ) can therefore be used for assessing the quality of an implementation ( ) of the ewald sum . an interesting observation from fig .[ fig : deltamax ] is that certain values show a particularly small fourier - space errors for all values studied .the values are characterized by a gap in the lattice , i.e. 
, there do not exist vectors such that .the numerical values are of the form , 14 , 22 , 30 , 38 , and 46 ; although followed by a gap , does not give an optimal cutoff .the inverse of eq .( [ eq : sigma ] ) can be used to obtain the ratio given an error in the -space sum , \ ; ( -\ln\sigma)^{-1/2}~ , \label{eq : siginv}\end{aligned}\ ] ] which is asymptotically correct for small errors , but is already a good approximation for .an important observation is that only ratios enter the formula for the -space error eq .( [ eq : deltak ] ) . correspondingly , to achieve the same accuracy with two values and , the -space cutoff distances have to be chosen proportionally , .the number of vectors for a given value of scales as .thus , the number of vectors required to maintain a given accuracy increases with the third power of the ratios when increasing , from the analysis of the -dependent dielectric constant , neumann proposed a measure for the fourier - space error , which also depends only on and agrees closely with the asymptotic behavior of .we now determine a maximum useful value of , given and .this is obtained from a relation , for which the errors of -space and real - space truncations are equal , such that a further increase in the number of vectors would not significantly reduce the overall error . equating the expressions for the real- and fourier - space errors in a cubic lattice , , we obtain an approximate expression , asymptotically valid for large .( the relative errors of eq .( [ eq : keta ] ) are less than 0.05 and 0.01 for and 5 , respectively . ) in many calculations , a spherical real - space cutoff is introduced in eq .( [ eq : phi ] ) , i.e. , the argument of the sum is multiplied with a unit step function . to find an approximate expression for the numerical error of an implementation where is smaller than half of the box length , we use eq .( [ eq : delta ] ) modified by a function .we approximate the additional real - space contributions to as , which yields ~ , \label{eq : deltaapprc}\end{aligned}\ ] ] analogous to eq .( [ eq : deltaapp ] ) . typically , is large and is chosen smaller than 0.5 , such that the term dominates .we can then invert eq .( [ eq : deltaapprc ] ) to find a generalization of eq .( [ eq : keta ] ) .this gives the -space cutoff at which real- and fourier - space errors are approximately equal , for , we find good agreement for , with the relative error of eq .( [ eq : ketarc ] ) smaller than 2% .we illustrate our error analysis of ewald sums using a study of kusalik , who reports relative errors of dipole - dipole energies for several configurations of a dipolar soft - sphere fluid calculated with , , and and 42 .14 of ref . shows the relative errors ( including the sign ) for the three values as a function of .given , the relative errors are minimal for some values of .the optimal combinations of and the -space cutoff from kusalik s calculations are approximately , and 6 for and 42 .these values are in excellent agreement with those derived from our analysis , with eq .( [ eq : keta ] ) giving and 6.1 for kusalik s values . in an extension of this study ,kusalik reported electrostatic energy , pressure , and dielectric constant of a dipolar soft - sphere system for various ewald - summation parameters ( , , , ) .however , the statistical errors although small do not allow to establish a conclusive picture , since all data are approximately within two estimated standard deviations . 
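the closed-form expressions above do not survive this extraction, but their qualitative content can be illustrated with the common back-of-the-envelope recipe found in standard simulation texts (it is not the optimized relation derived here): choose the screening parameter so that erfc(eta r_c)/r_c is of the order of the target accuracy, and choose the fourier-space cutoff so that the gaussian factor exp(-K^2/4 eta^2) is comparably small. with this recipe the ratio K/eta is fixed by the accuracy alone, so the number of fourier vectors grows as the third power of eta, in line with the scaling discussed above.

```python
from math import erfc, log, sqrt, pi

def choose_ewald_parameters(r_cut, delta, L):
    """rule-of-thumb ewald parameters for a cubic box of side L.
    this is the common heuristic (erfc(eta*r_cut)/r_cut ~ delta in real space,
    exp(-K^2/(4 eta^2)) ~ delta in fourier space), *not* the optimized relation
    derived in the text; it reproduces the same K ~ eta scaling."""
    # bisection for eta on erfc(eta * r_cut) / r_cut = delta (decreasing in eta)
    lo, hi = 1e-3, 100.0 / r_cut
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if erfc(mid * r_cut) / r_cut > delta:
            lo = mid
        else:
            hi = mid
    eta = 0.5 * (lo + hi)
    K = 2.0 * eta * sqrt(-log(delta))             # makes exp(-K^2/(4 eta^2)) = delta
    n_kvec = 4.0 / 3.0 * pi * (K * L / (2 * pi))**3   # rough count of k-vectors used
    return eta, K, n_kvec

for r_cut in (0.3, 0.4, 0.5):                     # reduced units, box length L = 1
    eta, K, nk = choose_ewald_parameters(r_cut, delta=1e-6, L=1.0)
    print(f"r_cut={r_cut}: eta={eta:.1f}, K={K:.1f}, ~{nk:.0f} k-vectors")
```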
large statistical uncertainties of the order of 1020% also prohibit a detailed examination of the errors of the dielectric constant of spc water , as calculated by belhadj et al . . to further demonstrate the quantitative power of the proposed error analysis , we have studied the energies of random configurations of charges in a cubic box ( with net charge 0 ) .energies have been calculated for 10 configurations with and point charges using ewald summation ( and by explicit lattice sums using lattice vectors with ( with the correction for a net dipole moment of the box considered ) .the relative errors in the energy with respect to the lattice sums have been determined for and . for given values of , we determine the screening parameter such that the relative errors assume a minimum .[ fig : keta ] shows the relation between optimal and values together with the derived curve from eq .( [ eq : keta ] ) .we observe excellent qualitative agreement between the derived relation and the observed minima .quantitatively , the results for the random configurations suggest somewhat larger -space cutoff distances for given .however , in most practical applications the charges are more effectively screened by neighboring charges than in random configurations , such that the -space contributions to the energy tend to be smaller , justifying somewhat smaller values .another important point is the relation between and the relative errors in the energy . for ,the results for the random configurations have been used . has been calculated from eqs .( [ eq : deltaapp ] ) and ( [ eq : keta ] ) . fig .[ fig : rhok ] shows minimum values ( for given ) of and as a function of . and closely follow each other , supporting the present error analysis . for the random configurations ,they are proportionally related with a factor of about 100 .the present error analysis has important implications on the choice of the ewald - sum parameters , , and in computer simulations of condensed - matter systems , helping to avoid unnecessary computational effort and minimize the numerical error .the analysis of truncation errors allows to choose , , and on a rational basis . using the accuracy measure , it becomes possible ( i ) to assess the numerical quality of an ewald - sum implementation and ( ii ) to compare different implementations using different parameters .an important application of the ewald - summation error analysis in computer - simulation studies is to optimize the choice of the screening parameter , the real - space cutoff , and the fourier - space cutoff regarding computational speed . using a few typical configurations of the system, one can minimize the computer time for the energy ( or force ) calculation using combinations of and that give the same error . the inversion of eqs .( [ eq : deltaapp ] ) and ( [ eq : deltaapprc ] ) yields the appropriate expressions for , ~,\\ k(\eta,\delta , r_c)&=&\eta\;\sigma^{-1}\!\left\ { \delta - 2\pi^{-1/2}\left[\exp(-\eta^2/4 ) + \exp(-\eta^2 r_c^2)\right]\right\}~,\end{aligned}\ ] ] where is the inverse of , as defined in eq .( [ eq : sigma ] ) . an approximate analytical expression for given in eq .( [ eq : siginv ] ) .this strategy of optimizing pairs along curves of constant error is particularly important in computational studies of large coulombic systems ( e.g. , biomolecules in solution ) , where an overwhelming amount of computer time is spent on the calculation of long - range charge interactions .the author wants to thank a. e. garca and m. 
neumann for many helpful discussions .this work has been funded by the department of energy ( u.s . ) .
|
ewald summation is widely used to calculate electrostatic interactions in computer simulations of condensed-matter systems. we present an analysis of the errors arising from truncating the infinite real- and fourier-space lattice sums in the ewald formulation. we derive an optimal choice for the fourier-space cutoff given a screening parameter. we find that the number of vectors in fourier space required to achieve a given accuracy scales with the third power of the screening parameter. the proposed method can be used to determine computationally efficient parameters for ewald sums, to assess the quality of ewald-sum implementations, and to compare different implementations.
* the numerical accuracy of truncated ewald sums for periodic systems with long-range coulomb interactions *
gerhard hummer, theoretical biology and biophysics group t-10, ms k710, los alamos national laboratory, los alamos, nm 87545, u.s.a. (chemical physics letters: in press, 1995)
|
nonlinear dimensionality reduction ( nldr ) algorithms address the following problem : given a high - dimensional collection of data points , find a low - dimensional embedding ( for some ) which faithfully preserves the ` intrinsic ' structure of the data . for instance , if the data have been obtained by sampling from some unknown manifold perhaps the parameter space of some physical system then might correspond to an -dimensional coordinate system on . if is completely and non - redundantly parametrized by these coordinates , then the nldr is regarded as having succeeded completely .principal components analysis , or linear regression , is the simplest form of dimensionality reduction ; the embedding function is taken to be a linear projection .this is closely related to ( and sometimes identifed with ) classical multidimensional scaling .when there are no satisfactory linear projections , it becomes necessary to use nldr .prominent algorithms for nldr include locally linear embedding , isomap , laplacian eigenmaps , hessian eigenmaps , and many more .these techniques share an implicit assumption that the unknown manifold is well - described by a finite set of coordinate functions .explicitly , some of the correctness theorems in these studies depend on the hypothesis that has the topological structure of a convex domain in some .this hypothesis guarantees that good coordinates exist , and shifts the burden of proof onto showing that the algorithm recovers these coordinates . in this paperwe ask what happens when this assumption fails . the simplest space which challenges the assumption is the circle , which is one - dimensional but requires two real coordinates for a faithful embedding .other simple examples include the annulus , the torus , the figure eight , the 2-sphere , the last three of which present topological obstructions to being embedded in the euclidean space of their natural dimension .we propose that an appropriate response to the problem is to enlarge the class of coordinate functions to include circle - valued coordinates . in a physical setting , circular coordinates occur naturally as angular and phase variables .spaces like the annulus and the torus are well described by a combination of real and circular coordinates .( the 2-sphere is not so lucky , and must await its day . ) the goal of this paper is to describe a natural procedure for constructing circular coordinates on a nonlinear data set using techniques from classical algebraic topology and its 21st - century grandchild , persistent topology .we direct the reader to as a general reference for algebraic topology , and to for a streamlined account of persistent homology .there have been other attempts to address the problem of finding good coordinate representations of simple non - euclidean data spaces .one approach is to use modified versions of multidimensional scaling specifically devised to find the best embedding of a data set into the cylinder , the sphere and so on .the target space has to be chosen in advance .another class of approaches involves cutting the data manifold along arcs and curves until it has trivial topology .the resulting configuration can then be embedded in euclidean space in the usual way . 
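as a concrete illustration of the vietoris-rips construction just defined (the witness complex mentioned above differs only in how candidate simplices are admitted), the brute-force sketch below builds the 2-skeleton, which is all that is needed for 1-dimensional cohomology, from a point cloud; it is cubic in the number of points and is meant only to make the definition explicit.

```python
import numpy as np
from itertools import combinations
from scipy.spatial.distance import pdist, squareform

def rips_two_skeleton(points, epsilon):
    """2-skeleton of the vietoris-rips complex R_epsilon: a simplex is included
    exactly when all of its vertices lie pairwise within distance epsilon."""
    d = squareform(pdist(np.asarray(points, float)))
    n = len(points)
    vertices = list(range(n))
    edges = [(i, j) for i, j in combinations(range(n), 2) if d[i, j] <= epsilon]
    edge_set = set(edges)
    triangles = [t for t in combinations(range(n), 3)
                 if all(pair in edge_set for pair in combinations(t, 2))]
    return vertices, edges, triangles

# example: 40 noisy points on a circle; for a suitable epsilon the complex is an annulus
theta = np.linspace(0, 2 * np.pi, 40, endpoint=False)
cloud = np.c_[np.cos(theta), np.sin(theta)] + 0.05 * np.random.default_rng(1).normal(size=(40, 2))
V, E, T = rips_two_skeleton(cloud, epsilon=0.45)
print(len(V), "vertices,", len(E), "edges,", len(T), "triangles")
```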
in our approach , the number of circular coordinates is not fixed in advance , but is determined experimentally after a persistent homology calculation . moreover , there is no cutting involved ; the coordinate functions respect the original topology of the data . the principle behind our algorithm is the following equation from homotopy theory , valid for topological spaces with the homotopy type of a cell complex ( which covers everything we normally encounter ) : $[ x , s^1 ] = { \operatorname{\mathrm{h}}}^1(x ; { \mathbb{z}})$ . the left - hand side denotes the set of equivalence classes of continuous maps from to the circle ; two maps are equivalent if they are homotopic ( meaning that one map can be deformed continuously into the other ) ; the right - hand side denotes the 1-dimensional cohomology of , taken with integer coefficients . in other language : is the classifying space for , or equivalently is the eilenberg maclane space . see section 4.3 of . if is a contractible space ( such as a convex subset of ) , then and equation tells us not to bother looking for circular functions : all such functions are homotopic to a constant function . on the other hand , if has nontrivial topology then there may well exist a nonzero cohomology class in ${ \operatorname{\mathrm{h}}}^1(x ; { \mathbb{z}})$ . our strategy divides into the following steps . 1 . represent the given discrete data set as a simplicial complex or filtered simplicial complex . 2 . use persistent cohomology to identify a ` significant ' cohomology class in the data . for technical reasons , we carry this out with coefficients in the field of integers modulo , for some prime . this gives us a class in ${ \operatorname{\mathrm{h}}}^1(x ; { \mathbb{f}}_p)$ . 3 . lift this class to a cohomology class with integer coefficients : a class in ${ \operatorname{\mathrm{h}}}^1(x ; { \mathbb{z}})$ which lies in the image of the natural homomorphism . define on the vertices of by setting to be mod . for each edge , we have which is congruent to mod , since is an integer . it follows that can be taken to map linearly onto an interval of signed length . since is a cocycle , can be extended to the triangles as before ; then to the higher cells . proposition [ prop : real ] suggests the following tactic : from an integer cocycle we construct a cohomologous real cocycle , and then define mod on the vertices of . if we can construct so that the edge - lengths are small , then the behaviour of will be apparent from its restriction to the vertices . see section [ sec : smooth ] . we now begin describing the workflow in detail . the input is a point - cloud data set : in other words , a finite set or more generally a finite metric space . the first step is to convert into a simplicial complex and to identify a stable - looking integer cohomology class . this will occupy the next three subsections . the first lesson of point - cloud topology is that point - clouds are best represented by 1-parameter nested families of simplicial complexes . there are several candidate constructions : the vietoris rips complex has vertex set and includes a -simplex whenever all vertices lie pairwise within distance of each other . the witness complex uses a smaller vertex set and includes a -simplex when the vertices lie close to other points of , in a certain precise sense ( see ) .
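the vietoris rips construction just described is straightforward to sketch . the helper below , written as an illustration rather than as the paper 's implementation , builds the 2-skeleton ( vertices , edges , triangles ) of the rips complex at a single scale from a distance matrix ; it is a naive o(n^3) loop and does not cover the witness - complex variant .

```python
import numpy as np
from itertools import combinations
from scipy.spatial.distance import pdist, squareform

def rips_2_skeleton(points, epsilon):
    """2-skeleton of the Vietoris-Rips complex at scale epsilon: an edge or
    triangle enters when all of its vertices lie pairwise within distance
    epsilon of each other."""
    d = squareform(pdist(points))
    n = len(points)
    vertices = list(range(n))
    edges = [(i, j) for i, j in combinations(range(n), 2) if d[i, j] <= epsilon]
    edge_set = set(edges)
    triangles = [(i, j, k) for i, j, k in combinations(range(n), 3)
                 if (i, j) in edge_set and (i, k) in edge_set and (j, k) in edge_set]
    return vertices, edges, triangles

# example: points on a circle produce a complex with a single 1-dimensional hole
theta = np.linspace(0.0, 2.0 * np.pi, 40, endpoint=False)
pts = np.c_[np.cos(theta), np.sin(theta)]
v, e, t = rips_2_skeleton(pts, 0.4)
print(len(v), len(e), len(t))
```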
in both cases , whenever . either of these constructions will serve our purposes , but the witness complex has the computational advantage of being considerably smaller . we determine only up to its 2-skeleton , since we are interested in . having constructed a 1-parameter family , we apply the principle of persistence to identify cocycles that are stable across a large range for . suppose that are the critical values where the complex gains new cells . the family can be represented as a diagram of simplicial complexes and inclusion maps . for any coefficient field , the cohomology functor converts this diagram into a diagram of vector spaces and linear maps over ; the arrows are reversed : according to the theory of persistence , such a diagram decomposes as a direct sum of 1-dimensional terms indexed by half - open intervals of the form . each such term corresponds to a cochain that satisfies the cocycle condition for and becomes a coboundary for . the collection of intervals can be displayed graphically as a persistence diagram , by representing each interval as a point in the cartesian plane above the main diagonal . we think of long intervals as representing trustworthy ( i.e. stable ) topological information . choice of coefficients . the persistence decomposition theorem applies to diagrams of vector spaces over a field . when we work over the ring of integers , however , the result is known to fail : there need not be an interval decomposition . this is unfortunate , since we require integer cocycles to construct circle maps . to finesse this problem , we pick an arbitrary prime number ( such as ) and carry out our persistence calculations over the finite field . the resulting cocycle must then be converted to integer coefficients : we address this in section [ sec : lift ] . in principle we can use the ideas in to calculate the persistent cohomology intervals and then select a long interval and a specific . we then let and take to be the cocycle in corresponding to the interval . explicitly , persistent cocycles can be calculated in the following way . we thank dmitriy morozov for this algorithm . suppose that the simplices in the filtered complex are totally ordered , and labelled so that arrives at time . for we maintain the following information : * a set of indices associated with ` live ' cocycles ; * a list of cocycles in . the cocycle involves only and those simplices of the same dimension that appear later in the filtration sequence ( thus only with ) . initially and the list of cocycles is empty . to update from to , we compute the coboundaries of the cocycles of within the larger complex obtained by including the simplex . in fact , these coboundaries must be multiples of the elementary cocycle supported on the new simplex . if all the are zero , then we have one new cocycle : let and define . the case where some of the coboundaries are nonzero is covered in the sketch below . we define smoothness . each of the spaces comes with a natural euclidean metric : a circle - valued function is ` smooth ' if its total variation across the edges of is small . the terms capture the variation across individual edges ; therefore what we must minimize is . let . there is a unique solution to the least - squares minimization problem ; moreover , is characterized by the equation , where is the adjoint of with respect to the inner products on . note that if then for any we have which implies that such an must be the unique minimizer .
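the description of the update step is cut off in this copy before the case where some of the coboundary values are nonzero . the sketch below therefore follows the standard persistent - cohomology update ( the youngest live cocycle with a nonzero value on the new simplex dies and is used to repair the others ) ; that choice , the data layout and the function names are assumptions of this sketch , not a quotation of the paper 's pseudocode . coefficients are taken modulo a prime p , as suggested above .

```python
# simplices are tuples of vertex ids listed in increasing order;
# a cocycle is stored as a dict {arrival_index_of_simplex: coefficient mod p}

def coboundary_value(alpha, simplex, index_of):
    """Evaluate (d alpha) on `simplex`: the alternating sum of alpha over its
    codimension-1 faces (faces not yet in the filtration contribute 0)."""
    total = 0
    for i in range(len(simplex)):
        face = simplex[:i] + simplex[i + 1:]
        total += (-1) ** i * alpha.get(index_of.get(face, -1), 0)
    return total

def persistent_cocycles(filtration, p):
    """filtration: list of simplices in order of arrival. Returns (birth, death)
    index pairs and the cocycles still alive at the end of the filtration."""
    index_of = {}                      # simplex -> arrival index
    live = []                          # list of (birth_index, cocycle dict)
    intervals = []
    for j, sigma in enumerate(filtration):
        index_of[sigma] = j
        vals = [coboundary_value(a, sigma, index_of) % p for _, a in live]
        nonzero = [k for k, c in enumerate(vals) if c != 0]
        if not nonzero:
            live.append((j, {j: 1}))   # a new cocycle is born on sigma
        else:
            # assumption: the youngest cocycle with a nonzero value dies at j
            k_star = max(nonzero, key=lambda k: live[k][0])
            birth, alpha_star = live[k_star]
            inv = pow(vals[k_star], p - 2, p)      # inverse of c_star mod p
            for k in nonzero:
                if k == k_star:
                    continue
                alpha_k = live[k][1]
                for idx, coeff in alpha_star.items():
                    alpha_k[idx] = (alpha_k.get(idx, 0) - vals[k] * inv * coeff) % p
            intervals.append((birth, j))
            live.pop(k_star)
    intervals.extend((birth, None) for birth, _ in live)   # essential classes
    return intervals, live

# usage: a filtration of the boundary of a triangle (topologically a circle)
filtration = [(0,), (1,), (2,), (0, 1), (0, 2), (1, 2)]
print(persistent_cocycles(filtration, p=47))
```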
for existence , note that certainly has a solution if . but this is a standard fact in finite - dimensional linear algebra : for any real matrix ; this follows from the singular value decomposition , for instance . remark . it is customary to construct the laplacian . the twin equations and immediately imply ( and conversely , can be deduced from ) the single equation ; in other words is harmonic . the least - squares problem in equation can be solved using a standard algorithm such as lsqr . by proposition [ prop : real ] we can use the solution parameter to define the circular coordinate on the vertices of . this works because the original cocycle has integer coefficients . more generally , if is an arbitrary real cocycle whose class lies in the image of the natural map ${ \operatorname{\mathrm{h}}}^1(x ; { \mathbb{z } } ) \to { \operatorname{\mathrm{h}}}^1(x ; { \mathbb{r}})$ , then the same construction applies . to each coordinate . a rips complex was constructed with maximal radius 0.5 , resulting in 23475 simplices . the computation of cohomology finished in 237 seconds . parametrizing at 0.4 yielded a single coordinate function , which very closely reproduces the tautological angle function . parametrizing at 0.14 yielded several possible cocycles . we selected one of those with low persistence ; this produced a parametrization which ` snags ' around a small gap in the data . see figure [ fig : noisycircle ] . the left panel in each row shows the histogram of coordinate values ; the middle panel shows the correlation scatter plot against the known angle function ; the right panel displays the coordinate using color . the high - persistence ( ` global ' ) coordinate correlates with the angle function with topological degree 1 . variation in that coordinate is uniformly distributed , as seen in the histogram . in contrast , the low - persistence ( ` local ' ) coordinate has a spiky distribution . another example with circle topology : see figure [ fig : trefoil ] . we picked 400 points distributed along the torus knot on a torus with radii 2.0 and 1.0 . we jittered them by a uniform random variable . a rips complex was constructed with maximal radius 0.5 , resulting in 76763 simplices . the cohomology was computed in 378 seconds . disjoint circles : 400 points were distributed on circles of radius 1 centered around in the plane . these points were subsequently disturbed by a uniform random variable added to each coordinate . we constructed a rips complex with maximal radius , resulting in 61522 simplices . the corresponding cohomology was computed in 209 seconds . the two inferred coordinates in this ( fairly typical ) experimental run recover the original coordinates essentially perfectly : the first inferred coordinate correlates with the meridional coordinate with topological degree , while the second inferred coordinate correlates with the longitudinal coordinate with degree . when the original coordinates are unavailable , the important figure is the inferred - versus - inferred scatter plot . in this case the scatter plot is fairly uniformly distributed over the entire coordinate square ( i.e. torus ) . in other words , the two coordinates are decorrelated . this is slightly truer ( and more clearly apparent in the scatter plot ) for the two original coordinates . contrast these with the corresponding scatter plots for a pair of circles ( conjoined or disjoint ) . see figure [ fig : elliptic ] .
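the smoothing step described above minimizes the variation of the cocycle over a real 0 - cochain and then reduces the solution mod 1 on the vertices . the sketch below does this with scipy 's lsqr on the vertex - to - edge coboundary matrix ; the data layout , orientation convention and function name are assumptions of this illustration , not the paper 's code .

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import lsqr

def circular_coordinate(n_vertices, edges, alpha):
    """Given an integer 1-cocycle `alpha` (one value per edge, each edge
    oriented from lower to higher vertex id), solve the least-squares problem
    min_f || alpha - d0 f ||_2 and return theta = f mod 1 on the vertices."""
    m = len(edges)
    d0 = lil_matrix((m, n_vertices))     # coboundary: (d0 f)(u, v) = f[v] - f[u]
    for row, (u, v) in enumerate(edges):
        d0[row, u] = -1.0
        d0[row, v] = 1.0
    f = lsqr(d0.tocsr(), np.asarray(alpha, dtype=float))[0]
    return np.mod(f, 1.0)

# toy example: a 4-cycle carrying a cocycle with winding number 1
edges = [(0, 1), (1, 2), (2, 3), (0, 3)]
alpha = [0, 0, 0, -1]                    # one full turn around the loop
print(circular_coordinate(4, edges, alpha))  # four roughly equally spaced angles
```

lsqr also handles the rank deficiency here : f is only determined up to an additive constant ( a global rotation of the circle ) , and the minimum - norm least - squares solution fixes that constant .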
for fun , we repeated the previous experiment with a torus abstractly defined as the zero set of a homogeneous cubic polynomial in three variables , interpreted as a complex projective curve . we picked 400 points at random on , subject to the cubic equation . to interpret these as points in , we used the projectively invariant metric for all pairs . with this metric we built a rips complex with maximal radius 0.15 . the resulting complex had 44184 simplices , and the cohomology was computed in 56 seconds . we found two dominant coclasses that survived beyond radius 0.15 , and we computed our parametrizations at the 0.15 mark . the resulting correlation plot quite clearly exhibits the decorrelation which is characteristic of the torus . see figure [ fig : doubletorus ] . we constructed a genus-2 surface by generating 1600 points on a torus with inner and outer radii 1.0 and 3.0 ; slicing off part of the data set by a plane at distance 3.7 from the axis of the torus , and reflecting the remaining points in that plane . the resulting data set has 3120 points . out of these , we pick 400 landmark points , and construct a witness complex with maximal radius 0.6 . the landmark set yields a covering radius and a complex with 70605 simplices . the computation took 748 seconds active computer time . we identified the four most significant cocycles . note that coordinates 1 and 4 are ` coupled ' in the sense that they are supported over the same subtorus of the double torus . the scatter plot shows that the two coordinates appear to be completely decorrelated except for a large mass concentrated at a single point . this mass corresponds to the other subtorus , on which coordinates 1 and 4 are essentially constant . a similar discussion holds for coordinates 2 and 3 . the uncoupled coordinate pairs ( 1,2 ) , ( 1,3 ) , ( 2,4 ) , ( 3,4 ) produce scatter plots reminiscent of two conjoined or disjoint circles . we are immensely grateful to dmitriy morozov : he has given us considerable assistance in implementing the algorithms in this paper . in particular we thank him for the persistent cocycle algorithm . thanks also to jennifer kloke for sharing her analysis of a visual image data set ; this example did not make the present version of this paper . finally , we thank gunnar carlsson , for his support and encouragement as leader of the topological data analysis research group at stanford ; and robert ghrist , as leader of the darpa - funded project sensor topology and minimal planning ( stomp ) . m. belkin and p. niyogi . laplacian eigenmaps and spectral techniques for embedding and clustering . in t. dietterich , s. becker , and z. ghahramani , editors , _ advances in neural information processing systems 14 _ , pages 585 - 591 . mit press , cambridge , massachusetts , 2002 . m. dixon , n. jacobs , and r. pless . finding minimal parameterizations of cylindrical image manifolds . in _ cvprw 06 : proceedings of the 2006 conference on computer vision and pattern recognition workshop _ , page 192 , washington , dc , usa , 2006 . ieee computer society .
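the knot experiment described earlier samples jittered points along a torus knot ( the figure label suggests the trefoil ) ; a sketch of that kind of data generation follows . the ( p , q ) = ( 2 , 3 ) choice and the jitter width are assumptions of this sketch ; the radii 2.0 and 1.0 and the sample size 400 are the values quoted above .

```python
import numpy as np

rng = np.random.default_rng(1)

def torus_knot_sample(n=400, p=2, q=3, R=2.0, r=1.0, jitter=0.1):
    """n points along a (p, q) torus knot on the torus with major radius R and
    minor radius r, with independent uniform jitter in each coordinate."""
    t = rng.uniform(0.0, 2.0 * np.pi, size=n)
    x = (R + r * np.cos(q * t)) * np.cos(p * t)
    y = (R + r * np.cos(q * t)) * np.sin(p * t)
    z = r * np.sin(q * t)
    pts = np.c_[x, y, z]
    return pts + rng.uniform(-jitter, jitter, size=pts.shape)

points = torus_knot_sample()
print(points.shape)      # (400, 3) -- input for the rips complex construction
```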
|
nonlinear dimensionality reduction ( nldr ) algorithms such as isomap , lle and laplacian eigenmaps address the problem of representing high - dimensional nonlinear data in terms of low - dimensional coordinates which represent the intrinsic structure of the data . this paradigm incorporates the assumption that real - valued coordinates provide a rich enough class of functions to represent the data faithfully and efficiently . on the other hand , there are simple structures which challenge this assumption : the circle , for example , is one - dimensional but its faithful representation requires two real coordinates . in this work , we present a strategy for constructing circle - valued functions on a statistical data set . we develop a machinery of persistent cohomology to identify candidates for significant circle - structures in the data , and we use harmonic smoothing and integration to obtain the circle - valued coordinate functions themselves . we suggest that this enriched class of coordinate functions permits a precise nldr analysis of a broader range of realistic data sets .
|